MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_10_Quiz_1_Review.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: And so we started with something a little boring, basic definitions. And the first thing we had to do is to understand how we measure output at the aggregate level. It's very easy to understand what output is at the level of an individual factory, but at the aggregate level, it's a little tricky. And so we had an example of a very simple economy with two companies, one that produces steel and the other one that produces cars. And in this particular example, the steel company doesn't sell anything to the final consumers, it sells all its production to the car company. And we ask the question, what is the GDP of this economy? The simplest answer would have been, well, 300. I summed the output of the two companies, and that could be one answer. But then I showed you, through three different methods, that that's the wrong answer. Method 1 was the definition: GDP is the value of final goods only. And final goods in this simple example-- well, this company is not producing anything as a final good, because all its sales are going as an input into the other company's production. And so this one doesn't count at all in our simple example, this one counts, and then the answer is $200. Not 300, but $200. Method 2 was to count only the value added in each company. And value added is the difference between the final output-- that is, the revenue from sales-- minus whatever that company spends on intermediate inputs. In this simple example, the steel company is not spending anything on intermediate inputs-- it's a strange production of steel, but anyways, it is quite a decent example. And so this $100 is completely value added. There are no expenses on intermediate inputs. For the car company, however, the revenue from sales is 200, but the company spends 100 on intermediate inputs. Therefore, the value added of this company is 200 minus 100, so you get 100 of value added from this one, 100 of value added from that one, total value added, 200. So same answer. And the third method-- the two methods that I just described are production methods. You're measuring the production side. The alternative is to look at the income side. And the income side says, let's sum all the incomes in the economy. And the incomes are income to workers, wages, and income to the owners of capital, profits. Income to workers is $80 plus 70, is 150. Income to owners of capital is 20 plus 30, that's 50. So 150 plus 50 is, again, 200. So these are three equivalent ways of measuring output. And one of the features I showed you of these methods is that they are immune to the organizational structure within the economy. So for example, if these two companies were to merge, clearly the sum of incomes would not change. It would still be 200. This one would not change, because if they were to merge, then the whole production-- the revenues from sales of the car company-- would be value added. Everything would be produced in-house, and still, the answer would be 200, because this company would disappear, it would be merged in here, and you would still get 200. And the same happens with method 1, because still, the sales of final goods are only 200. The naive approach of just summing output would be terrible, because once you merge them, output would collapse from 300 to 200. That tells you that's not the right way of doing things.
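To make the three methods concrete, here is a small Python sketch of the two-firm example; the numbers are exactly the ones quoted above.

```python
# Two-firm economy from the lecture: steel sells all its output to the car company.
steel = {"sales": 100, "intermediate": 0, "wages": 80, "profits": 20}
cars = {"sales": 200, "intermediate": 100, "wages": 70, "profits": 30}

# Method 1: value of final goods only (all steel is an intermediate input).
final_goods = cars["sales"]

# Method 2: value added = revenue from sales minus spending on intermediate inputs.
value_added = sum(f["sales"] - f["intermediate"] for f in (steel, cars))

# Method 3: income side = wages plus profits, summed over both companies.
income = sum(f["wages"] + f["profits"] for f in (steel, cars))

print(final_goods, value_added, income)  # 200 200 200 -- all three methods agree
```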
So the three methods we proposed do work and are immune to these changes in the organizational structure. The next step was to highlight that when we say output, we're really after real output. And there is a distinction between nominal output and real output. Nominal output is simply the quantity of final goods measured at current prices, while real output is measured at some fixed set of prices of one fixed year. And I think I gave you an example. This is an example I gave you, and then in the P-sets, you had more complicated examples with multiple goods. Here, you have an economy that produces only one good, cars, and it produces 10 cars here, 12 cars, 13 cars here. The price of the cars is rising, so nominal GDP is rising a lot while real GDP is rising less. How do we measure real GDP here? In this particular example, we use the prices here: 10 times the price of the car in 2012, which is 24,000, that's 240. Obviously, for the base year, nominal GDP is the same as real GDP. And then for 2013 it's 13, not times 26,000, but times 24,000, and we get that. Now, in this particular example of only one good, you can pick any base year and you'll get exactly the same rate of growth of real output. If you have multiple goods, that's not true, because the relative prices of goods are moving over time, but that's the basic idea. So-- I mean, again, you should know these things-- it's not going to be tremendously important in the quiz, but it will show up in your quiz. Then we went through some definitions. The unemployment rate being the number of unemployed over the labor force, not the population, that's important. We talked about the inflation rate as well. And that's the rate of change of prices, and there are different prices in the economy. One of them is the deflator, the other one is the CPI, and so on and so forth. That's it. So that was the first lecture relevant for the quiz. Any questions about that? Good. Keep moving.
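And here is the one-good real-versus-nominal calculation as a quick sketch. The slide itself isn't reproduced in the transcript, so the quantities and prices below simply follow the numbers quoted above: 10 cars at $24,000 in the 2012 base year, 13 cars at $26,000 in 2013.

```python
# One good (cars), 2012 as the base year, numbers as quoted in the lecture.
data = {2012: {"qty": 10, "price": 24_000},
        2013: {"qty": 13, "price": 26_000}}
BASE = 2012

for year, d in data.items():
    nominal = d["qty"] * d["price"]        # quantities at current-year prices
    real = d["qty"] * data[BASE]["price"]  # quantities at fixed base-year prices
    print(year, f"nominal={nominal:,}", f"real={real:,}")
# 2012: nominal=240,000  real=240,000  (base year: nominal equals real)
# 2013: nominal=338,000  real=312,000  (real grows 30%, nominal about 41%)
```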
OK, then we began to really get serious, because we began to construct a foundation for the IS-LM model. And the first thing we did is we looked at the goods market. And what we did here is we said-- we described the different components of aggregate demand, and we said, in this economy-- for now, at least, we're going to make this economy closed, so we remove exports and imports, and for your quiz, you're absolutely not going to see anything about exports or imports. So this is your aggregate demand. We wanted to build a little more, so we had to have some behavioral assumptions. We made it initially very simple. We assumed this was exogenous, that government expenditure was exogenous. Taxes were also exogenous. And the only behavioral equation we had was this consumption function, and we said consumption is increasing in disposable income. So we assumed something linear like this. Disposable income is just income minus taxes. And remember-- from the alternative ways of measuring GDP, income is the same as output. So I say income, because that's what is relevant for the consumers' world, but it's the same as output. So that was our consumption function. It was upward-sloping because there is a marginal propensity to consume, c1. And then a key assumption of this part of the course is that output is aggregate demand-determined. Prices were completely fixed, and we said, well, output is whatever demand wants, that's what output is. So this is an equilibrium condition. This is the aggregate demand, this is an equilibrium condition, so we can solve out, because I can say in equilibrium, Z is equal to Y, and I can solve for equilibrium output from that equation. And that's exactly what we did in this slide, and you got to an expression like this. Knowing how to do that is very important for you. So you'd better be sure you know how to find equilibrium output in this model. I mean, it's going to be very difficult to do IS-LM if you don't know these steps, so you'd better know this stuff. And remember something-- we call this guy here, in the simple economy, the multiplier. Why the multiplier? Well, because given something we call exogenous expenditure, 1 over 1 minus c1 multiplies that. If the marginal propensity to consume is very high-- say it's close to 1-- then the multiplier is very, very high. If the marginal propensity to consume, say, is 0.5, then how much is the multiplier? 2. OK, good. So the multiplier is 2. Good. And that was our equilibrium. We had the aggregate demand. The slope was less than the 45-degree line, because c1 is a number less than 1, and so you have some equilibrium output there. That's the equilibrium output. At this point, aggregate demand is equal to-- well, aggregate demand is equal to aggregate supply, that's always true, but that's consistent also with the function of aggregate demand. And important for this equilibrium output is that the equilibrium output is a function of a lot of things that we took as parameters in this aggregate demand curve. What did we take as parameters in the aggregate demand curve? Just give me examples. Well, investment, government expenditure, and taxes at the very least. Also parameters like autonomous consumption, that c0 we're taking as given. If any of those things move, the position of the aggregate demand curve will shift around. And that was one example. Suppose autonomous consumption c0 goes up. So suddenly consumers decide to spend more. Well, then what we have is that aggregate demand shifts up and equilibrium output ends up changing by more than the initial change in c0. Why is that? So this is the change in c0, and the initial change in c0 leads to an initial change in output, which is equal to c0. That's up to here. But then we end up with a final equilibrium output that is higher than the initial response. All this happens infinitely fast in this model. Why is this change greater than c0? There is a multiplier in front. Exactly. We change c0 by 1, but then you have to multiply it by 1 over 1 minus c1. And that's what we illustrated in this picture there. OK, good. And so you should move everything around-- move G up, T up, and stuff like that, and see what happens.
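Writing the solution out once, since the lecture insists you know how to do it: impose the equilibrium condition Z = Y on the aggregate demand just described,

```latex
Z = c_0 + c_1 (Y - T) + \bar{I} + G, \qquad Z = Y
\;\Longrightarrow\;
Y = \frac{1}{1 - c_1}\,\bigl(c_0 + \bar{I} + G - c_1 T\bigr),
```

where 1/(1 - c_1) is the multiplier; with c_1 = 0.5 it equals 2, as in the question above.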
The last thing I did in this section is I showed you an alternative way-- an entirely equivalent way-- of illustrating equilibrium, which was saving equal to investment. Remember, I derived this, and I got to an expression like that. That's exactly the same as aggregate demand equal to aggregate supply. Investment, which in this particular basic model is fixed, is equal to saving by the government, which in this basic model is also fixed, because it's G minus T, which is fixed-- sorry, it's T minus G, which is fixed-- and then private saving. And then I showed you an interesting result, which is known as the paradox of savings, which says the following. If, for whatever reason, consumers decide to save more-- say, for example, because c0 now comes down. So now they have a certain income. Out of that same income, they want to save more. Then from this very simple equation, I know that-- what happens to output? Why? AUDIENCE: Because savings go up, consumer demand goes down. And then also, investments will also go down. And then-- RICARDO CABALLERO: No. Investment doesn't go down here, because it's fixed. In this basic example, not IS-LM. AUDIENCE: [INAUDIBLE] RICARDO CABALLERO: Yes, but that's an explanation-- which is the right explanation-- but it's the explanation in the other space, output and income. I wanted it in the space of saving and investment. So let me give it to you very quickly. Your answer is correct, but it's not what I wanted here, because what I wanted to say is the following. If, for whatever reason, for any given level of income, savings go up, then we have an imbalance. Total saving is greater than investment. The only variable that can adjust here, so we restore equilibrium-- investment equal to saving-- is for output to come down, because if output comes down, savings come down, and that's the way you restore equilibrium. I told you, this way of looking at things is entirely equivalent to what we have already done. So I can also do what you wanted to do, which is represent that in the space of aggregate demand and output, or income. And a reduction in c0 would lead to a decline in aggregate demand and then, through the multiplier, a larger decline in output. So this is the way we characterized it before; this is a slightly different way of characterizing it, which is what gives rise to what is called the paradox of savings, because suddenly you decide to save more, and supposedly that should be good. Well, in the short run it's not really good, it causes a recession. Anyways. It's cute, but it may show up in your future, so I wanted to remind you. So that was the goods market side.
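The paradox is easy to verify numerically. A minimal sketch, with all parameter values made up for illustration:

```python
# Paradox of savings: consumers try to save more (c0 falls), with I, G, T fixed.
c1, I, G, T = 0.5, 50.0, 40.0, 40.0

def equilibrium(c0):
    Y = (c0 + I + G - c1 * T) / (1 - c1)  # equilibrium output
    C = c0 + c1 * (Y - T)                 # consumption function
    return Y, Y - T - C                   # output and private saving

for c0 in (30.0, 20.0):
    Y, S = equilibrium(c0)
    print(f"c0={c0:.0f}: Y={Y:.0f}, private saving={S:.0f}")
# c0=30: Y=200, private saving=50
# c0=20: Y=180, private saving=50 -- output falls, but in equilibrium private
# saving must still equal I - (T - G) = 50: trying to save more just lowers Y.
```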
Oops. Then we looked at financial markets, and we trivialized financial markets, really. We said let's assume the financial markets are very, very simple. Money and bonds. That's it. Nothing else. And the only behavioral equation we really had here was money demand, and we said, well, money demand is increasing in nominal GDP, because if nominal GDP is larger, then you need to do more transactions, you need more money, more cash. Cash or deposits, but here, we're looking only at cash. But it's decreasing in the interest rate. Why is money demand decreasing in the interest rate? The interest rate is the return on the bonds. Why is money demand decreasing in the interest rate? Yes. The opportunity cost of holding cash in your pocket is higher. You didn't care about this stuff a year ago. But now, it costs you 5% to hold cash. That's what you get on a 1-year US Treasury bond at this moment. So it's more significant. Maybe it's not that relevant for you, but for a corporation, it makes a big difference, I guarantee you. Instead of keeping the thing in the checking account, now they're really buying short-term Treasuries and stuff like that. Good. So that's the reason this is downward-sloping. And that's the concept here. So then, what the central bank controls is money-- how much money it injects into the economy. Let me say just that for now. And so that's the money supply. So the equilibrium interest rate is simply the point at which money demand is equal to the exogenous money supply. And I said, in the modern world, the central banks don't tell you the money supply M. They tell you, this is the interest rate we want, and then they provide whatever M they need in order to get the interest rate they have told you they want to have. So that's the case of an expansionary monetary policy. Suppose the Fed wants to lower the interest rate from here to here. Well, what it needs to do is increase money. And increasing money means it goes out there, an open market operation, and buys bonds from the private sector. It buys bonds, takes bonds in, and gives them cash. Money. That's an expansionary monetary policy, and an expansionary monetary policy will lower the interest rate. That's an open market operation. So what we just saw was exactly that. The Fed wants to lower the interest rate. What it does is it goes out there, it buys bonds from the private sector, so its balance sheet on the asset side has more bonds now, but it has more liabilities, because it gives cash to people, and that's a liability of the central bank. So that's an open market operation. That's an expansionary open market operation, which is designed to lower the interest rate. Then I talked about the relationship between the interest rate and the price of the bond. And that's the return on a bond. It's the face value of the bond-- what you get when the bond matures; say it's 100, it's a bond for 100-- minus whatever you pay, divided by whatever you pay. So say you pay today $95 for a bond that will pay you $100 a year from now; that's approximately a 5% interest rate. It's a little more, but that's about it. Which also helps you understand a little bit what happens during an open market operation. In an open market operation, an expansionary monetary policy, the central bank goes out there and buys bonds. What typically happens to the price of a good or an asset that is being bought by somebody big? Goes up or down? Now we have a big buyer out there that goes and buys bonds. Do you think the price of bonds will go up or down? Up. A big buyer got into the market to buy bonds, the price of bonds goes up. But if the price of bonds goes up, that means the interest rate goes down. So that's an intuitive way of understanding how monetary policy lowers interest rates. It's a big buyer buying bonds; the price of bonds will go up, but the interest rate and the price of the bond are inversely related. You can see that. Now suppose that the initial price of the bond was 95, and now the price of the bond goes to 100; the interest rate goes from a little more than 5% to 0%. Good.
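The bond arithmetic from this passage, as a one-function check (a one-year discount bond with face value 100, as in the example):

```python
# Interest rate implied by the price of a one-year discount bond.
def implied_rate(price, face=100.0):
    return (face - price) / price

print(f"{implied_rate(95):.2%}")   # 5.26% -- "a little more" than 5%
print(f"{implied_rate(100):.2%}")  # 0.00% -- price bid up to face value
# A big buyer pushes the price from 95 toward 100, and the rate falls toward 0.
```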
Then we talked about intermediaries. Forget it for now. So then we got into two lectures about the basic IS-LM model, and then we did one more on the extended IS-LM model. And I told you that at least two-thirds of your quiz will be about this. And I already know what is in the quiz, and I cannot tell you-- I honor my commitment. So you'd better understand the IS-LM model very, very well. Now, understanding the IS-LM model also means understanding the previous two lectures, because we were building the IS-LM model there. So the first thing we did here is we said, well, to make this stuff a little more interesting-- we already had a model in which we could find equilibrium output. Remember, that was in lecture 3, we had that. But we took many things as exogenous there that are really not exogenous in practice. In particular, private investment. Private investment is certainly something that responds to aggregate activity and to the cost of borrowing and things of that nature. So the first thing we did here is we changed the investment function from some constant to something that was a function of output and the interest rate. The fact that it was increasing in output just increases the multiplier, but it doesn't change anything qualitatively in the analysis. But the fact that it depends on the interest rate is important, because now we have the interest rate as a parameter in the aggregate demand curve. When you solve out the whole thing, the interest rate is one of the things that can move aggregate demand around. And that's important, because now you can begin to see the connection between what the central bank does and how it affects aggregate activity, because what the central bank does is affect the interest rate. The central bank cannot go out there and buy hamburgers, as I said. It can go out there and buy bonds, and with that, it affects the interest rate. And for that to matter for the economy, not only for bondholders, it had better be the case that that interest rate matters for the equilibrium level of output. And it does so by affecting real investment. So that's the mechanism through which monetary policy affects real activity. It's through the cost of borrowing. In reality, consumers are also affected by the interest rate and so on, but let's keep things simple and have only investment as a function of the interest rate. And very importantly, it's a decreasing function of the interest rate. The higher the interest rate, the lower is investment for any given level of output, because it's more costly to borrow to fund that investment. So that gives us our IS curve, which is the combination of output and interest rate that are consistent with equilibrium in the goods market-- that is, when output is equal to aggregate demand. So that point belongs to one IS for one interest rate here. So how do we construct the IS? Well, we start moving the interest rate. So suppose we start from this one point in the IS, the point I just showed you. Suppose that now we increase the interest rate; we look at the new equilibrium output. Well, that also belongs to this IS. And you can keep moving the interest rate around. So you move Z around only by moving the interest rate. Don't move G, T, or anything else. Only by moving the interest rate, and then you can trace an IS curve. If you move other parameters than the interest rate, then it's a shift in the IS curve, it's not a movement along the IS curve. So if, for example, I increase G, what happens with this IS curve? The IS curve shifts to the right, because now, for any given level of the interest rate, output will be higher, because aggregate demand moves up. And so that's a shift to the right of the IS curve. Good. That's an example of the opposite. An increase in taxes, well, it will shift the IS to the left.
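Since tracing the IS curve is just re-solving the goods market at different interest rates, here is a minimal sketch. The linear form I = b0 + b1*Y - b2*i and every coefficient value are assumptions for illustration only; the lecture just says investment is increasing in output and decreasing in the interest rate.

```python
# Goods-market equilibrium for a given interest rate i, with an assumed
# linear investment function I = b0 + b1*Y - b2*i.
c0, c1, T, G = 20.0, 0.5, 40.0, 40.0  # made-up demand parameters
b0, b1, b2 = 30.0, 0.1, 500.0         # made-up investment parameters (c1 + b1 below 1)

def is_output(i):
    """Equilibrium Y for rate i: Y = c0 + c1*(Y - T) + b0 + b1*Y - b2*i + G."""
    return (c0 + b0 + G - c1 * T - b2 * i) / (1 - c1 - b1)

for i in (0.01, 0.03, 0.05):
    print(f"i={i:.0%}: Y={is_output(i):.1f}")
# Higher i gives lower Y: a downward-sloping IS. Raising G shifts the whole
# schedule to the right; changing i only moves you along it.
```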
The LM relationship was already described. It describes equilibrium in financial markets, but we said the way monetary policy is conducted now is the Fed sets the interest rate, and the money is whatever the market needs in order for that to be the equilibrium interest rate. So the modern LM, if you will, is horizontal, just like that. So now we're set, because once the Fed decides to set this interest rate, we can find not only the equilibrium combinations of interest rate and output that are consistent with equilibrium in the goods market, but the particular equilibrium level of output that is consistent with that interest rate. And that's exactly equilibrium output. So given the LM, now I look at its intersection with my IS, and that gives me equilibrium output for that level of the interest rate, which has been set by the Fed. And then you can use this model. This is a very powerful little model, because now you can do lots of things with it. For example, that's a contractionary fiscal policy. That's what happens when you reduce G or when you increase T. What happens if you reduce G and T by the same amount? You see what I'm doing. That's often done. OK, you can increase government expenditure, but then you find a source of revenue, or reduce government expenditure, but then you don't need to generate a fiscal surplus, and so on. So what I'm saying is, this is a balanced budget fiscal policy. That's what it's called. What if I move G and T by the same amount? Does that curve move? Yeah. AUDIENCE: It does because the multiplier next to T is c0 in the equation-- original equation. RICARDO CABALLERO: c0? AUDIENCE: c1. RICARDO CABALLERO: Yeah, OK. Perfect. Yeah. Yeah. So, in which direction does it move? So if I reduce G and reduce T by the same amount, what happens to the IS? Does it move to the left or to the right? Yeah, it moves to the left. Why is that? I can always go back to my basic goods market equilibrium model. If I reduce G by 1, that reduces aggregate demand by 1. One for one. And then the multiplier kicks in. But the initial shift down is 1. If I reduce taxes, I increase aggregate demand, but by c1 times 1. And so I had a reduction in aggregate demand of 1 and I had an increase in aggregate demand of c1. 1 minus c1 is greater than 0. That's the reason you have, on net, a reduction in aggregate demand. Hint: this is not a random thought I had, so do understand it. OK, good.
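Written out, the balanced-budget experiment in the basic model (investment still taken as fixed) is:

```latex
\Delta Y = \frac{1}{1-c_1}\,(\Delta G - c_1\,\Delta T),
\qquad \Delta G = \Delta T = \Delta
\;\Longrightarrow\;
\Delta Y = \frac{(1 - c_1)\,\Delta}{1-c_1} = \Delta .
```

So cutting G and T together by one unit cuts output by exactly one unit: the net effect is negative, as the answer above says, and the balanced-budget multiplier is exactly 1 in this basic model.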
OK. Now, monetary policy. So that's an expansionary monetary policy. And why is it expansionary? Cutting the interest rate is expansionary because it will increase equilibrium output. That's the case in which the Fed probably is unhappy with this low level of output; maybe it's a recession. So one of the main policy tools we have to fight a recession is to lower the interest rate. And you can see here how lowering the interest rate will increase equilibrium output. How does it happen? Why is it that the equilibrium output rises? Exactly. Because it's increasing investment. That gives us the first kick. And once equilibrium output starts rising, then consumption rises and we get the whole multiplier. But the initial impulse is exactly because of this increase in investment. And how does the Fed implement that? Open market operation. So what the Fed will do, if it wants to cut the interest rate, is it goes out there, buys bonds from the public, and gives them money in exchange, and that's what happens here. And then I talked about different policy mixes. Typically, when an economy is deep in a recession, you're going to see both policies working at the same time-- that's very powerful. That's the case in which we have an expansionary monetary policy that shifts the LM down and an expansionary fiscal policy. And that's definitely what we did during COVID-- it was massive. And during the Global Financial Crisis. So typically, big recessions will lead to-- any recession will lead to something like that. Obviously, if it is big, you're going to have a bigger combination of this kind of stuff. Some problems that monetary policy may face: sometimes you hit the zero lower bound. And when you hit the zero lower bound, you just can't lower the interest rate more, you lose monetary policy, you need to do other stuff, and typically fiscal policy then becomes very, very active. And this is not just a theoretical curiosity. I mean, we have been up against the zero lower bound for a sustained amount of time during the last 20 years or so. Oh, and that's another policy mix as well. Suppose that you need to do a fiscal adjustment, I said. So you want to reduce the deficit, reduce G, but you don't want to have a recession as a result of that. One way you can do that is-- you have a contraction in G or an increase in taxes. That's contractionary, but you can offset it with an expansionary monetary policy. I think in the quiz somewhere you have a question-- I don't think it is specific to this, but you're asked to compensate for something with something else, something like that. So some curve moves, and then you're asked to offset that effect on output. So you should understand these kinds of things. The next step was to extend a little bit our IS-LM model. And by extension we said, well, look, at this moment prices are completely fixed, but in reality we have inflation. And so the nominal interest rate is not really the effective cost of capital for a company. A company that wants to fund real investment is more concerned with the real interest rate, not the nominal interest rate. So with prices that are constant, there is no distinction, but if you have positive inflation, then the distinction makes a difference. That's the reason we want to talk about that. And the second thing is that firms are very unlikely to pay the same rate that the Treasury pays for borrowing. It's a riskier proposition to invest in bonds issued by a corporation, and therefore, they're going to have to pay a risk premium for that. And so the importance of these two things is that we ended up with an IS-LM model that now had something a little more complicated here, because it didn't have only the nominal interest rate, but also had expected inflation. If, for any given nominal interest rate, we expect higher inflation, that means a lower real interest rate. So for any given nominal interest rate, if expected inflation goes up, that's expansionary, really, for firms. It's like it's cheaper, in a sense, to borrow. Conversely, if x goes up-- the credit spread goes up-- that's contractionary, because it's now more expensive for the firms to borrow for any given real interest rate. So this is called the extended IS-LM model simply because it has been extended to incorporate these additional factors. And now, you have two more parameters in your model, which are expected inflation and the credit spread. So if you move either of these, you're going to move your aggregate demand curve in the goods market. And it's going to move for exactly the same reasons that aggregate demand moved when you moved the interest rate. It enters symmetrically. In this model, these guys here enter completely symmetrically with the interest rate.
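The cost-of-capital bookkeeping in the extended model fits in a couple of lines; the numerical values below are made up for illustration:

```python
# Effective real borrowing rate for firms in the extended IS-LM model:
# policy rate i, expected inflation pi_e, credit spread x (all in decimals).
def real_borrowing_rate(i, pi_e, x):
    return i - pi_e + x  # i - pi_e is the real rate; x is the risk premium

print(f"{real_borrowing_rate(i=0.05, pi_e=0.02, x=0.01):.1%}")  # 4.0%
# Higher pi_e lowers it (expansionary); higher x raises it (contractionary).
# i, pi_e, and x enter symmetrically, exactly as stressed above.
```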
So whatever were the comparative statics you had with respect to the nominal interest rate before, they apply to x minus pi e. What I'm trying to say is, if you know what is the change in equilibrium output as a result of an increase of 100 basis points in the nominal interest rate, then you know what is the response of equilibrium output to an increase in credit spreads of 100 basis points, or to a reduction in expected inflation of 100 basis points. Entirely symmetric. Because that's the channel. It's the cost of capital channel for the firm. They're all entering exactly through the same place. But the Fed doesn't control this guy, it controls only the nominal interest rate. So anyways. So these are new parameters here. So this is an example here. That's an example in which credit spreads or expected inflation went up. Sorry-- where credit spreads went down or expected inflation went up. And that's expansionary. That will increase aggregate demand because, for any given level of output, now there will be more investment. Credit spreads being lower or expected inflation being higher means the real interest rate is lower for any given nominal interest rate. So if the Fed doesn't react to that, that's going to lead to an expansion in output. Of course, the Fed could react to that. Suppose the Fed is OK with the level of output we have. Suppose it's a low output. And the Fed sees credit spreads falling, so output is expanding. But the Fed says, no, no, no, the level of output Y0 was what I wanted. I don't want Y1. What would the Fed do? Increase the interest rate. Exactly. And it's very easy to see in this picture here that if you don't want this guy, the total sum, to move, then if this guy moves down-- or this guy moves up-- then I need to move i exactly to offset that, and that's it. It's very easy to calculate. I don't need to solve my whole model, actually. You tell me this thing in net went down by 100 basis points; if I don't want to change output, then I need to increase the interest rate by 100 basis points, so I don't change the effective cost of borrowing for corporations. In fact, this is exactly what is going on right now in the US economy. Every time markets get very excited, credit spreads are compressed, the stock market goes up, and the Fed comes out and says, come on, guys. I mean, we have an inflation problem, I'm going to need to keep hiking interest rates because I need to offset your enthusiasm. They don't use those words, but that's exactly what happens. I mean, Chairman Powell was testifying in Congress yesterday and today, and that's what he said. I'm just giving you a summary of what he said.
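That offset rule is simple enough to check directly (values in basis points, all made up):

```python
# Keep the effective borrowing rate i - pi_e + x constant when spreads move.
i0, pi_e, x0 = 300, 200, 200  # policy rate 3%, expected inflation 2%, spread 2%
x1 = x0 - 100                 # markets get excited: spreads compress 100 bp
i1 = i0 + (x0 - x1)           # hike the policy rate one-for-one

assert i0 - pi_e + x0 == i1 - pi_e + x1  # borrowing cost unchanged at 300 bp
print(f"policy rate {i0} bp -> {i1} bp; firms' borrowing cost stays the same")
```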
Now, a problem that the central bank may face-- suppose you have the opposite situation, one in which credit spreads are going up a lot and expected inflation is declining a lot. And the Fed doesn't want output to decline, because that combination will lead to a reduction in output. So the Fed wants to cut the interest rate. What problem may it face? The zero lower bound. It may not be able to bring interest rates down as much as needed. Because suppose that the interest rate today is 50 basis points-- it's not the case today, but it was two years ago-- 50 basis points, or 25 basis points, and credit spreads go up by 200 basis points. Well, there's no way the Fed can offset that, because it has at most 25 or 50 basis points to lower, and credit spreads went up by 200 basis points. And that's when you start seeing all these more exotic policies-- quantitative easing and other things-- to offset the negative impact of the increase in the credit spreads on the economy. And the last thing we did was to begin our transition to the medium run, and the whole thing began from the labor market. Now, you're going to get a little bit of that in the quiz, but it's not going to be as important as what I just described. A little bit you're going to have. And the basic-- well, definitions, you should know the basic definitions. Well, this was the first important equation. We had that wage-setting equation that said essentially that wages are increasing in expected prices. Obviously, the nominal wage the workers are going to demand is going to be higher if they expect the price level to be higher in the future. But importantly, it's decreasing in unemployment and increasing in this variable z that represents their bargaining power and so on. Then we looked at what happens on the price-setting side, meaning what firms do. And for that, we have to start with the production function. We had a very simple production function which said, if you want to produce one more unit of the good, you need to hire one more worker. That means that the marginal cost of production is the wage. So it's very simple. And then we said we're going to have a very simple model in which the firms charge their marginal cost, which is the wage, times the markup 1 plus m. So m is a number like, say, 0.2. So if the wage is 100 and the markup is 20%, they're going to charge a price of $120. We can rearrange this in terms of wages, and you can say, well, the maximum real wage that firms collectively are willing to pay is really 1 over 1 plus the markup. That's just from that. So then we looked at a concept that is important, which is the natural rate of unemployment, and we said the natural rate of unemployment has nothing natural about it. It just means the level of unemployment when the price is equal to the expected price-- or the expected price is equal to the price, you pick. So all that we did was to replace, in the wage-setting equation, the expected price with the actual price, and then we divided both sides by the price level. And now we have this real wage demanded by workers when the price is equal to the expected price, and we also had a price-setting equation. And I said, when we replace P e with P, then I get the right to put a superscript n there. That's the natural rate of unemployment, because that's my definition of the natural rate of unemployment. It's what happens when I can replace, in the wage-setting equation, the expected price with the price. And we look at the natural rate of unemployment, which is the equilibrium here with the price-setting equation. It has an implied real wage of 1 over 1 plus m. And that's the wage-setting equation, which is obviously decreasing in unemployment, because the higher is unemployment, the lower the wage demanded by the workers. And that's the natural rate of unemployment. Again, nothing natural-- it's a function of parameters. Which parameters? Well, it's a function of that markup parameter, and it's a function of this institutional variable z, for example. So that's an equation.
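In symbols, consistent with the description above (W the nominal wage, P the price level, u unemployment, m the markup):

```latex
\text{Wage setting: } \frac{W}{P} = F(u, z), \qquad
\text{Price setting: } \frac{W}{P} = \frac{1}{1+m},
\qquad P = P^e \;\Longrightarrow\; F(u_n, z) = \frac{1}{1+m}.
```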
That's an example in which z goes up. So suppose that somehow unionization goes up, something of that kind, or unemployment benefits go up-- something which, in principle, is supportive of workers. Well, in this model, that will immediately lead to an increase in wage demands-- at this level of unemployment, there is going to be a higher real wage demanded by the workers, because they have more bargaining power now. In this particular model, that cannot happen, because the real wage that firms are willing to pay is only this, 1 over 1 plus m. So in order to restore equilibrium in the labor market, what has to happen is that the natural rate of unemployment goes up, and that restores equilibrium here, because the bargaining power workers gain through those benefits in z, they end up losing through an increase in the equilibrium level of unemployment. So that's the reason this stuff backfires, in a sense, on the workers: you end up with a higher natural rate of unemployment. So Europe, for example, has much higher labor protection than the US. Well, they typically have a much higher unemployment rate than the US. So there are trade-offs in all these things. That's the case of increasing the markup, and increasing the markup means effectively that the firms are offering a lower real wage. Well, at this level of unemployment, workers are not going to take that lower real wage. So what will have to happen for workers to take that lower real wage is for unemployment to rise. So those are the two canonical experiments you can have here. It's what happens when markups go up. And they can go up for the wrong reasons. It could be oil shocks and stuff like that. It could be because the market becomes less competitive or [INAUDIBLE] and so on. But the final outcome here is that we end up with a higher natural rate of unemployment, which, again, highlights the idea that this is not a God-given unemployment rate-- it's not good in any sense, it's just whatever the equilibrium is. OK. Anyway. So you should understand well what these two types of shocks do to the natural rate of unemployment. And I think that's it, because lecture 9 is not for this quiz. That's all I want to say.
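To see both experiments numerically, one has to pick a functional form, which the lecture doesn't; the linear F(u, z) = 1 - alpha*u + z below is purely an illustrative assumption:

```python
# Natural rate of unemployment with an assumed wage-setting function
# F(u, z) = 1 - alpha*u + z; solve F(u_n, z) = 1/(1+m) for u_n.
def natural_rate(m, z, alpha=2.0):
    return (1 + z - 1 / (1 + m)) / alpha

print(f"{natural_rate(m=0.2, z=0.00):.1%}")  # baseline:            8.3%
print(f"{natural_rate(m=0.2, z=0.05):.1%}")  # more bargaining z:  10.8%
print(f"{natural_rate(m=0.3, z=0.00):.1%}")  # higher markup m:    11.5%
# Both shocks raise u_n: the two canonical experiments described above.
```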
Any questions? No? So-- oh, yeah. AUDIENCE: [INAUDIBLE] RICARDO CABALLERO: Yeah. This? AUDIENCE: I think so. I think-- is that the same x as next to the-- RICARDO CABALLERO: Yeah. This is the credit spread. AUDIENCE: All right. RICARDO CABALLERO: You want me to explain this-- AUDIENCE: I just want to make sure it's the credit spread. RICARDO CABALLERO: Yeah, this is the credit spread. That's the way you calculate this credit spread here. Remember, there are two reasons why-- you really want to know? In any event, let me say. So there are two reasons why credit spreads really happen. One is the actual probability of default of a bond-- the Treasury has a very low probability of default; corporations, depending on their ratings, may have a high or low one. And the other one, which is very significant, is how risk-averse investors are. And that risk aversion changes a lot over the business cycle. We capture everything through just that x spread, which we capture through this probability of default. But you can think of that probability of default as being the perceived probability of default. When you're very scared, you perceive that terrible things can happen. So it's a subjective probability of default. So when that probability of default is different from 0, then you start getting a positive spread. AUDIENCE: How impactful is the actual default? I know there were some recent defaults in at least the European real estate markets. RICARDO CABALLERO: Yeah. AUDIENCE: Like how-- I guess, is there a difference between a fear of a default and an actual default, like the implications-- RICARDO CABALLERO: Oh. This is all about perceived risk. Because this determines the cost for firms of borrowing. If you already defaulted, you cannot borrow, so that's over. That has other consequences. It may have an impact on the balance sheets of the banks, it's a destruction of wealth, it may lead to other problems. But the problem we're highlighting here in this model is the cost of borrowing, and that is something that happens only before you default. Yeah. I mean, actual defaults, especially in developers and stuff like that-- and that's what happened in the Great Recession-- can have consequences, especially for the banks that typically lend to these developers and so on. But I may do something about financial crises much later in the course, at the end. OK. Well, good luck. Enjoy it. If you understood what I said today, you're in good shape.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_4_The_Financial_Market.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: All right, let's start. So today I want to talk about interest rates. If you have followed the news, there is a lot of debate these days on where the interest rate in the US will end up at the end of this tightening cycle. And so I'll show you-- there has been a very aggressive monetary policy trying to fight the inflationary episode we're going through. And that's done through interest rates. And the question that I want to address today is, well, how is it that the interest rate is determined? So when a central bank decides to hike interest rates, how do they do that? And obviously this is about financial markets. Interest rates are set in financial markets. And financial markets are very, very complicated. So we're going to keep it very, very simple. We're going to introduce more complexity later on in the course. But still, we're going to keep it pretty simple, because our main objective in studying financial markets is really to achieve some sort of understanding of how the interest rate policy that the central bank controls is determined. So before I get into specifics, let's see some trivia here. Who knows who that person is? OK. You, sir. AUDIENCE: Jerome Powell. RICARDO CABALLERO: Jerome Powell, exactly. And who is Jerome Powell? AUDIENCE: The chair of the Federal Reserve. RICARDO CABALLERO: The chair of the Federal Reserve System in the US, which is the central bank of the US. OK, so that was an easy one. And if you are into financial markets, everyone is worried about what this guy is thinking, and what his friends are thinking at this moment, because they determine the interest rate. How do they do that? Well, that's what we're going to talk about later on. A little trickier. Who is this? No, no. That's cheating. Kazuo Ueda. He's the next president of the Bank of Japan, the central bank of Japan. And there are many interesting things about him, but he's a graduate from our program, the PhD program, which actually is an incredibly successful program at producing major central bankers. Ben Bernanke is our alumnus. He was the chair of the Fed during the financial crisis. Mario Draghi is one of our graduates. Mario Draghi was the most successful central banker in Europe. He was the president of the ECB, the European Central Bank, for many years, and in particular, during the global financial crisis and the European crisis, which followed the global financial crisis. But you name it-- Stan Fischer in the past at the Bank of Israel. And nowadays, Phil Lowe, the chair of the Reserve Bank of Australia. In Chile we have had two or three presidents, and so on. So if this is your career, this is a good program. You should join our program. You may end up in a good place. But that's the next one for Japan. And Japan is a very interesting place from the point of view of monetary policy, precisely because they haven't had much space to do conventional monetary policy. So they have had to do lots of unconventional things from the point of view of what a central bank typically does. But today, we're going to study conventional things. And we'll talk a little bit more about unconventional things later in the course. Now, some institutional knowledge-- again, in the US, the central bank is called the Federal Reserve System. And it's a system, really. It has the Board of Governors, which sits in Washington, DC. And there are seven governors.
And the governors are nominated by the president and then confirmed by the Senate. And the chair of the Fed will be one of those members. Now, in addition to that, the US has a Federal Reserve bank system. There are 12 regional banks. There is one in Boston. In fact, if you look at the skyline near the waterfront, there is a building built out of recycled aluminum. Well, that's the Reserve Bank of Boston. And those 12 regional banks rotate. So the policy interest rate is set in a committee, which is called the FOMC, the Federal Open Market Committee. And the voting members of that committee are the seven governors plus 4 out of the 12 regional bank presidents. And they rotate. Most of them rotate. The only one I think that does not rotate is the president of the New York Fed, because they are so important-- because that's the financial heart of the US, so you certainly want that president to be involved in interest rate decisions. And it's really the New York Fed that is in charge of communication with financial markets, which is a huge thing for the Fed. They are in New York and they're crucial for that. OK, so those are the people. And in most places around the world you're going to have at least something equivalent to the 7 governors out there-- most places. Obviously, the ECB is different, because it's multiple countries. So each country sends one member. But otherwise, it tends to be like the US without the regional banks. So why do we care about monetary policy? Well, because it's one of the main policy tools you have in the short run. Remember, in this part of the course, we're trying to understand output in the short term. And one of the main policy levers-- I mean, how can you affect output-- is monetary policy. Which one is the other one? There are two major ones. AUDIENCE: Fiscal policy. RICARDO CABALLERO: Fiscal policy. Exactly. And fiscal policy we did look at in the previous lecture. Remember, we said an expansionary fiscal policy is an increase in G, while a decline in T would also lead to an expansion in aggregate demand. Equilibrium output would go up. So fiscal policy is something you always use in deep recessions. But monetary policy is much more nimble. I mean, it's a bunch of people that need to meet and just change the interest rate. Anything that is fiscal-- there are some automatic stabilizers, by the way, so automatically, fiscal policy becomes more expansionary during recessions and stuff like that-- but any deviation from the typical automatic stabilizers requires Congress to approve things. It's a long process. And so it's not something that can react as quickly as monetary policy can. And so that's what we're going to do. That's the reason why monetary policy is important for us at this point. Then there are medium and long run issues. Fiscal policy still affects equilibrium output in the medium and long run. Monetary policy is much less effective at that. It's very difficult to change equilibrium growth or things like that with monetary policy. It's pretty minor. Most of the impact of monetary policy in the medium and long run is really on the price level and inflation, more than on real activity. But in the short run it's very powerful, for reasons that we're going to discuss in this and the next lecture. OK, so as I said, monetary policy acts through financial markets. I think it's very interesting-- part of my research agenda is about that.
I think the central bank is a very strange institution, because most of its mandates are in terms of what happens in the goods market. It says, try not to get into a recession. Try to get us out of a recession, and so on. But unlike fiscal policy, which has tools that are directly aimed at the goods market-- remember, G is purchases by the government of goods. So if there is insufficient demand for goods, the fiscal authority can go out there and buy goods. That creates more demand. The central bank is given the same mandate, aside from price stability and things like that, which we will worry about later. But it doesn't have any such tools. If there is insufficient demand, the central bank cannot go out there and buy hamburgers. Fiscal policy can do that and expand the demand for hamburgers, but not monetary policy. What monetary policy can do is buy bonds, buy instruments in the financial markets. And through that, it affects real activity. So it's the fastest policy tool, but it's the most indirect, in a sense, because it has to go through financial markets. And those channels can be very complex. But obviously, we're not in the business of complicating things in this course. So we're going to make it very, very simple. We're going to assume that financial markets only have two instruments. Obviously a huge simplification. There are millions of financial instruments out there. But I'm going to isolate only two instruments, because these happen to be the instruments where the central banks typically participate. They affect all asset prices around. But the direct interventions typically-- recently it has been a little different-- but typically it's only in these kinds of instruments. And so we're going to focus all of our analysis on those two instruments for now. Later on, we're going to talk about equity and stuff like that. But just to isolate how monetary policy works, it's sufficient to focus on two financial instruments. So we're going to assume that people hold their wealth in only two forms, in two financial assets. One, we're going to call money. And the other one we're going to call bonds. Money-- there are many definitions of money, M1, and M2, and M3, and M4. But let's keep it very simple. Money essentially means something that you can use very easily for transactions. So for example, the most important example of money-- not the largest, but the most important one-- is cash, currency. You can buy anything with cash. It's no problem whatsoever. Now, the characteristic money typically has is that it doesn't give you any return. You don't invest in cash. You have cash because you need to do transactions. But it's not the way you get a return. And so money, that's what it's going to mean for us: something that is used in transactions, but pays no interest. A bond is going to be the polar opposite. It's going to be something that pays a positive interest rate. So if you buy a bond for 95, you're going to get $100 a year from now. So you get something out of that bond. But it's not very useful for transactions. I mean, you cannot go and buy your lunch with a bond. And many things cannot be-- even within financial markets-- you cannot buy assets with assets. You have to go through some process in which you sell something, get cash, and with cash you pay for that stuff-- not necessarily cash.
It could be other forms of money. But typically, financial instruments need to be sold before you can buy something else. You don't swap them. It's not barter. But this is what it means for us. So this, whenever you see something like this, is always interesting for an economist. Why? For any economist, microeconomist or whatever. What have I done there that makes-- Economics is about decisions, and then it's about equilibrium-- decisions and equilibrium. AUDIENCE: What makes people decide how much [INAUDIBLE]? RICARDO CABALLERO: There's a decision to be made here. There's a trade-off. If I need to do lots of transactions, I'd better bias my portfolio towards cash, towards money. If I don't need to do lots of transactions, and the interest rate is very high, particularly-- then this is an issue today. Today interest rates are very high. Nobody cared about bonds or anything two years ago, when the interest rate was 0. But today the interest rate is 5%. So it's an issue. If you want to keep it in cash, you're going to give up a lot of return. So this is a decision to be made. It's a portfolio decision. And that's always interesting for economists. So let's talk about that decision. And you can describe this decision either from the side of bonds or from the side of money. We're going to describe it from the side of money. Because there is a total amount of wealth and you have to decide what to allocate it to. It suffices-- if I tell you there's a total amount of wealth and I tell you how much you allocate to money, then I'm telling you implicitly how much you are allocating to bonds. It's the complement of that. So I could do the analysis either way. But I'm going to do it through money, which is the way it's normally done. So money demand, if I plot it in the space of money and the interest rate, is a downward-sloping curve. Why is it a downward-sloping curve? AUDIENCE: If the interest rate is higher, it pays off more to get bonds-- your utility shifted. RICARDO CABALLERO: Exactly. Your decision, if the interest rate is very high, is to go more towards bonds. If the interest rate is very low, I don't care too much about bonds. I'd rather keep the thing that helps me transact, which is money. And it's interesting, because for most of your adult life, you have lived in a world in which that was not a very interesting decision, actually. Because interest rates were very close to zero. Now you're living in a different environment. Interest rates, for the first time, are sort of high for you. For you, there may be a bigger decision between investing in equities and cash. But bonds-- and these are safe bonds, by the way, US Treasuries and things like that-- and money were not something you had to worry about. OK, good. What I'm saying is, interest rates were around here. So you were all in cash one way or the other. And now interest rates are a lot higher. The other thing-- so that's a movement along the curve. I'm plotting something in the space of money and the interest rate. So when I tell you what happens when the interest rate rises, I'm asking you for a movement along the curve. So if I raise the interest rate, I move along the curve, up. The second argument we have there is something that shifts the curve, because it's not part of my axes. So it will shift the curve. It's a parameter for each of these curves. And that's nominal income. So in particular, if nominal income goes up, I'm showing you there that money demand shifts out.
Meaning, for any given level of the interest rate, you're going to demand more money. Why do you think that's the case? And again, I'm looking here at the aggregate. But it applies to an individual as well. For the same level of wealth, now nominal GDP is higher. Remember that nominal GDP is the same as nominal expenditure here. So if nominal GDP is higher, that means there are going to be more transactions, because there's going to be more expenditure, and therefore, you need more of the thing that is useful for transactions. Money demand goes up. The second point to highlight is that while, in most places, we use Y, here I'm using dollar Y, nominal Y. Why do you think that is? Why don't I just put real GDP there rather than nominal GDP? Is it a typo? Let's go in steps. Suppose that prices are totally fixed and real GDP goes up. That means that this economy needs more transactions, because there's going to be more expenditure. So that explains that, yeah, when real output goes up, then money demand goes up. When I say money demand goes up, I mean for any given interest rate we have more money demand. Now fix real output. And suppose that the price of goods doubles. Do you think you need more money to transact? Of course, because it's the same-- it's equivalent. Money is dollars. I have $10. And now prices are twice what they used to be. Well, I'm going to need more dollars to transact. So that's the reason we have nominal GDP here rather than real GDP. Let's now determine the interest rate. And I'm going to do it in the simplest possible model first. And by simplest, I mean here, in particular, no banks. No intermediaries. OK, so there's only a central bank and people. So suppose that the central bank decides-- this is the way monetary policy used to be conducted-- suppose it decides that it wants to offer m dollars to the economy. And the central bank is the one that produces it. At this moment, in which I have no intermediaries, money is really currency. The only one that can produce currency-- forget Bitcoin-- the only one that can produce currency is the central bank. I mean dollars. No one else can produce dollars. Watch out when we talk about banks later on. In this economy I'm describing, there are no banks. The only one that can produce dollars, currency, is the central bank. Anyone else that produces dollars, that's illegal. So it's the central bank. So the central bank can decide how much money to supply. And suppose it decides to supply m. That's it. It's a decision. Well, now, for the first time, I can answer the question: what is the interest rate in this environment? Why do I know that? Because I have a money supply. I have a money demand. The intersection at this level of money supply-- the equilibrium interest rate, that is, the interest rate that is consistent with money demand equal to money supply-- is that one. Is it clear? So I have my money demand. And I'm saying, suppose the central bank decides to supply m. Well, in equilibrium, that will be the interest rate. That's the way the interest rate is determined.
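For instance, with an assumed linear demand M^d = $Y*(a - b*i)-- the lecture only says demand rises with nominal income and falls with the interest rate, so the functional form and all numbers below are illustrative-- the equilibrium rate is pinned down immediately:

```python
# Money-market equilibrium: solve M_supply = Y_nominal * (a - b*i) for i.
a, b = 0.25, 2.0  # made-up liquidity-preference coefficients
Y_nominal = 1_000.0

def equilibrium_rate(M_supply):
    return (a - M_supply / Y_nominal) / b

print(f"{equilibrium_rate(200.0):.1%}")  # M = 200 gives i = 2.5%
```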
Central banks would typically decide the amount of money in the system-- targeting what were called monetary aggregates-- and then the market would determine the interest rate. It turned out that was a nightmare, so that's not what central banks do anymore. It was a nightmare because this money demand, which looks so peaceful here, in practice is moving around all the time for a variety of reasons. Even holidays affect money demand. So if you fix the monetary aggregate and money demand is moving all over the place, what happens? Suppose the central bank says, I'm going to offer this amount of M, that's it. And now money demand is moving all over the place: there are weeks with a three-day weekend, weeks with no holidays in between, or there's a Super Bowl and lots of people decide to buy tickets, or beer, whatever. AUDIENCE: [INAUDIBLE] RICARDO CABALLERO: Well, no. When you say shortage-- it depends what you mean by shortage; you could be right with that part of the answer. But when you say excess demand, I have a problem, because that's true only if the interest rate doesn't move. In a situation in which the central bank fixes M and lets the market do its thing, the interest rate will not be fixed. And that's exactly the problem: if you operate only through monetary aggregates, the interest rate becomes very, very volatile, because money demand is moving all over the place. Imagine what happens: if money demand shifts up, the equilibrium interest rate goes up, precisely to eliminate your excess demand. If demand shifts up, there is an excess demand at the old interest rate, and in equilibrium the interest rate rises until the excess demand disappears. Now, central banks like the Fed can control this fairly well; many central banks can't. If you look at the central bank of China, for example, they have lots of high-frequency movements in interest rates, because they are not very good at this. It's not that easy to control an interest rate-- to keep it fixed, for example. But central banks used to operate that way. That's what I'm saying: in practice this system wasn't very good, because money demand moves around for a lot of idiosyncratic reasons-- some of them very predictable, some of them not. There was a lot of panic around the turn of the year 2000 that ATMs would stop working, so there was a massive increase in money demand, because people wanted cash just in case. So some things are predictable and some are not, but you can get very large fluctuations. So in practice, what central banks do nowadays is tell you what the interest rate is, and then they offer whatever M the market needs at that interest rate. That's the way modern monetary policy works. I'll tell you a little bit more about that. So let's see what happens here. Suppose the Fed, for reasons you will understand better in the next lecture, decides it wants an expansionary monetary policy. Expansionary monetary policy means it's going to expand M. It was offering a certain amount of M, and now it decides to offer more. So what happens in equilibrium? Well, we start from an equilibrium here.
If the interest rate remains at that level, we now have an excess supply of money. The only way to restore equilibrium is for people to demand more money. And when will people demand more money? When the interest rate is lower. So with an excess supply of money, the interest rate declines, the demand for money catches up with the new, higher supply of money, and you end up with a lower interest rate. That's an expansionary monetary policy: an increase in M that leads to a decline in the interest rate. Again, modern central banks don't tell you, we're going to increase M by 20%. What they tell you is, we're going to cut the interest rate by, say, 50 basis points. When they say that, they're reading off this axis. But behind that operation there must have been an increase in the money supply, plus all the fine-tuning they have to do so the interest rate stays there while money demand is vibrating around. So when we say an expansionary monetary policy, we really mean cutting interest rates; how is the interest rate effectively cut? By an increase in the money supply. They don't tell you they're doing that, but that's what they're doing. Good. Nowadays we're in the opposite process-- we're moving up. You've seen a hike in interest rates in every single meeting for the last seven meetings or so. They don't put it this way, but they're effectively saying, we're going to reduce the money supply and move in that direction, because if they move in that direction, the interest rate goes up. Clear. Good. What else shifts this curve? This is a different kind of shift. Suppose that nominal income goes up-- and this is happening all the time: nominal income is growing in most economies most of the time. Unless you're in a recession, nominal income is growing, for two reasons: one, because you have inflation, and the other, because real output is growing. That means that in a typical year, money demand is shifting to the right. So if money demand shifts to the right and money supply does not change, what happens to the interest rate? It increases, because at the old interest rate we have an excess demand for money, and the way to reduce money demand is to increase the interest rate. So that's yet another reason why it's not a good idea to target monetary aggregates. The Fed tells you, we want the interest rate at 5%, say. The economy will be growing, and what the Fed does, if it wants to maintain the interest rate at 5%, is keep expanding M so the interest rate doesn't go up. That's the way it's normally conducted. But if the Fed stays sleepy while there is an increase in money demand, the interest rate will tend to go up.
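Here is the flip side of that point, in the same illustrative setup as before: if the central bank wants to peg the interest rate while nominal income grows, it must keep expanding M. Inverting M = $Y * k * exp(-b*i) at the target rate gives the required supply; again, every number is made up purely for illustration.

from math import exp

k, b, i_target = 0.5, 10.0, 0.05       # assumed money-demand parameters
for Y in (1000.0, 1100.0, 1210.0):     # nominal income growing 10% a year
    M_needed = Y * k * exp(-b * i_target)
    print(f"$Y = {Y:6.0f} -> supply M = {M_needed:.1f} to hold i at 5%")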
So I told you the central bank moves M around-- increasing it, reducing it. How does it do that? It's not that they go up in a helicopter and drop bills on top of all of us-- although that policy has been advocated for extreme recessions, and it's called helicopter money: just give money away. That's not how normal operations are done. Remember that we have a financial system that is very simple for now: money and bonds. What the central bank does is change the portfolio of people. Say it wants people to hold fewer bonds and more money-- that's to lower the interest rate. The central bank goes out and runs an expansionary open market operation. It's called an open market operation because they go into the open market to buy bonds, as opposed to an operation that happens behind closed doors: if the Fed went directly to the Treasury and bought bonds, that would not be an open market operation. What they do is go to the financial system with a big bag of money and say, OK, we want bonds, and the public sells bonds to them in exchange for money. That's an expansionary open market operation. So what we saw before, when M shifted to the right-- what the central bank really did was go out and buy bonds from the public, and the public sold the bonds and got the cash at some price. When I show you an equilibrium in the money market, that necessarily means an equilibrium in the bond market as well: the price has to be right for the portfolios of individuals. A contractionary open market operation is the opposite: the central bank goes out, sells bonds to the market, and takes the cash back. That's a contractionary monetary policy. So when the Fed wants to-- yeah, let me finish. When the Fed wants to cut interest rates, it does an expansionary open market operation, which means it goes out, buys bonds from the market, and gives cash to the market. OK, yeah. AUDIENCE: How can a central bank, in this instance, guarantee that someone's going to buy their bonds? If their government didn't have the financial stability, I guess, the United States has, how would they guarantee that these bonds will be worth anything in the long run? RICARDO CABALLERO: Well, you're talking about risk premia, and that's an additional issue. Typically central banks intervene in very short duration bonds-- very short duration Treasuries-- so these are not the kind of bonds that carry much risk. In fact, nowadays central banks really intervene in the overnight market. But you're right that countries without a well-developed bond market have problems with the management of monetary policy, because they have a credit spread that is moving around as well. So the type of instrument they focus on becomes very important. Even big players have issues: Japan, for example, has been buying very long duration bonds-- 10-year bonds and the like-- and there it's a little trickier; the market needs to trust you a lot more when you're dealing with 10-year instruments than with three-month horizons. In other instances-- Chile, for example, has a situation like that-- the central bank itself can issue bonds. Those tend to be very reliable bonds, because they are issued in your own currency, so you always have the currency to pay them: you can always print money and pay for those bonds. It rarely happens that there is a default on bonds of that kind. The typical government bond that defaults is one issued in a currency different from the one you print, because then you may not have the currency to pay for the stuff.
And that's the reason emerging markets run into trouble and things of that kind. So in terms of the balance sheet of the central bank, how does an open market operation look? This is an incredibly simplified version of the balance sheet of the Fed-- they have lots of other assets, gold and all sorts of stuff. But in our simple economy, the assets of the Fed are some bonds it already holds, and the liability is the money it issues. They issue currency, and that's a liability: people can buy things with that money. That's the way the balance sheet looks. When the Fed decides to do an expansionary open market operation-- say a $1 million one; they do it in much bigger tickets than this, but let's keep it simple-- it prints $1 million and buys $1 million of bonds from the market. At the end, the balance sheet looks like it did before, just bigger: there is an extra $1 million of liabilities, because there is a million more dollars circulating out there, and against that, the central bank has a million more in bond holdings. So with an expansionary monetary policy, the balance sheet of the central bank expands-- both sides expand by the same amount. And one of the big themes of recent years, starting with the global financial crisis, is that these balance sheets used to be relatively small-- on the order of $1 trillion for a country like the US-- and nowadays they are much, much larger, an order of magnitude or so larger than they used to be, because central banks have done so many operations: first to get us out of the global financial crisis, then out of COVID. In other words, they have had to do lots of expansionary monetary policy over the last couple of decades. So let me talk a little bit about interest rates and bond prices, because that's the other side of this, and sometimes it's easier to understand things in terms of bond prices. Suppose a bond pays $100 a year from now, with no coupon in between. If you buy that bond now for some price PB, you get $100 a year from now. What is the interest rate on that bond-- that is, the return you get from buying it? It's $100 minus whatever you pay today-- say you pay $95, so $5-- divided by your initial investment, which was 95. So that's a bond that gives you a little bit more than 5%. That's the return; that's the interest rate. And that's the connection between interest rates and prices. Notice it's an inverse relation: when the price of a bond goes up, the interest rate goes down, because you're paying more for the same principal. You're still going to get 100, but you're paying more for it, so the return on that bond went down. Conversely, going the other way around: the higher the interest rate, the lower the price of the bond.
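The one-period, zero-coupon case just described can be written in two lines. This follows directly from the formula in the lecture; the only assumption is the $100 face value.

def yield_from_price(price, face=100.0):
    """i = (face - price) / price for a one-year zero-coupon bond."""
    return (face - price) / price

def price_from_yield(i, face=100.0):
    """Invert the same relation: price = face / (1 + i)."""
    return face / (1.0 + i)

print(f"{yield_from_price(95.0):.2%}")   # 5.26%: pay 95 now, get 100 in a year
print(f"{price_from_yield(0.05):.2f}")   # 95.24: at 5%, the bond sells below par
print(f"{price_from_yield(0.00):.2f}")   # 100.00: at 0%, it sells at face value

The inverse relation is immediate: raise the price and the yield falls; raise the yield and the price falls.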
For example, today the interest rate is 5%. If I want to issue a bond that doesn't pay any coupon and I'm completely safe-- I'm the US Treasury-- I'm going to be able to sell that bond for about 95. Two years ago, when the interest rate was 0, the Treasury would have been able to sell that bond for 100. So there is an inverse relationship between the interest rate and the price of bonds. And now I can take you back to my open market operations, because this gives you another way of remembering which way the signs go. Remember, in an expansionary open market operation, the Fed goes out with a bag of money and buys bonds from the market. When there is an enormous increase in demand for something, what happens to its price? It goes up-- for anything: cars, whatever. And the Fed has a lot of cash, so it can buy large amounts of bonds. So when the Fed does an expansionary open market operation, it buys a lot of bonds, the price of those bonds goes up, and by that formula the interest rate goes down. That's another way of understanding how an expansionary open market operation lowers the interest rate: it raises the price of the bonds it is demanding, and when the price of bonds rises, the interest rate falls. Now, I really want you to understand what I just said, because we're going to use it. Ideally I'd like you to understand what comes next well too, but it's OK if you don't understand it perfectly, and I'll tell you when you need to start understanding things very well again. What I'm doing in this next piece is making things a little more realistic. The message will be similar, just a bit more complicated-- substantively, I already told you what I wanted to tell you. What comes next adds realism, and it will let you understand a technical description of monetary policy a little better. In the economy I just described, you had households and firms and the central bank. In practice, there are lots of financial intermediaries, and among them the most prominent, especially for monetary policy, are the banks. Banks are financial intermediaries: they take deposits from someone and lend to someone else, or buy some instruments. They intermediate the funds of somebody who wants to save in the bank-- a deposit in a deposit account-- and they lend that money, effectively in the name of that person, to someone else. Banks change the model I just described in two ways. The first is that they produce money as well. Money is made of currency, which is issued by the central bank, but it's also made of checkable deposits. If you have a checking account-- something you can use a debit card against, write checks on, and so on-- that's money for you; you can use it for transactions. So the deposits, the liabilities of banks, are part of money. That's the first thing banks do.
The other thing banks do is hold a deposit account at the central bank themselves. They take the deposits they receive: part they lend to other people, part they use to buy financial instruments-- in this economy, only bonds-- and another, smaller part, 10% or so, they deposit at the central bank. That deposit at the central bank is called reserves. So when you hear the word reserves for the banking sector, it means the deposits of the banks at the central bank. And that's also a liability for the central bank: it's holding that deposit for the banks. It's not the central bank's money, it's the banks' money. So now, if you look at the balance sheets, our central bank has more things. Its assets are bonds-- in this economy, the only asset you can have. Its liabilities are the currency it had before, plus the reserves, the deposits of the banks themselves. This stuff here is called central bank money. It has many names-- central bank money, high-powered money, the monetary base. I like the name central bank money because it contrasts with the other kind of money: the money that the banking sector produces, the checkable deposits. So this is the money produced by the central bank, and then, through deposits, the banks themselves produce more money. The total money in the system is much more than the central bank money, because deposits are a big thing-- much larger in practice. Good. So how does this change our model? Not much, really-- that's the reason I don't care too much if you don't fully absorb these last two slides. Let me rederive what we had with this extension. Money demand is exactly as it used to be; what changes is that you hold it in the form of currency or checking accounts. Assume for now that there is no currency-- no one holds cash, which is probably quite realistic nowadays-- so everyone has a checking account, and that's it. Otherwise you get a more complicated formula. Now, banks are required, for regulatory reasons, to hold a minimum share of their deposits in the form of reserves; I'm going to call that fixed fraction theta. So if all money is held as deposits, then theta-- a number like 0.1, say-- times deposits, which is money demand, equals reserves, which is the demand for central bank money: these are the deposits that the banks want to hold at the central bank. We'll call this Hd; it's theta times Md. And so now, what the central bank really controls-- because it's the only thing it can control-- is the money it issues, the central bank money. It used to supply M, which was currency; now M is a bigger thing, because of deposits and all that.
But the central bank is the one that controls how much high-powered money, how much central bank money, it issues, and we're going to call that H. If you go back to my balance sheet here, the central bank cannot control total money, because it cannot control the amount of checkable deposits in the economy. But it can control this money here-- and since we have no currency in my example, it can control the supply of reserves. So now we have a demand for reserves, which is just proportional to money demand, set equal to some fixed amount H, the supply of high-powered money by the central bank. And the equilibrium looks exactly like before. Remember, before we had M equal to dollar-Y times L(i). Now it's theta times money demand-- theta times dollar-Y times L(i)-- that has to equal H, because only a fraction of total money demand ends up as demand for central bank money. Total money demand is the checking accounts; that leads to a demand for central bank money by the banks of theta times those deposits, not the full deposits. Banks don't take all the deposits and park them at the central bank-- only 10% of them, say. So $100 in deposits, if theta is 0.1, leads to a demand for reserves-- that is, for deposits by the banks at the central bank-- of $10. And that is the thing the central bank can control very well. The demand that comes to me, the central bank, is $10, and I decide whether to supply $10 or not. I can supply 5, and then the interest rate will adjust so that 5 is the equilibrium: if the banking sector wanted 10 and I'm only going to supply 5, the interest rate has to rise so that deposits decline, money demand falls, and the final demand that reaches me is 5, not 10. I know this can be a little confusing, but the mechanism is exactly the same as before.
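A quick sketch of the same equilibrium with banks added. The only change from the earlier money-market sketch is that the central bank now supplies H, and the demand reaching it is theta times total money demand; the functional form and numbers remain illustrative assumptions.

from math import log

def equilibrium_rate(H, Y_nominal, theta=0.1, k=0.5, b=10.0):
    """Solve theta * $Y * k * exp(-b*i) = H for i, in closed form."""
    return log(theta * Y_nominal * k / H) / b

Y, theta, H = 1000.0, 0.1, 30.0
i = equilibrium_rate(H, Y, theta)
M_total = H / theta               # total deposit money: multiplier 1/theta
print(f"i = {i:.2%}, total money = {M_total:.0f} from H = {H:.0f}")

With theta = 0.1, H = 30 supports total money of 300-- the same equilibrium as the bank-free sketch with M = 300. That is the point: the mechanism is unchanged; only the lever is now H.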
Now, this matters more institutionally than conceptually, because this is the market where the interest rate is really set: the market for reserves, the deposits banks hold at the central bank. The interest rate in that market is called the federal funds rate, and it's called that because the Federal Reserve controls it. When there is a Fed meeting or a policy announcement and they tell you the Fed has increased the rate by 50 basis points, it means it has increased the interest rate in that market by 50 basis points. The rate the central bank controls directly is the federal funds rate-- the rate in the market for reserves, for high-powered money. All the operations happen there; the Fed doesn't participate in the deposit market or anything like that. And every night there is a huge volume of transactions in that market-- every night, because, putting all the banks together, some banks have more reserves than they need and some have fewer than they need. So there is supply and demand for reserves within the banking sector, and an interest rate that tends to equilibrate them. If too many banks are short of reserves, there is lots of demand for reserves and the interest rate tends to rise. If the Fed doesn't want that rate to rise, then overnight it will go in and inject reserves-- high-powered money-- into the system, so the rate in that market comes down. That's the way the Fed operates: it participates in that market and controls the amount of high-powered money, which regulates the rate in the reserves market. The major banks participate there, and then this leaks into the other interest rates in the economy-- deposit rates, bond rates, and so on. But the rate the Fed controls most directly is that rate right there. And that's the reason I wanted to give you this bit of institutional detail, to add complexity, so you could get to this term, the federal funds rate-- because that's what you read in the newspapers. The Fed increased the federal funds target, they'll say. Because it is a target: they cannot guarantee an exact interest rate, so typically they give you a range, a very narrow range. Let me just conclude by showing you how that rate has looked in the US recently, and I'll finish here. Here is before COVID. The Fed was already hiking interest rates; it had overdone it there, so it was beginning to lower them. Then COVID came, and they cut interest rates very aggressively, to zero-- the maximum they can cut, and we'll talk about that briefly later. You cannot go below zero. You cannot pay a negative interest rate, because then nobody demands bonds: why should I hold a bond that pays me a negative interest rate when cash always pays 0? I can keep it under the mattress and lose nothing. So the lowest the interest rate can be is 0, and you see that they did the maximum they could: they went to 0, effectively 0. And now they have been trying to catch up, because inflation is very high, so they're hiking interest rates-- you see what they're doing. All those interventions happen in the reserves market; that's what this is, the rate in that market over there. OK, so I think we're meeting on Tuesday next week. I think that's the plan.
Lecture 12: IS-LM-PC Model (continued)
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So let me continue with the IS-LM-PC model. In fact, I want to move a little fast, because I got overexcited with the SVB bank event, and I want to make sure you really understand this model. It's going to be very important-- I think it's one of the most important models in this course, as I said before. I also want to use the model itself a bit more to explain what is going on right now; today we got hit by a second shock from the financial system, so it's getting exciting these days. Let me skip all this and remind you where we were. That's the IS-LM-PC model, which, as I said, is nothing more than integrating the IS-LM analysis with the Phillips curve. This is the IS part. At this point I will follow the book and assume that the central bank can control the real interest rate, rather than the nominal interest rate, which is what it really controls in practice. I make that assumption so the pictures have fewer curves moving around when we do the dynamics. But that's just IS-LM, nothing different. Then we look at the Phillips curve. And I said, well, as written this is not very convenient, because in the IS part I have output, while in the Phillips curve I have inflation and unemployment, and I don't want to carry around three endogenous variables-- it's difficult to diagram in three dimensions, and everything is less clear. So I'm going to replace unemployment with output. It's very easy to do that with the production function we have, because output is just equal to employment, and employment is equal to the labor force times 1 minus the rate of unemployment. And we can define a concept of potential output as simply the output that obtains when employment is at its natural level, which equals the labor force times 1 minus the natural rate of unemployment. Taking the difference-- subtracting the second line from the first-- you get a concept used very frequently in macroeconomics: the output gap. The output gap is the difference between actual output and potential output, where potential output is nothing more than the level of output you get when unemployment is at the natural rate. So the output gap is related to the employment gap, and we can replace the employment gap in the Phillips curve with the output gap. We end up with a Phillips curve in the space of inflation and the output gap, which is something we can integrate very easily with the IS-LM model that has output in it. So the IS-LM-PC model is really combining this equation with that equation, plus some model of expected inflation. Let me give you one example-- one model of expected inflation. This is the case central banks do not like: unanchored inflation expectations, where expected inflation equals last period's inflation. That's my model of expectations. I plug it into my Phillips curve and I get a relationship between the change in inflation and the output gap, and it's an increasing relationship. That's what I'm plotting here: for any given level of Yn, as I increase output, the left-hand side-- the difference between inflation and expected inflation, that is, the change in inflation-- rises. That's the reason this is upward sloping.
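The chain of substitutions just described, written out compactly (here alpha is the Phillips-curve slope parameter and L the labor force):

\begin{aligned}
Y_t &= N_t = L(1-u_t), \qquad Y_n = L(1-u_n) \\
Y_t - Y_n &= -L\,(u_t - u_n) \quad\Longleftrightarrow\quad u_t - u_n = -\frac{Y_t - Y_n}{L} \\
\pi_t - \pi_t^e &= -\alpha\,(u_t - u_n) = \frac{\alpha}{L}\,(Y_t - Y_n)
\end{aligned}

With the unanchored expectations rule \pi_t^e = \pi_{t-1}, the left-hand side becomes \pi_t - \pi_{t-1}: the change in inflation is proportional to the output gap, which is exactly the upward-sloping line in the bottom panel.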
And the reason this rises has to do with what happens in the labor market. If output rises, unemployment is falling-- you need more employment to produce more output. With lower unemployment there is more wage pressure, which leads to price pressure, because there is a markup between wages and prices. That's how you get inflation. So I gave you one example. Suppose we have some equilibrium level of output-- this one, which is the result of this monetary policy, this rate set by the central bank, and of the IS, which depends on the country's fiscal policy, how confident consumers are, and all those things. In quiz one, we worried only about this top diagram, and all the shocks we had were shocks in that top diagram; we looked at what happened to equilibrium output as a result. That block hasn't changed-- it's the same as it used to be. The only difference now is that this level of output, which is the equilibrium level of output at any point in time, need not be equal to potential output. And if it is not equal to potential output, that will lead to something happening to inflation-- rising inflation, disinflation, something of that kind. In this particular case, the equilibrium level of output, still determined the way we used to determine it, happens to be higher than potential output. If output is higher than potential output, this term is positive, which means inflation is rising. And that's exactly what we see here: inflation is above expected inflation, and when expected inflation equals lagged inflation, inflation above expected inflation means inflation is rising. Any questions about that? So what we did is keep the analysis we used to have and add this diagram at the bottom. Because it turns out that, yes, if I move the interest rate around, I change the level of output, and all of those are equilibrium levels of output-- but that doesn't mean they're consistent with potential output. If equilibrium output is not equal to potential output, it's still a valid equilibrium at any point in time, but it's going to create issues on the inflation front. That's all the second diagram does: it tells us we get issues on the inflation front whenever equilibrium output differs from potential output. That's the short run: we keep doing what we did in the first seven lectures or so, and this diagram tells us the implications for inflation. The medium run, we said, is the process by which output converges back to potential output. How does that happen? Well, it involves the central bank-- but not the central bank doing crazy things; the central bank reacting to what the economy tells it it needs to do. Here is a central bank that before was doing whatever setting of the interest rate delivered equilibrium output here, with inflation around 2%, consistent with its target. Now suddenly it finds itself in a situation like this, and inflation starts climbing: 3% one year, 4% the next, then 6, then 9. Well, it's very natural for that central bank, if it's a responsible central bank, to react to that.
And the main reaction the central bank can have is to raise interest rates. That's exactly what starts happening: the central bank finds itself with inflation that, in this case, is accelerating, and it starts increasing the interest rate. This process of accelerating inflation only stops when output is back at potential output. And we can define implicitly the interest rate at which that happens. We call it the natural rate of interest-- sometimes the [INAUDIBLE] rate of interest, the neutral interest rate, R star; it has lots of names. This interest rate is simply the one that gives us an equilibrium output in our IS-LM diagram equal to potential output. That's all that Rn means; here I'm defining it implicitly as the rate that delivers equilibrium output equal to potential output. OK, is all that clear? Yes, OK. You're going to have a big chunk of your current pset on this model, and that's a good thing-- you'll see it in the next one too, because, again, I think this is important. Now let's talk about the difference between anchored and unanchored expectations. Suppose we started at 2%, and then we found ourselves in a situation like that. Inflation starts building up; we get to 9% or so. Now the Fed gets scared and starts raising interest rates. That moves us up, lowering output, and as output falls, the output gap shrinks, and therefore the change in inflation shrinks. But inflation keeps rising in this particular model, because expected inflation is unanchored. Suppose that eventually the Fed gets to this interest rate here, so we reach a situation like that, and I ask: has the Fed solved the problem now? Finally the interest rate equals the natural rate of interest; that tells me output equals potential output, which tells me inflation is no longer changing. The problem is that we already got to 9% inflation along the way. With unanchored expected inflation, inflation stops rising here-- but stopping is not enough, because it leaves us with inflation at 9%. That means the Fed, in order to bring inflation back to 2%, needs to go into this region, so that inflation comes down from 9% to 7%, 6%, 5%, and so on. So if expectations are unanchored and inflation overshoots, you're going to have to cause a recession, probably a severe one. There is no way around that. And that's what the Fed has been struggling to avoid. We are in a situation-- not only in the US, but in the US in particular-- where inflation is way above the target level, but expected inflation has been more or less stable. When people talk about restoring reasonable levels of inflation with a soft landing, they mean you don't need to cause a recession to bring inflation back to 2%; you can bring it down smoothly. With this model of expected inflation, that doesn't work. But if the central bank has credibility and expectations remain anchored-- people continue to believe the Fed will go back to 2%-- then you don't need to cause a big recession.
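A small simulation makes the point about unanchored expectations vivid. With pi_t = pi_{t-1} + a * gap_t, closing the gap merely freezes inflation wherever it is; only a spell of negative gap brings it back down. The slope a and the gap path are invented for illustration.

a = 0.5                 # assumed Phillips-curve slope
pi = 2.0                # start at the 2% target
gaps = [2.0] * 4 + [0.0] * 3 + [-1.5] * 4    # % output gap each period
for t, gap in enumerate(gaps):
    pi += a * gap       # unanchored expectations: inflation drifts
    print(f"t={t:2d}: gap={gap:+.1f}%  inflation={pi:.2f}%")

Four periods of overheating push inflation from 2% to 6%; three periods at potential leave it stuck at 6%; only the engineered recession at the end walks it back toward 3%.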
If expectations are unanchored, on the other hand, you need to invest in bringing them down, and the only way to invest in that is to cause a recession. That's the reason I said central banks worry so much about keeping inflation credibility: otherwise they need to overshoot in order to restore long-run balance. Good. Now, you may wonder: this looks pretty simple-- if you have a problem like this, just move quickly to that point there and the problem is over. You don't let inflation build to 9%; you react immediately. The problem is-- there's a famous line coined by Milton Friedman-- that monetary policy acts on the economy with long and variable lags. First of all, it's very difficult at any point in time to know where potential output is and what the natural rate of unemployment is. You sort of sense it, but the truth is the only way you really know is by looking at inflation: it's inflation that tells you which side you're on. You have some historical averages and so on, but these things move around, so it's difficult to know at any point in time whether you are at Rn or not. The second thing is that in the diagram everything happens immediately: if I move the interest rate, output jumps here right away. That's not the way monetary policy operates in practice; it takes time to affect the economy. So the situation until, I would say, last week was that the Fed knew inflation was still too high, but it also knew it had already hiked rates very aggressively. And since there are lags between the increase in interest rates and the decline in output, the Fed's concern was: it's clear that I still have inflation, but it may well be that when all this tightening finally hits the economy, it hits too hard and we end up in an unwanted recession. That was the concern. What is happening right now is a bit of a worry that we got to this point because things were moving very slowly, for a variety of reasons-- and now something broke. The question is whether, now that something has broken, the economy will decelerate very, very fast. That's what makes monetary policy much more difficult than this little diagram: all these lags, uncertainties, and non-linearities, and suddenly things happen. Let me tell you when things can go really, really wrong. It's not the issue now, but we were very close to it during the Great Recession, and Japan has experienced several episodes like this. It's the following. Suppose inflation is low-- typically these things happen in situations where inflation is low-- and, for whatever reason, the natural rate of interest is negative. So you have inflation close to 0, say, and a negative natural rate of interest. What's the problem? Suppose you face the zero lower bound. The zero lower bound means you're not going to reach that negative rate: with inflation around 0, the best you can do is set the nominal interest rate to 0, so the real interest rate is around 0-- above the natural rate. And the problem is that at that rate, you generate negative inflation.
But if you generate negative inflation and the nominal interest rate is fixed at 0, you now get a positive real interest rate, because the real interest rate equals the nominal interest rate, which is 0, minus expected inflation-- and if expected inflation is negative, minus a negative is positive. So you wanted something negative, and you end up with something positive. Now you have a big gap here, you fall into deflation, and as inflation gets more and more negative, the real interest rate keeps climbing, so you keep moving further and further away from the natural rate of interest. That's something very scary for an economist: the deflationary trap. And that's the way you get into deep recessions. In fact, that's what happened during the Great Depression in the US. During the Great Recession, we came close to it, but we didn't quite get there, because a lot was done precisely to prevent a repeat of the Great Depression. One of the biggest problems in the Great Depression was that monetary policy-- which was not yet against the zero lower bound-- was very slow to react. They were in a situation like this diagram, but they kept the interest rate high and moved very slowly, and when they tried to catch up, the economy went into a deflationary environment. You can see that here. The Great Depression starts around 1929. Unemployment was initially low, the nominal interest rate was around 5%, and the inflation rate was around 0, so the real interest rate was around 5% as well. Then things got worse: unemployment began to climb very rapidly, and the Fed began to lower the nominal interest rate, from 5% to 4%-- unusually low rates for the time. The problem is that inflation by then was -2.5%, so while they were lowering the nominal interest rate, the real interest rate was rising, and unemployment was rising accordingly. It kept going: they cut the interest rate more aggressively, but the economy got into real deflation, -10% or so, and the real interest rate kept climbing. At one point there was a very poorly timed interest rate hike, which was a disaster for unemployment because it pushed real interest rates even higher. Eventually the US got out of it with a bunch of non-monetary policies. So the Great Depression was very much a story of this kind: the economy fell into deflation, the real interest rate began to climb, and that pushed the economy deeper into recession, with unemployment higher and higher. At some point monetary policy just didn't work, and that's the reason you essentially have to do massive fiscal policy to get out of it.
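The trap dynamics can be sketched in a few lines: with the nominal rate pinned at zero, deflation mechanically raises the real rate, which widens the gap, which deepens deflation. All parameters here are illustrative, not calibrated to any episode.

a, b = 0.4, 1.5          # assumed PC slope; gap sensitivity to (r - rn)
rn = -1.0                # natural real rate, negative (in %)
pi = 0.0                 # inflation starts at zero
for t in range(5):
    r = 0.0 - pi         # real rate = nominal rate (stuck at 0) minus inflation
    gap = -b * (r - rn)  # IS side: gap widens as r rises above rn
    pi += a * gap        # unanchored Phillips curve
    print(f"t={t}: r={r:+.2f}%  gap={gap:+.2f}%  pi={pi:+.2f}%")

Each pass through the loop leaves the real rate further above rn and inflation more negative-- the spiral the lecture attributes to the Great Depression.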
Let me now talk about some of the shocks we have discussed, in the context of this more complete model. There are two broad, canonical types of shocks or policies to analyze in these models. Some of them are aggregate demand shocks or policies-- those are things you know. Aggregate demand policies move the IS curve-- or the LM-- but they operate in the goods market. So here is one case: a contractionary fiscal policy, a fiscal consolidation. Suppose you start at an equilibrium level of output equal to potential output, but we're running very large deficits and you want to reduce the deficit. You cut government expenditure, you increase taxes-- that moves the IS to the left and brings output below potential. Go to the Phillips curve: you get deflationary forces; inflation starts declining. So in the short run, you get exactly what we had in lecture five or six: a contraction in real output. But on top of that, inflation starts coming down, perhaps even going negative. As a result, the central bank will react, and that reaction stops in the medium run, when output goes back to its initial level, potential output. So the new point in this picture, relative to what you already knew, is that the short-run response is very much the type of response we had early on, but in the medium run a fiscal consolidation does not reduce output. What a fiscal consolidation does in the medium run is reduce the real interest rate: you see here, output eventually returns to its initial level with a much lower real interest rate. The point is that this path-- output falling initially, then coming back-- can be very painful; it can mean a recession that lasts a long time. Or it can happen faster, depending on conditions. Many policy disagreements among people who understand what they're talking about have to do with the speed at which these things happen. People can agree that you need a fiscal consolidation, but one person may think the adjustment will be very slow-- I don't want to incur a very deep recession for a long time just to adjust the fiscal deficit a bit-- and others may think the opposite. It's mostly about the speed. But the short-run response to a fiscal consolidation, or to any aggregate demand contraction, is different from the medium-run response. And again, the signals telling the central bank that it needs to move the interest rate all come from this block here-- inflation falling. That tells the central bank: oops, we may have a problem.
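A linear IS sketch shows the medium-run arithmetic of a consolidation: with output pinned at potential, a cut in G must be matched by a fall in r. Every coefficient below is invented purely for illustration.

def r_for_potential(G, T, Yn, c0=100, c1=0.6, d0=80, d1=0.15, d2=20):
    # IS: Yn = c0 + c1*(Yn - T) + (d0 + d1*Yn - d2*r) + G, solved for r
    return (c0 + c1 * (Yn - T) + d0 + d1 * Yn + G - Yn) / d2

Yn, T = 1000.0, 200.0
print(f"r before: {r_for_potential(G=250.0, T=T, Yn=Yn):.2f}%")  # 3.00%
print(f"r after : {r_for_potential(G=200.0, T=T, Yn=Yn):.2f}%")  # 0.50%

Output is unchanged at Yn in both cases; the consolidation shows up entirely as a lower natural rate of interest, with investment crowded in.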
Another kind of shock, more complicated, and one that played a big role in the recovery from COVID, is a supply-side shock-- for example, an oil shock, the price of energy going up. How do you analyze that? A supply-side shock is not something that goes into the IS-LM part of the model. A supply-side shock, remember, is something we studied when we analyzed the natural rate of unemployment; it affects the supply side of the economy. We can model it as an increase in the markup, and we know that an increase in the markup increases the natural rate of unemployment. An energy shock, especially a persistent one, operates like a markup shock, and that, we know, increases the natural rate of unemployment. So what happens to potential output? It goes down, of course: potential output equals the labor force times 1 minus the natural rate of unemployment, so if the natural rate of unemployment goes up, potential output goes down. And that is not a shift in the top diagram-- it's a shift in the lower diagram. We used to have this Phillips curve; now the Phillips curve has shifted to the left, because we have a new natural rate of output. And remember, the natural rate of output is the level at which output generates no inflationary pressure. So what happens with this shock? Suppose the economy was at this equilibrium and it gets hit by an oil shock. In the short run, if no one reacts, nothing happens to output-- if I don't move anything in the top part of the diagram, I'm not moving equilibrium output. In the short run, equilibrium output is determined exactly as before. So with a markup shock, if nobody moves, nothing happens to output in the short run. But what happens that we may not like? AUDIENCE: [INAUDIBLE] PROFESSOR: Exactly. The Phillips curve went up. Before, that level of output was consistent with no change in inflation; now it's not. We get an increase in inflation. So the first place you see the effect of the oil shock is inflation picking up-- remember the price of gasoline going up and all that. You see it there before activity falls. That's what happened with the Phillips curve in the '70s and '80s, too: you can see lots of shocks of this kind, where initially unemployment didn't move much but inflation kept climbing. Now, obviously, if the shock persists-- central banks won't react if they think it's very short-lived, but suppose it is persistent and they think it's persistent-- what's the reaction? This shock means the natural rate of interest has gone up: for the same IS, I need to bring equilibrium output down, which means I need a higher natural rate of interest, a higher R star. So what the central bank needs to do is start increasing interest rates. That's the natural response. A lot of why inflation picked up so much during COVID is that we had a shock of this kind. It wasn't energy at first-- the energy shock came later-- but it was supply-side: transport costs, the production network breaking down, and things like that. But they thought it would be very temporary. Understanding this model, they thought: this curve will come back down by itself, so better not react right now-- why cause a recession if the curve will shift back on its own? Well, the problem is that it didn't come back that fast. Some things recovered fairly quickly; others did not. In particular, labor force participation did not come back sufficiently fast. That's the reason we stayed too long in a situation like this, and that's one of the main reasons inflation crept up in the US, and in other places in the world as well.
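In numbers: a markup shock that pushes the natural unemployment rate from 4% to 6% lowers potential output, so an unchanged level of activity becomes a positive gap and inflation starts rising. The labor force, slope, and rates below are all assumed for illustration.

L_force, a = 100.0, 0.5

def Y_potential(u_n):
    return L_force * (1 - u_n)   # Yn = L * (1 - un)

Y = Y_potential(0.04)            # output sitting at the old potential
for u_n in (0.04, 0.06):         # markup shock: un rises from 4% to 6%
    gap = (Y - Y_potential(u_n)) / Y_potential(u_n)
    print(f"u_n={u_n:.0%}: Yn={Y_potential(u_n):.0f}  "
          f"gap={gap:+.2%}  d(pi)={100 * a * gap:+.2f} pp")

Nothing moved in the IS block-- output is still 96-- yet inflation is now drifting up by about a percentage point per period, which is the '70s pattern the lecture describes.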
In Europe, the big reason inflation picked up is that this curve moved up a lot. Why is that? AUDIENCE: [INAUDIBLE] PROFESSOR: Exactly-- they had a massive energy shock, and that moved the curve up a lot. Good. I want to return now to what is going on right now-- meaning the last few days. It turns out that the diagram I used for the fiscal consolidation shock can also be used to understand a little of what is happening with the Silicon Valley Bank event. Remember, we modeled that as a credit shock-- as the risk premium x going up. Well, x going up does exactly this: it moves the IS to the left. A shock to x, a panic of the kind we saw, moves the IS to the left. Why is that? AUDIENCE: [INAUDIBLE] PROFESSOR: Exactly. The safe interest rate doesn't go up, but the cost of borrowing does, because firms have to pay this extra risk premium. For any given safe real interest rate, firms pay more, which means less investment at any given level of the safe rate, so the IS moves to the left. And if that happens, you start getting deflationary forces. Again, in the diagram all this happens very quickly; in reality, I told you, there are lots of lags. But markets anticipate what will happen. In the immediate run, output doesn't collapse and inflation doesn't collapse, but markets realize there are long lags and the shock has already happened, so these things are likely to happen-- and the Fed is likely to react. What should the Fed's reaction be if this turns out to be persistent? How do you get out of a shock like that if you really want to go back there? You cut interest rates. In the case of the US, they were hiking interest rates because we're dealing with high inflation; this tells you, at the least, to slow the pace of hiking. They don't do it immediately-- they meet next week-- but the markets don't need to wait for the Fed. They anticipate what the Fed is likely to do, and they start betting on that.
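The x mechanism in one small sketch: firms borrow at r + x, so a jump in the premium tightens conditions even with the safe rate untouched, and the offsetting policy move is roughly a one-for-one cut in r. Numbers are hypothetical.

def borrowing_rate(r_safe, x):
    return r_safe + x            # what firms actually pay

r_safe = 4.5
for x in (1.0, 3.0):             # panic: premium jumps from 1% to 3%
    print(f"x = {x:.1f}%: firms pay {borrowing_rate(r_safe, x):.1f}%")
# cutting r by the 2-point rise in x restores the original borrowing cost
print(f"offset: r = {r_safe - 2.0:.1f}% keeps firms paying "
      f"{borrowing_rate(r_safe - 2.0, 3.0):.1f}%")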
So let me show you a bunch of charts suggesting that a lot of people in the market understand these mechanics, because prices are moving exactly that way. This first one is the one-year-ahead inflation expectation as traded in the market-- it's called the one-year inflation breakeven. These things are traded, and you can trade expected inflation at all the maturities you want. Here is what the market was expecting before the shock: we were getting hotter and hotter numbers, and expected inflation, as priced in the market, was climbing one year out. Then the shock came, and look what happened to expected inflation. Boom-- it collapsed. Why? Because people thought: this shock leads to that. This bounce here is markets getting a little excited yesterday, a risk-on environment; today they lost all of that again, for a shock I'll tell you about in a few minutes. But the point is that expected inflation was getting a little out of control, the x shock came-- the panic shock-- and expected inflation immediately declined, because people anticipated something like this. The market anticipates something like that. Next chart: this is the market's expected next hike. On March 22nd, the Fed will decide on the change in the interest rate. Remember, as I said in the previous lecture, the Fed had been moving in steps of 50 basis points initially-- very fast-- but a couple of meetings ago it slowed to a pace of 25 basis points, precisely because it wants to wait and see; there are long and variable lags, and they have increased rates a lot already. So if you look here, around February 22, and ask the market what the next hike will be-- and by answers I mean what is priced, what is traded in these financial instruments-- the average answer was 30 basis points. A 30 means most people thought the hike would be 25 basis points and a few thought it would be bigger, the possible moves being 25, 50, 75. So the 30 meant almost everyone expected 25, with a few concerned it could be higher. Then what happened here? We started getting very hot inflation numbers, and all of a sudden the pricing changed dramatically: we went to 45, which means most people in the market then thought the hike on March 22 would be 50 basis points, with a few saying stay at 25-- that's why it sits a bit below 50. It was almost fully priced in. When people say something is priced in, this statistic is what they're talking about: what hike is priced in. Then look at what happened with the Silicon Valley Bank event: a collapse in this thing. Now it's trading at around 13 basis points, which means many traders think there will be no hike at all. A few days ago almost everyone thought it would be 50 basis points, a big hike; now many think there will be no hike whatsoever, and some think 25. Actually, it's almost 50/50 between 25 and 0-- I think today it's a little lower than that. But had you asked anyone around here a week ago whether there was any chance of zero, there would have been no one, literally-- that outcome wasn't being traded. Well, you see: things happen, accidents happen. So that's where we are.
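Backing out what's priced in is simple arithmetic if you assume only two possible outcomes per meeting-- a simplification of what traders actually do, but it matches the magnitudes in the lecture.

def prob_of_high(priced_bp, low_bp, high_bp):
    """If E[hike] = p*high + (1-p)*low, solve for p."""
    return (priced_bp - low_bp) / (high_bp - low_bp)

print(f"{prob_of_high(30, 25, 50):.0%} chance of 50bp when 30bp is priced")
print(f"{prob_of_high(45, 25, 50):.0%} chance of 50bp when 45bp is priced")
print(f"{prob_of_high(13,  0, 25):.0%} chance of 25bp when 13bp is priced")

The 30bp reading implies roughly a 20% chance of a 50bp hike; 45bp implies about 80%; and 13bp between 0 and 25 is the almost-50/50 the lecture mentions.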
So there's lots of deposits-- despite the insurance, the blanket insurance that is implicit, at least at the moment-- lots of deposits from these sectors are moving to these major banks here. That's called a flight to quality. Now, the problem of that for the economy as a whole is that small banks and regional banks play a huge role in lending. I think a little more than 50%, for example, of the commercial and industrial loans are made by small banks. 80% of the mortgages are given by small banks. So it has a big potential consequence. What I'm trying to say is that x may stay high for quite a bit of time. And that's the reason there is anticipation that this will have macroeconomic consequences. And as a result of that, that the Fed will react, that inflation will change, and all that. So that's where we were at on Monday. Remember when we had the lecture, I was telling you more or less that story. What is this? You can read it there, but it may not mean much to you. But what I'm highlighting is this. This is pretty big-- a 35% decline. This is an equity. It's a share. So this is the value of the equity of a pretty major bank, Credit Suisse. So Credit Suisse has been in trouble for a while. But today it got into really big trouble and saw a massive collapse in its equity shares. In fact, they stopped trading for a while and so on. This thing here-- you know, I updated your slides many times today, because I began to look at this event around here. And then the thing kept going and they stopped, kept going, and so on. And I'm not sure where it's at now. I stopped-- at what time did I stop? 9:00 in the morning. And I was awake at 4:45 today. So that tells you this was pretty intense. But what this is is the credit default swaps on Credit Suisse. A credit default swap is-- whenever there's a bond issued by a bank, you can buy insurance on that bond. So if the bond defaults on you, you then use the insurance and you get paid. So these things for banks are normally very small numbers-- bigger for Credit Suisse than for other big banks, because they've been in trouble for a while, all sorts of trouble. But look at that spike there. I mean, that's pretty big for these kinds of things. It's not Lehman yet, but big. So anyway, that caused a little panic today. This is the stock prices of the main European banks. They sold off today. So look at this. There was a little rally yesterday and so on. I mean, this is the decline as a result of the US problem, the Silicon Valley Bank one. Then a rally yesterday. And then Credit Suisse happened. And you had a big decline in all the major banks in Europe. The US banks are also declining. But the decline was bigger for the major banks. The VIX-- remember I told you last week, last Monday, about this indicator of fear in the market, which is really the price of put options? I'm simplifying things. I mean, protection against big declines in the equity market. Again, it began to spike very, very sharply as a result. This is what happened with the US event. Then yesterday we got a rally, a risk-on type thing. And then today we got a new event. Look at this. I like this picture. What is this? Let me tell you what the blue line is. The blue line is the market's expected federal funds rate at different dates in the future. Today the federal funds rate is around 4 and 1/2. And this is what the market expects. So they expected the Fed to continue to hike interest rates and to reach a peak around June 14, at that meeting, of the order of 5.3% or so. That's the average.
There's lots of dispersion. Some people are betting on 6%, but that's the average. That's what people expected. The yellow bars are the number of hikes that you're likely to see. So you're likely to see one hike in the next meeting, another one in the next meeting, and another one in the next meeting. And then stop and begin sort of cutting rates. That was the expected path on Friday-- the 10th was Friday. More or less around there. Maybe Thursday, I don't remember. But anyway, that was the expected path. So still, hike rates, reach a peak of 5.3%, and still pretty high interest rates by the end of the year. That was the expected path. That's the way it looks now. Very different. Now people are expecting very small changes-- I showed you, it's like 13 basis points that people expect. Still people expect sort of a hike, but a small one. Now they expect the peak to be sort of in May, and then the Fed to start cutting very aggressively, ending the year with much lower rates than today. So this is exactly what I was telling you before. The market is anticipating that we had a huge contraction in the IS because x went up a lot. The major consequence of that is going to be lower inflation. Yes, we have a problem. I mean, if the US did not have 5.5% inflation today, I can assure you that the Fed would have come out and said, we cut the rates right now. The only reason they're not cutting right now is because we have two problems. We have the financial panic on one side and we have the high inflation on the other side. So they have to balance these two forces. But the expectation of the market is that the balance of the two forces is going to be dominated by the contraction in aggregate demand much sooner than people were expecting. So that's what the market is pricing at the moment. And what I'm saying is, I was trying to highlight that this is very consistent with that. It's just the market looking ahead at what is likely to happen. It started from a situation which is a little bit more complicated, again, because we already had high inflation. Well, I do not know, really. I mean, it is, on one hand, more complicated because we have a problem of high inflation. On the other hand, having high inflation allows you to cut the real interest rate much more aggressively. Because if you bring the nominal interest rate to 0 and you have inflation of 5%, that allows you to cut the real interest rate to -5%. Well, if you start with a situation where your inflation is 0, you don't have any space to cut the real interest rate. So the Fed can be very aggressive here. And the only reason it's not being very, very aggressive-- they were very aggressive in terms of supporting deposits and all that. But they can be very aggressive in terms of interest rate cuts if the need arises. Hopefully we won't need it. But they have the space, because we're starting from a much higher level of inflation. That helps. It hurts in the sense that it will delay the reaction. But it helps in the sense that they have much more space for policy. What is this? This is just one-year interest rates. That reflects the previous picture, as well. One-year-out rates were over 5% a few days ago and now are in the low 4s. And this picture I kept updating, as you can see. It was really dropping fast. That's a big change. The one-year rate dropped 60 basis points-- that's a big change. So that's where we're at. And from the next lecture, I'm going to start with growth. But any questions about this? OK. I don't want to start growth now in four minutes.
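As a back-of-envelope illustration of how those "priced-in" numbers map into probabilities-- a sketch only, assuming just two possible outcomes per meeting and ignoring the daycount and meeting-date details of actual fed funds futures:

```python
# Back out an implied hike probability from a priced-in expected hike.
# Illustrative sketch: assumes exactly two possible outcomes per meeting.

def implied_probability(priced_bp, low_bp, high_bp):
    """If the market prices an expected hike of priced_bp and the only two
    outcomes are low_bp and high_bp, the implied probability p of the high
    outcome solves: priced = (1 - p) * low + p * high."""
    return (priced_bp - low_bp) / (high_bp - low_bp)

# February: 30bp priced in, outcomes 25 or 50 -> 20% chance of a 50bp hike
print(implied_probability(30, 25, 50))   # 0.2

# After the hot inflation numbers: 45bp priced in -> 80% chance of 50bp
print(implied_probability(45, 25, 50))   # 0.8

# After SVB: ~13bp priced in, outcomes 0 or 25 -> roughly a coin flip on 25bp
print(implied_probability(13, 0, 25))    # 0.52
```

The last number is the "almost 50/50" between 0 and 25 basis points mentioned above.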
So the set of topics we're going to discuss from the next lecture on are very different, subject to not having any major events. If there is a major event, I'm going to reshuffle things so we can talk about financial panics and things of that kind. Let's hope that we can stick to the program and do growth next week. OK, good. Have a good weekend.
Lecture 14: Saving, Capital Accumulation, and Output
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: I couldn't connect. So the Fed just hiked by 25 basis points, as people expected-- this is the way it works. When there's lots of uncertainty, essentially, the Fed starts communicating what it's going to do, and the communication was very clear that 25 basis points was to be expected. And apparently-- I was reading this right now. It was released 3 minutes ago, 4 minutes ago-- they also said that further hikes are no longer guaranteed. So remember that we saw several expected hikes for the next few months before the SVB mess. And right after it, we saw the whole thing declining. And at least the statement is consistent with that. So there we are. So no big uncertainty. I imagine the markets are rallying or something like that, at least for the next 10 minutes or so. But we shall see. Anyway, today we're going to really start-- I'm going to show you the first model of economic growth. And before I do that, who knows who that person is? No, no clue? Actually, he's Robert Solow. He's an emeritus professor at MIT. Together with Paul Samuelson, essentially, he's responsible for building the Economics Department at MIT. And he won the Nobel Prize in 1987-- I was a student here then-- primarily for his work on economic growth. And so what we're going to do in the next two or three lectures is essentially things that Bob Solow developed many, many years ago. The basic mechanism-- you remember that we had this Keynesian cross before, where we had this multiplier in the goods market and aggregate demand feeding into income and so on and so forth. That was the star mechanism in short-run macro. In long-run macro, growth theory, this is the key mechanism, and you can think of it as the following. At any point in time, an economy has factors of production-- primarily, labor and capital. Capital is a stock. Labor is more or less fixed, so it depends on population growth, things that are difficult to control, or that are not really that endogenous to economics-- not, at least, in current times. Many centuries ago, yes, they were. We had these Malthusian theories, in which population growth determined growth because of food scarcity and stuff like that. But that's no longer the case, fortunately, in most parts of the world. But what can change over time, and quite a bit, and depends on economic decisions, is the capital stock. At any point in time, there is a certain capital stock, which, combined with labor, gives you a certain output. Output is income. Part of that income will be saved, as we have seen. And those savings will be used for investment. But investment is nothing else than capital accumulation. So this income will lead to savings, which will fund investment, which will change the stock of capital, will feed into the capital stock, which will feed into income, and so on. All this is happening very slowly, because the capital stock accumulates slowly. But this is what is happening. And so all the models we're going to look at-- certainly the model we're going to look at in this lecture-- are all about this mechanism. So let's remember what we did in the previous lecture. And I'm going to assume that population is constant. I'm going to relax that at the very end. But assume that the population is constant and equal to N. And remember, we're not worried about unemployment and stuff like that here. And so output per capita or per person is Y over N.
And remember we had a production function f of K and N. Then, because of constant returns to scale, we could divide everything by N on both sides. And we ended up with this relationship. So output per person is an increasing function of capital per person. It's an increasing function of capital per person, but it's also a concave function of capital per person. Why is it concave? That is, why is it increasing at a smaller pace? Yeah? AUDIENCE: It's decreasing marginal product. RICARDO CABALLERO: Decreasing marginal product of capital, exactly. For a fixed amount of labor, the more capital you put into production, well, output keeps expanding, but by less and less, because each unit of capital has less and less labor to work with. Perfect. That's very important. Then we're going to work in a closed economy. I haven't opened it. I'm going to do that after quiz 2. And I'm going to assume, also, no public deficits-- so G equal to T. And in that case, then, we know that private savings equal private investment. That's the way we derived our IS curve. So that's not new. I'm going to modify a little bit what we did in the short run, and I'm going to assume that savings is proportional to income. So savings equals little s times Y. Notice that this is different from what we did in the short run. In the short run, remember, we had a C0 floating around. We had a constant in the consumption function. So savings, which was equal to income minus consumption, also had a constant floating around. Now, that constant was important in the short-run model, because we were approximating for a bunch of things that are not related to short-term income-- wealth, the price of houses, and stuff like that. We put all that in that constant there. When you think about the long run, though, most of those things that we excluded there-- asset prices, stuff like that-- tend to scale with output as well. So these are inconsistent on the surface. But if you were to fully work out what is behind the C0 in the consumption function, then this is not a bad approximation. They are not that inconsistent, because you endogenize things that, over the long run, scale with income. I mean, wealth tends to rise with income, and all these things tend to move together-- not at high frequency. You can have all sorts of fluctuations. But over the long run, they tend to scale up together. So that's going to be our saving function. So that means that we know, in equilibrium-- this is not an investment function. We know that, in equilibrium, investment will be proportional to income. So remember, we were going through the box. At the top of the box, we had capital. That led to output. We're doing everything per capita. That led to savings, and that funded investment. So that's what we have. So this growth model is really about these three functional forms and then a dynamic equation for the stock of capital. So the evolution of the stock of capital-- capital will increase because of investment. That's what investment is. It's an increase in the stock of capital. But it will also decrease as a result of depreciation. I mean, things do break down once in a while. And different types of capital have different depreciation rates. Equipment depreciates much faster than structures, and buildings, and so on. But we're not going to make those distinctions here.
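For reference, the step being described-- a sketch in standard notation:

```latex
\[
Y = F(K, N), \qquad F(\lambda K, \lambda N) = \lambda\, F(K, N)
\quad \text{(constant returns to scale)}.
\]
\[
\text{Setting } \lambda = \tfrac{1}{N}: \qquad
\frac{Y}{N} = F\!\left(\frac{K}{N},\, 1\right) \equiv f\!\left(\frac{K}{N}\right),
\qquad f' > 0, \quad f'' < 0 ,
\]
```

with f'' < 0 capturing the decreasing marginal product of capital just mentioned.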
But you see, this tells you the capital stock at t plus 1 is equal to the capital stock we had before, minus what depreciated of that stock of capital, plus any new investment we do today. In per-worker terms-- and remember that, for now, I'm keeping population growth-- not population growth constant-- yeah, constant, but equal to 0. So population is constant. I can divide both sides by N, and I get that capital per worker-- per worker or per person-- is equal to this expression here. I did two things here. I divided by N, and I replaced this I, this investment, with savings, because I know, in equilibrium, they have to be equal. So I have that. I can rewrite this. Just subtract Kt over N on both sides, and then you get: the change in capital per person is an increasing function of savings and a decreasing function of depreciation. So the last step that is important in this model-- here, I have, essentially, a difference equation for capital, but we have output per capita on the right-hand side. But it turns out that I know that output per capita, per person-- I said per capita, per person. They're the same thing. Per worker-- it's the same thing in this part of the course. So this is an increasing and concave function of capital per person. So this is-- I would say it's the fundamental equation of the Solow growth model. It says the change in the stock of capital increases with investment, of course, and decreases with depreciation. And both of these expressions here are increasing functions of the stock of capital per person. So let's try to understand what is in here. So why-- so this is linear, obviously, because the depreciation is linear. Say you lose 5% of your stock of capital every year because it breaks down. Obviously, the more capital per person you have, the more units of capital you're going to lose. This is units of capital per person. You have a larger stock of capital, you're going to lose-- 5% of a larger number is a larger number. And this is proportional. It's linear. Now, this one-- remember, this comes from the saving function. And this term here is equal to income per person. Now, suppose that you start in a situation where the capital stock is relatively low, and this is positive. What does it mean that this is positive? I mean, the implication of this being positive is that the stock of capital per person will be growing. But what does it mean that it's positive, in words? I mean, you have a stock of capital. There are things that reduce the stock of capital, and there are things that increase the stock of capital. This is the thing that increases the stock of capital. That's the thing that reduces the stock of capital. So if this is greater than that, what does that mean? That means it is positive. But in words, what is happening? And we simplify, but remember, this is just investment per person. Well, this just says that in this economy, there is more investment than destruction of capital due to depreciation. That's what this means. This is investment. And this being positive means that investment, which is a function of saving, the saving rate, and stuff like that-- it's equal to the funding available for investment-- is greater than the depreciation of the stock of capital. Another way of saying it-- you need a minimum level of investment in an economy to maintain the stock of capital.
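Written out, the equations being pointed to-- a sketch, with constant population N:

```latex
\[
K_{t+1} = (1 - \delta)\, K_t + I_t, \qquad I_t = s\, Y_t ,
\]
\[
\frac{K_{t+1}}{N} - \frac{K_t}{N}
  = s\, f\!\left(\frac{K_t}{N}\right) - \delta\, \frac{K_t}{N} .
\]
```

The first term on the right is investment per worker; the second is depreciation per worker. The sign of the right-hand side tells you whether capital per worker is growing or shrinking.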
The minimum level of investment you need to maintain the stock of capital is equal to the depreciation. So if 10 machines break, you need to invest at least 10 machines in order to keep the stock of capital constant. Now, if this is positive, it means that you're investing more than the machines that are breaking down. Now, suppose you start in a situation where that's the case. So that means the stock of capital is growing. Suppose I ask you, in the next period, do you think the gap will be larger or smaller than it used to be? Actually, that's not a great question, because I'm not doing it in the right units for that. Let me ask you a variation of that question. Suppose we keep going. After a while, do you think that number will get larger or smaller? Let it run for quite a while. Do you think that number will-- so remember what I'm saying. We start from some stock of capital. This is positive. If this is positive, it means that the capital stock is growing. That means this guy is growing, and that guy is growing, and they're growing equally. But after a while, do you think this number will get smaller or bigger-- after a long while, just so the answer doesn't hinge on my approximation. AUDIENCE: Smaller because of the decrease in [INAUDIBLE]. RICARDO CABALLERO: Exactly. It's going to get smaller, because this guy keeps growing linearly [INAUDIBLE] and this one is not. It's concave. At some point, you need to put in a lot of capital for income to keep rising, and therefore for saving to keep rising, and therefore for investment to keep rising. And at some point, yes, it won't be able to really grow. I mean, you're going to be using all your investment, really, to maintain the stock of capital. That's the logic of the Solow model. And it's all in this diagram. So this is the diagram you should really, really understand well-- control it, and play with it, and all that. It's the equivalent of your IS-LM model in the first part of the course. So look at what we have here. So I'm going to plot output per worker-- per worker, per person-- against capital per worker here. And so this red line here is just the depreciation, this term here, and thus is a linear function of capital per worker. That's what it is. The blue line here is output per worker, which, as we said, is a concave function of K over N. Remember, I showed you that production function in the previous lecture. There you are. What is the green line? It's investment per worker, which is equal to saving per worker. And saving per worker is little s, the saving rate, times output. So it's little s, which is a number like 0.1 if we're talking about the US, and 0.4 if we're talking about Singapore. It varies a lot across countries. So this green line here is nothing else than this blue line multiplied by a number that is less than 1. That's the reason it's lower. OK, good. So the point I was describing before was a point like this. Remember? Suppose that the economy starts at a point like this one, K0 over N. And I want to understand the dynamics of this economy. How will it grow over time? So what you see here is that, at this level of capital per worker, investment is greater than depreciation. So that's exactly a situation where this is positive. That distance here is that. And the reason I said, ah, I'm not going to do any local analysis is because we could have started with a K0 over N over here, and then that number is growing.
But it's growing-- if you were to normalize by the stock of capital, it's declining. But I didn't want to do that then. But now, that's what I-- so let's look at this case. You're in a situation where this is positive. If this is positive, it means the capital stock per worker is growing. So you're moving to the right. In the next period, you're going to be here. So it keeps growing, but by smaller steps. Eventually, the investment is entirely used for covering the depreciation of capital. And at that point, the capital stock stops growing. We call that a steady state, a stationary state. We stop there. So that's the steady state of this model. That means this economy, regardless of where I-- I'll do the analysis from the other side. Suppose that you start from a situation like this. You start with a lot of capital. Well, if you start with a lot of capital in this economy, what happens here? Well, what happens here is that the investment you're putting into the ground in this economy is less than what you need to maintain the stock of capital, which is the depreciation. And that means the stock of capital will be shrinking over time. You're moving that way. So regardless of where you start in this economy, if I ask you the question, "A hundred years from now, where are you?", you tell me, I don't need to know where you start from. I know that we're going to end up around there. Either you start from here, and you go there. Or from here, you go there, and so on. That's the reason we call this a steady state. This is where you converge in the long run. Now, this is already interesting, because it tells you, at this point here, the economy was growing. The capital stock was growing, and output was growing. You see-- if you start from here, the capital stock is growing. Well, output is also growing. You're moving up. So you had growth. That kind of growth we call transitional growth. It goes from one point to another point. It's not permanent growth. It's transitional growth. It's the fact that you were away from your steady state, and then you converge towards your steady state. A lot of the growth we observe, and the differences in growth we observe across countries-- remember, I showed you those downward-sloping curves and all that-- is a result of that. Poorer economies tend to have lower capital-labor, capital-employment, capital-population ratios. And therefore, they tend to grow faster, because they're catching up with the steady state. Very advanced economies that have been more or less in the same place for a long time are moving around there, so there's less catching-up growth. And that's mainly responsible for the downward-sloping curve I showed you within OECD countries and even broader than that. Africa was a little bit of a problem there. OK, so this is an important model for you, an important diagram. Let's play a little with it. So suppose that, at the time-- this is a very simple model. But at the time, the view was that, well, what really supports growth is savings. So economies that save a lot grow a lot. And this sort of makes sense here, because investment, which is what leads to capital accumulation, is entirely funded by savings. It makes sense. You have more savings, you should grow more. OK, so this is something-- we can do an experiment. Suppose you start at a steady state, if you will. And now we increase the savings rate. What moves? Which curve?
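A minimal simulation of these dynamics-- a sketch, using the square-root production function and the s = 0.1, delta = 0.1 numbers from the worked example later in the lecture, so the steady state is K/N = 1:

```python
# Solow dynamics in per-worker terms: k' = k + s*f(k) - delta*k, f(k) = sqrt(k).
# With s = delta = 0.1, the steady state solves s*sqrt(k*) = delta*k* => k* = 1.

s, delta = 0.1, 0.1

def step(k):
    # next period's capital per worker = current + investment - depreciation
    return k + s * k ** 0.5 - delta * k

for k0 in (0.1, 4.0):          # start below and above the steady state
    k = k0
    for t in range(300):
        k = step(k)
    print(f"k0 = {k0}: after 300 years k = {k:.3f}")  # both converge to ~1
```

Starting below the steady state, capital per worker climbs in ever smaller steps; starting above, it shrinks back. Both paths converge to the same point, which is the sense in which initial conditions wash out.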
This is the kind of thing you should know when you work with this model. If I change the saving rate, which curve moves in this model? Let me go one by one. Does the red line move? No. It has nothing to do with savings. It has to do with depreciation. If I move the depreciation rate, that curve will move. Will the production function move? No. So the blue line cannot move. All that will move is the green line, because the green line is the saving rate times the blue line. So if I increase the saving rate, I'm going to move the green line up. That's what we have here. So you see what happens. This was the steady state for this saving rate in this economy. Now, all of a sudden, this economy starts saving more. What happens? This tells you very much the story of Asia. The Asia miracle of the '60s, '70s, and so on is very much something like that-- a little more complicated. A big part of what explains the fast growth of Asia during that period is that something like that happened. Now, why the savings rate increased and so on-- that's all very interesting. But it's not what I want to discuss today. So what happened here, then? This economy was in a steady state. So there was no growth. It was growing at 0 in the steady state, because this says, in a steady state, output per worker remains constant. And since we have no population growth, that means output is not growing, either. The only way you can have that ratio constant with the denominator not moving is that the numerator is not moving, either. OK, good. So now, boom, all of a sudden, we get a higher saving rate. So what happens now? What reacts? So the saving rate goes up. It's a closed economy. It means the investment rate will go up. What happens now? What does that gap tell you? Now you have a positive gap there, which means you're investing more than what you need in order to maintain the stock of capital at the previous steady state. So that means that the stock of capital is going to start growing to the right. And as the stock of capital grows, output per capita also grows. And this will keep happening until you reach the new steady state. So a higher saving rate-- so, important conclusion there. This, as simple as it is, proves something-- that the conventional wisdom that a higher saving rate would give you sustained higher growth isn't really true, certainly not in this model. Eventually, you'll go back to growth equal to 0. When you reach the new steady state, you're going to be growing at 0 again. What is true, though, is that you get, again, what is called transitional growth, because here, you're going to start growing very fast, in fact. And then you're going to keep growing at a lower and lower pace until you go back to 0. But you're going to get lots of growth in the transition as a result of that. And it turns out, in the data, when you're looking at 20, 30 years of data, it's difficult to disentangle permanent rates of growth from transitional rates of growth. This is one of the things that has concerned China quite a bit. They have been growing very, very fast for a long time. But it's very clear that it's becoming harder and harder for them to grow at the type of rates of growth that they had 20 years ago. They had rates of growth of 15% or so. They had a very low initial capital-population ratio-- big population, little capital-- and enormous savings rates.
So they grew very, very fast. They had the green line very close to the blue line, the capital stock very low, so they grew very, very fast. But they have been growing very fast for a very long period of time. So now it's getting a lot harder, because they're getting closer and closer to their steady state. That's the issue. There are other sources of growth, and that's what we're going to talk about in the next lecture. But this is sometimes called the easy part of growth. It's sort of running out in China. And it has run out in all developed economies for quite a while. Good. Is this clear? It's important. I mean, a question like that is practically guaranteed in your quiz. What happens if the saving rate does something? So this is a plot over time. So this is a case in which you were in a steady state. And at time t, the saving rate jumps up, from s0 to some s1 greater than s0. Then output cannot jump. So the saving rate goes up, but output cannot jump at day 0. Why? Why is it that output doesn't jump immediately to the new steady state? This is what will happen to output. You're going to start growing very fast early on, and then keep growing at a slower and slower pace because of decreasing returns to capital. And eventually, you'll converge to the new steady state with a rate of growth equal to 0, like the one you had before this savings shock. And the question I'm asking now is, why does output have to do this? Why doesn't it just jump? What is the only variable that could make it jump? So you need to look at the production function. The production function is a function of K over N. N is fixed. The only thing that can make it jump is if the capital stock jumps, but the capital stock is not jumping. It's a stock. And in order to accumulate the larger stock of the new steady state, you're going to go through a lot of flows. That's investment. Every year, you're going to be adding a little more to the stock of capital. That's the way you grow. It's not that, all of a sudden, your stock of capital jumps. That's very much because this is a closed economy. If you're in an open economy, the capital stock can move a lot faster in a transition, because you can borrow from abroad. You don't need to fund it all with domestic sources. And in fact, that's what typically happens in emerging markets and so on-- they typically borrow for a long time. The problem is that they tend to consume it rather than invest it, and that's the reason you end up in financial crises and so on. But in principle, things could go much faster if you have an open economy and you have capital inflows into your country. But we'll talk more about that five or six lectures from now. But anyways, this is what happens with an increase in the savings rate. So yes, it affects the rate of growth of the economy during the transition, but not in the long run. Now, this transition can be very long. Now, what about consumption? So invariably-- and there's no way around that-- given a technology and so on, if the saving rate goes up, then output per worker will go up. The next question is, what happens to consumption per worker? Does consumption per worker go up or not? You are inclined to say, well, it makes sense that it goes up, because we have more income. If saving is little s times Y, then consumption is 1 minus little s times Y. So income goes up, consumption should go up. And yes, that's the dominant force. But it's not all the story, because remember what I told you.
So consumption here is going to be equal to 1 minus little s times Y. So consumption per person will be that. Remember that what is increasing Y over N there-- what is making this guy go up, which will lead to an increase in consumption per person-- is that this guy, little s, went up. And that's a force in the opposite direction. And in fact, that was one of the debates about the Southeast Asian miracle, which was fueled by lots of savings. So people said, OK, that's wonderful. Your output growth is very fast, but consumption growth is not so fast. And at some point, it may be hurting you. I think that they were right, though, for other reasons. But that picture makes the point. So if your saving rate to start with is very, very low-- this is a general lesson-- then an increase in the saving rate will lead to a strong increase in consumption, because this change is small relative to the big gain you get in output. Because if you have a low saving rate, that also means that the capital stock is very low. And if the capital stock is very low, f prime is very big. This is a concave function, and you're in the steep part of the function. Later on, if saving is very high, you're going to tend to have a very high capital stock. And then, first of all, more capital won't increase output per worker a lot, because of decreasing returns. And this is a big number, so it starts dominating. And that's what you see here. This economy has increased the saving rate. Consumption per worker rises. But at some point, it reaches a maximum, and then it starts declining. I mean, think of the limit. If you save 100% of your income, you don't consume anything. No matter how much is your output, if your saving rate is 100%, then you're not going to consume anything. If you have no savings rate-- no savings, no capital stock, no income-- you're not going to consume anything either. So at least you know these two points. And since you know there are some positive points in the middle, you know that the curve is going to have that kind of shape. It's going to be nonmonotonic, and that's the way it is. So let me just play with a few numbers. It's not that crazy. Suppose you have a production function that gives equal weight to capital and workers-- so this production function. Does the production function have constant returns to scale? It better, because that's what we're assuming. What do you think? Yes. It's K to the 1/2, N to the 1/2. The sum of the exponents is 1, so you know that it has constant returns to scale. So we're going to scale, as before, by N. So we have this: little f of K over N is the square root of K over N, so the change in capital per worker is s times the square root of K over N minus delta times K over N. All that I'm doing is plugging in that function. So here, I'm replacing all these functions by a specific example, one in which f is the square root of K over N. That's a concave function-- the square root. Good. Now, do it as an exercise. If you solve for the steady state-- how do you solve for the steady state? Well, set this equal to 0. That will give you the steady state. The steady state is when capital is not growing anymore. It's when this is equal to 0. When this is equal to 0, I can solve for the steady-state level of K over N from here. And I'm going to call that steady state K star. We typically use stars for steady states in growth theory.
Well, the answer is: the steady-state stock of capital per person is the saving rate over delta, squared. That's what it is. Output per person, which is the square root of K over N, is therefore the square root of s over delta squared, so it's s over delta. So in this particular model, in the long run, output per worker doubles when the savings rate doubles. If I double the saving rate, then output per worker will double. Notice that the stock of capital is going to grow a lot more when you increase the savings rate. It's squared. So in that economy, if you increase the savings rate from 10% to 20%, this is the way it goes. So remember, 10% to 20%-- that means that the new steady-state output per worker will be twice what it was in the previous steady state. So you go from 1 to 2. But it takes a long time. The numbers are [INAUDIBLE]. It takes you 50 years to get to the new steady state. And so that's the time frame we're talking about. So it is true that the saving rate will not change the long-run rate of growth, absent other mechanisms. But you can grow faster than your steady-state rate for quite some time. And again, a lot of the Asian miracle has been of that kind. This is what I was telling you about China before, no? Well, yeah, you can grow very fast, especially if you have a saving rate much higher than 20%-- I mean, 50% or so. But the rate of growth will have a tendency to decline, absent some other miracle. This is a lot of the reason why we have all this fight about technology and so on. It has to do with the fact that the main alternative mechanism for growth is technology. We're going to talk about that in the next lecture. But this force, which is, as I said before, the easy part of growth-- it's very difficult to fight this pattern. So here you have numbers for the steady states. So if the saving rate is 0, obviously, everything is 0-- no way around it. If the saving rate is 0.1, 10%, then, in this model, capital per worker is 1. Output per worker is 1. Consumption per worker is then 0.9. Why? Because you are saving something. So it's 1 minus 0.1, where 0.1 is the saving rate. Suppose you double the saving rate. Well, we know that we're going to double output per worker in this economy. We said that. We're going to go from 1 to 2. The capital stock is going to have to grow a lot more to double the amount of output. Why is that? Decreasing returns. To double output, you're going to have to much more than double capital, because you're going to be fighting decreasing returns. What about consumption? Well, it won't double, because you're doing this by increasing the saving rate. So you get 2 minus, now, 0.2 times 2-- not 0.1-- so you get 1.6, and so on. And the higher you go with your saving rate, the harder it gets for capital to bring along output per capita, and the bigger the drag on consumption, because you need to be saving a lot in order to maintain this high stock of capital that you have. You have a very large stock of capital. That means you need to save a lot just for the sake of maintaining that stock of capital. And so little is left for extra consumption per capita. And so you see here that, for this particular model, when the saving rate exceeds 0.5, output obviously keeps rising when you increase the saving rate, but consumption starts declining. So you're in the declining part. And if you get to 1, of course, there is no consumption. So that's the curve that we trace. Is everything clear?
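Putting the whole worked example in symbols-- a sketch, with f(k) = √k and depreciation rate δ (the table's numbers correspond to δ = 0.1):

```latex
\[
s\sqrt{k^*} = \delta k^* \;\Longrightarrow\;
k^* = \left(\frac{s}{\delta}\right)^{2},
\qquad y^* = \sqrt{k^*} = \frac{s}{\delta},
\qquad c^* = (1-s)\,\frac{s}{\delta} .
\]
\[
\frac{dc^*}{ds} = \frac{1 - 2s}{\delta} = 0 \;\Longrightarrow\; s = \tfrac{1}{2} ,
\]
```

so steady-state consumption peaks at a saving rate of exactly 0.5, which is why the table turns down past that point.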
Now I'm going to-- that's the basic Solow model. And that's a model that, again, you need to control completely. All that I'm going to do now is very simple. I'm going to just modify this model a little bit to add population growth. So what happens-- by the way, for centuries, population growth has been one of the main-- in this model, we concluded that output per worker was not growing. What we're going to conclude in a second is that output per worker will not grow even if population is growing. But that means that output is growing. If population is growing, and output per worker is constant, that means output is growing. And for a long time, growth of output, not of output per worker, was driven by large population growth. And sometimes you get big migration flows into a country that lead to growth and so on. Now big parts of the world have negative population growth. So now we're going through a cycle in which things are going the other way around in many large parts of the world. This is true in almost all of continental Europe, certainly in Japan, also South Korea, China, and even some places in Latin America. So the drag, actually, is against that. We don't have the natural force for growth that we had for many, many years. So let me introduce population growth. So assume now that population, rather than being constant, grows at the rate gN, which could be positive or negative. I'm going to do the example for positive population growth. So there's no equation that changes, in the sense that this is still true. It's still true that investment is equal to savings. It's still true that output is equal to f of K and N, and so on and so forth. The thing that is a little trickier is that, in this model, if I don't normalize things-- in the case here, where population was not growing, I could have just eliminated this N as a constant. And I would have done everything in capital, in the space of capital here, and output here would have been the same, just scaled by a constant N. When I have population growth, I'm not indifferent between doing it one way or the other, because if I do it in the space of K and Y, and population is growing, then all these curves are moving. This is a very unfriendly diagram, because my curves are all moving. As N is moving, everything is moving. So the trick in all these growth models-- and it's going to be even more important in the next lecture-- is to find the right scaling of capital so that there is a steady state, so your curves are not moving around as population grows. It's very easy to find the scaling factor here. It's population. So that's what I'm going to do. But remember what is different here-- what I'm saying is I want to get all my variables scaled by population at each point in time. That's what I want to do, because I know-- I've practiced enough with these things-- that's going to give me a steady state. Now, what is tricky relative to what I showed you before is that, before, I just divided both sides by N, and I was home. Now I can't really do that. OK, let me divide both sides by Nt plus 1. So that's nice. I get my capital per worker at t plus 1. But there are certain things that are not as nice. What I have on the right-hand side is not what I really want. I don't want capital over population next period. My steady state is going to be in the space of capital over population at the same time. That's my steady state. So this is not so nice.
So what I have to do is convert the right-hand side into an expression in the kind of objects I want to have. So what I'm going to do is multiply and divide each of these terms by Nt-- so multiplying by 1. And then I can rearrange the terms in this way, so I get what I want, which is capital per person at time t, all at time t. But then I get this ratio here. And I can do the same for this expression here. Now, what is that ratio? It's population today divided by population tomorrow. Well, it's 1 over 1 plus the rate of growth of population. Nt plus 1 is equal to Nt times 1 plus gN. That's the rate of growth of population. So what I have here is 1 over 1 plus gN. Now, gN is not a big number. So 1 over 1 plus gN is approximately equal to 1 minus gN. So 1 over 1 plus gN-- gN is very close to 0-- is approximately equal to 1 minus gN. So that's the reason this guy became that guy-- approximately that guy. I can do the same here, but it turns out that there's an extra term here, which is equal to s times gN times Yt over Nt. Well, that's second-order. That's the reason I'm going to drop it. It's a saving rate, which is a small number, times a rate of population growth, which is a number like 0.01 or something like that. So that's a small number, so I'm dropping it. That's a bigger approximation than that one, actually, but I'm going to do it. Everything becomes a lot simpler. So this is an approximation. I'm just dropping second-order terms. And once I have that, I have the system I want, because now I have a system for the evolution of capital per worker or per person. And if you see, it looks exactly like what we had before. Remember, this is exactly what we had before-- s times f of K over N. We used to have the subscript t there. Now it's K over Nt. But what is different is that now, rather than having only the depreciation rate here, we have the depreciation rate plus the rate of growth of population. Why do you think we have the rate of growth of population there? Remember the economics behind this expression before. This is what adds to capital per worker. This is what takes away from capital-- given we're doing everything in the space of capital per worker-- oh, that's a typo. There's a t there-- t. So why do you think I have this gN here? Well, I have only one minute, so I don't have time to-- because if I want to maintain the stock of capital per worker, and workers are growing, then I need to be growing the capital. So even if I had no depreciation, if I want to maintain capital per worker constant, and workers are growing, then I need to grow the stock of capital. So in order to maintain capital per worker-- I still need to spend what I used to spend for depreciation of the capital stock. But if I want to maintain capital per worker constant, then I'm going to need more investment, just to make up for that extra component. So now, set gA equal to 0-- your diagram is exactly as before in this space, with A equal to 1 and constant. But this line, the red line here, will have delta plus gN. So it rotates up. So you can play here and see what happens if there is a change in population growth and so on and so forth.
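Collecting the algebra just described-- a sketch of the steps, with the second-order term dropped:

```latex
\[
\frac{K_{t+1}}{N_{t+1}}
  = \frac{N_t}{N_{t+1}}\left[(1-\delta)\frac{K_t}{N_t}
    + s\, f\!\left(\frac{K_t}{N_t}\right)\right],
\qquad \frac{N_t}{N_{t+1}} = \frac{1}{1+g_N} \approx 1 - g_N ,
\]
\[
\frac{K_{t+1}}{N_{t+1}} - \frac{K_t}{N_t}
  \;\approx\; s\, f\!\left(\frac{K_t}{N_t}\right)
  - (\delta + g_N)\,\frac{K_t}{N_t} .
\]
```

Maintaining capital per worker now requires covering depreciation plus equipping the new workers, which is why gN joins delta in the red line.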
It's going to be counterintuitive initially, because, you see, if I increase population growth, this curve will rotate up, and then it will appear as if that leads to negative growth. But you don't get negative growth. In this diagram, you do get that Y over N will decline. But that doesn't mean that you get negative growth. It just means that output is not growing as fast as population. Both are growing; it's just that population is growing faster than output. I'll start from that-- oh, I think it's after your break, so you'll have forgotten everything by then. So I'll do a review of this, and then-- OK, have a nice break.
Lecture 16: Convergence and Cross-Country Variation
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: OK. Let's start. So the plan for today is to wrap up this growth theory section of the course. And I want to conclude by showing you what we can and cannot explain with the models we have looked at up till now. And almost as a matter of accounting, I will tell you which gaps we need to fill in order to explain the great dispersion we see in income per capita across the world. But before I do that, I need to finish the previous lecture. So let me do that. I had shown you this table. Remember, in the complete model-- the model that has productivity growth, unemployment, and population growth-- we concluded that in balanced growth, the following happens. Obviously, if we pick the right normalization-- the right normalization, here, was effective workers, so productivity times population or workers-- and if we normalize the variables by that, we get, obviously, zero growth. That's what it means to have balanced growth, in which all the relevant variables are growing at the same rate. So capital per effective worker will be at a steady state. That's the diagram we plot. Remember, the diagram we plot was output per effective worker against capital per effective worker. And that diagram has a steady state. And at the steady state, or the balanced growth point, we have that the normalized variables grow at the rate of 0. Capital per worker has to grow at the rate gA, because capital per effective worker is not growing. So capital per worker will be growing at the rate gA. The same applies for output per worker. Output per effective worker is not growing. That's balanced growth. Therefore, output per worker will be growing at the rate gA. These are the exogenous drivers-- gA, we assume, is exogenous in this model. And so is gN. Population growth is some constant we take. It's not something we try to explain within the model. But the two drivers of absolute growth will be gA and gN. And so capital in the steady state will be growing at the rate gA plus gN. Output will be growing at the rate gA plus gN. So that's balanced growth. Now, let me give you an example so you can see how to do this. Suppose that our production function is this. We call that the Cobb-Douglas production function: K to the 1 minus alpha times AN to the alpha. Does this function have constant returns to scale? Yes, the sum of the exponents is 1. Now take the log of both sides. The change in the log is the rate of growth. And so you get that the rate of growth of output is equal to 1 minus alpha times the rate of growth of capital, plus alpha times the rate of growth of effective workers. So in balanced growth, gK will be gA plus gN. And therefore, output will also be growing at the rate gA plus gN. OK, so that's what you have in the table. And if you want to look at a variable like capital per worker, or output per worker, then you need gY minus gN. So subtract gN on both sides. On the right-hand side, this one and that one cancel, and I get gA. So that's the way you use that expression to fill in all these blanks. That's the steady state. So I think I stopped right before that-- this slide or at this slide, which is-- if I look at this table, gN is pretty easy to compute in most places.
There are places in the world where we cannot even measure births and deaths, so it's difficult. But in most places, you can measure gN, the rate of growth of population, quite accurately. And the question we have, though, is, how do we measure technological progress? It's also easy to measure the growth in the stock of capital. It's investment minus depreciation. But how do we measure gA, the rate of technological progress? And the first proposal on how to do it was also by Bob Solow. That was the second contribution he had in growth theory-- well, really growth measurement. How do we measure gA? It turns out that the only way you're going to have output per capita or output per person growing over time is due to gA. So we might as well try to measure it, since it's such an important variable for the growth of-- the growth of happiness. If you had to give one indicator of happiness, it would be output per worker. Well, for such an important driver, we need to be able to measure it. And the basic idea that Bob Solow had is essentially something you could have taken from 14.01. It's extremely simple. It says-- there are some assumptions behind this-- in the basic competitive model, you can compute the contribution of each factor of production to output by the payment it receives. So under this assumption, suppose that you're spending, on a worker, $30,000 a year. That means, in a competitive equilibrium in the labor market and so on, that the worker is contributing $30,000 to production. I'm not going to deal with markups and things like that here. But that's the basic idea. You can adjust all these formulas to include markups. But let me not do that. So that also tells you that if you increase employment by 10%, you're also going to increase output by 10% times whatever is the contribution of labor to output. And that's what I have here. The contribution to output of adding workers is going to be that times delta N. I can divide both sides by Y, and here, multiply and divide by N. And I get that the rate of growth in output due to a rate of growth in employment is equal to the labor share-- that's what it's called, the labor share: the wage bill, W times N, divided by total revenue-- times the rate of growth of population, or workers, which is the same thing here. So I'm going to call this gYN-- the rate of growth of output due to the rate of growth in population-- and it's equal to this labor share, which I'm going to call alpha, times gN. I can do exactly the same with capital. And if you have more factors of production, you can do the same for each factor of production. But in this simple example, we have only two factors of production-- labor and capital. So I can do the same for capital. And I can say the contribution of capital growth to output growth is going to be equal to the capital share, which is the complement of the labor share-- so it's total revenue minus the wage bill, divided by total revenue-- times the rate of growth of capital. So the contribution to output growth of the rate of growth of capital is 1 minus alpha, which is the share of capital, times the rate of growth of capital. So that means if I sum the contributions of labor and capital, I'm left with a residual. Whatever rate of growth I have in output-- I know how much I'm getting from labor. I know how much I'm getting from capital. If there is any difference, it must be due to that thing I do not observe, which is technological progress.
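In symbols, the accounting just described-- a sketch:

```latex
\[
\alpha \equiv \frac{W N}{Y} \quad \text{(labor share)},
\qquad
g_Y = \alpha\, g_N + (1-\alpha)\, g_K + \text{residual},
\]
\[
\text{Solow residual} \;=\; g_Y - \alpha\, g_N - (1-\alpha)\, g_K .
\]
```

With the Cobb-Douglas form above, Y = K^(1-alpha) (AN)^alpha, this residual equals alpha times gA, so the rate of labor-augmenting technological progress gA itself is the residual divided by the labor share.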
That's the logic of all this stuff. Go back, for example, to this production function. I'm saying, I know the contribution that K has to output growth. I know the contribution that N has to output growth. Well, the only thing I'm missing is the contribution that A has to output growth, which I don't observe. But I observe output growth. I observe capital growth, employment growth. So I can solve out what technological progress-- A growth-- is. And that's called the Solow residual, by the way. That's the way the rate of growth of A is measured: the rate of growth of output minus the contributions to growth of employment, population, and capital. Any question about that? Nope? Makes sense? Somewhat? OK, good. So anyway, there's a huge industry of measuring these kinds of things, of course. Let me give you an example of how to use this accounting. So China, 1978 to 2017-- that's an episode in which China was growing very, very strongly. On average, China grew at over 7%-- output, over 7%. Now, by the formula I showed you before, if I ask you the question, well, what was behind that growth? How much was the contribution of labor? How much was the contribution of capital? And how much was the contribution of technological progress? Well, there are several things we can measure fairly well. Population growth during this period in China was around 1.7% per year. There was a massive amount of investment. The capital stock was growing at a rate of 9.2% per year. And the residual-- you can compute it using the Solow approach-- is around 4.2%. So the 7.2% is the result of the weighted average of these three things. So that's the basic explanation of growth in China during that period. Now, just looking at this-- don't look at the diagram. What jumps out immediately? If you just look at this part, what jumps out at you immediately? Let me ask you differently. Does it look like balanced growth? Do you think that that's balanced growth? In other words, do you think that China had arrived at its steady state, and that growth is a result of what was happening in that steady state? No. [LAUGHS] But what is it that tells you that that's not the steady state? Balanced growth is when everything is growing at the same rate-- meaning all the endogenous variables, once normalized, are growing at the same rate as the exogenous variables. So what immediately jumps out at me here is-- uh-oh, 7.2%-- that's less than the rate of growth of capital. So I know that there, capital over output was rising. So that's not a steady state. I know that. I can see it in a different way. I know that in a steady state, in balanced growth, the rate of growth of output and of capital should be the sum of the rate of growth of population plus the rate of technological progress. That's 5.9%. But capital was growing at 9.2%, not 5.9%. So I do know that that's a period in which China was growing beyond its steady-state, or balanced-growth, rate of growth. And I know more than that. I know that the reason that was happening is because there was more capital accumulation than you would expect in the steady state. When are you likely to see a situation like that, in which capital accumulation is faster than the rate of growth of capital in the steady state, and is faster than the rate of growth of output? That is, a period in which we have transitional growth-- growth above the steady-state growth-- being pulled by capital accumulation. When does that happen? STUDENT: Two things come to mind.
Now, just looking at this-- don't look at the diagram. What jumps out immediately, if you just look at this part? What jumps out at you immediately? Let me ask you differently. Does it look like balanced growth? Do you think that that's balanced growth? In other words, do you think that China had arrived at its steady state, and that growth is a result of what was happening in that steady state? No. [LAUGHS] But what is it that tells you that that's not the steady state? Balanced growth is when everything is growing at the same rate-- meaning all the endogenous variables, once normalized, are growing at the same rate as the exogenous variables. So what immediately jumps out at me here is that-- uh-oh, 7.2%-- that's less than the rate of growth of capital. So I know that there, capital over output was rising. So that's not a steady state. I know that. I can see it in a different way. I know that in a steady state, in balanced growth, the rate of growth of output and of capital should be the sum of the rate of growth of population plus the rate of technological progress. That's 5.9%. But capital was growing at 9.2%, not 5.9%. So I do know that that's a period in which China was growing beyond its steady state, or balanced growth, rate of growth. And I know more than that. I know that the reason that was happening is because there was more capital accumulation than you would expect in the steady state. When are you likely to see a situation like that, in which capital accumulation is faster than the rate of growth of capital in the steady state and is faster than the rate of growth of output? That is, a period in which we have transitional growth-- growth above the steady state growth-- being pulled by capital accumulation. When does that happen? STUDENT: Two things come to mind. First is economies that are transitioning from command market economies, I believe? And also, the second thing that comes to mind is, potentially, countries recovering from war periods. RICARDO CABALLERO: OK, that's a deep answer. I wanted something simpler, which is-- so the economy is not in a steady state. That's clear. But what do we also know? We know that the capital stock is below its steady state level. And that could be as a result of the things you described. You had a war, so the capital stock was wiped out. Or you had a period in which the saving rate was very low, and now you're going to a high saving rate. [INAUDIBLE] A lot of that is what happened in Southeast Asia, in particular. Actually, it was a fast increase in the rate-- a sharp rise in the saving rate. So what I had in mind is, for one of these reasons, that's a situation where the initial capital stock was below the steady state. And so you have a period of transitional growth, in which the investment rate is more than you need to maintain the stock of capital per effective worker. It's a positive gap between the green line and the red line. And that's what leads to transitional growth, until you reach a steady state. If things do not change-- meaning population growth remains the same as in that period, and the rate of technological progress remains the same as in that period-- we know that the balanced growth rate of China is 5.9%. It's 5.9% because it's 0.017 plus 0.042. That's below the 7.2% average that we actually saw. And actually, that 7.2% is an average that mixes numbers like 15% very early on with close to 6% or so in the last stage. So that would be the steady state. Now, that, I think, is an overestimate, because we know that the rate of population growth is declining very rapidly in China-- it's turning negative. So for the rate of growth of output in China-- unless there's a big change in the process of technological progress-- it's going to be pretty difficult for China to grow a lot more than 5%, I think, going forward. And that creates some problems. But that's what it is. That's what this model tells you. You need to change something. And the things you can change in this model are what? Well, you could induce a higher saving rate. You could get more transitional growth out of that, more capital accumulation. That's costly-- that means less consumption and so on. Or it could be some technological breakthrough, but that probably would affect the whole world, in any event. But those are the kinds of things you'd expect. Good. So the models we have developed here are quite good at explaining catching-up processes, catching-up growth, and at understanding whether economies are converging over time. What I want to do next is-- I want to end by trying to explain-- remember, one of the plots I showed you very early on is that there's great dispersion in income per person, per capita, across the world. And also, some countries are not catching up, especially in Africa and so on. People have put lots of effort into trying to understand why we have these differences. So I'm going to expand the model we have a little bit, to show you a few things people have explored. And then I'm going to conclude that those things people have explored sort of can't explain it either. And then I'm going to go back, sort of, to growth accounting--
the sort of thing I did for China there-- and try to explain what seems to be behind these big disparities we have in the world. So the first thing I'm going to do is just-- it's useful as an exercise, even. I'm going to modify the Solow model a little bit. So this is like the production function I showed you before, but rather than having N here, I'm going to have H. And H is just N-- our old N, population, labor force over there-- but it's scaled up by this human capital factor. So what this says is that human capital is really the population times something that controls for the level of schooling of that population. So a big candidate for differences across the world is that certain populations are far more educated than others. So this does exactly that. Psi is an increasing function of the number of years of schooling. And there is a big micro literature trying to estimate what the value of that psi is-- what is the value of an extra year of schooling for human capital, and so on. And the estimate depends on what kind of schooling we are talking about-- primary, secondary, tertiary, or whatever. But on average, that's a number around 0.1. So one extra year of schooling raises human capital by about 10%. If the whole population increases its average schooling by one year, that adds about 10% to human capital. So it's as if you had increased population by 10%. So it makes a difference. Now, it's pretty hard for a country to raise average schooling by one year. It takes a lot of time. But when you look at the cross-section, there are huge differences across the world in the number of years of schooling. And that accounts for a big part of the difference in income per capita. Anyways-- so let me do a balanced growth exercise with this expanded model and see how far we can get. So the first thing I'm going to do is normalize everything by effective work-- sorry, by workers. So all the variables I'm going to show you are going to be divided by N. Now, notice that I'm dividing by N, not by A times N or A times H. So there is a difference with the previous analysis. And I can always divide all my variables by whatever I want. In the model we had, I divided by effective workers because I wanted a diagram where the curves were not moving around. But I can divide by whatever I want, and I will divide by different things depending on the analysis I want to conduct. Now, I know that once I divide by population only-- not by A times population and education-- I cannot draw my previous diagram, the diagram with the saving, with output per worker, and so on, because those curves are going to be moving, so it's not very friendly. But I can divide by whatever I want, and here I want to divide by just population. So little h will be simply big H divided by N. Remember that big H was just e to the psi-times-u, times N-- that is, H = e^(psi*u) * N, where u is years of schooling. So that's that. Output per worker will be just capital K divided by N, to the 1 minus alpha, times A times big H over N-- which is little h-- to the alpha. So all of this, now, is measured as output per worker, per person. Now, remember, what I can do here is-- I know that the rate of growth of output per person is going to be equal to 1 minus alpha times the rate of growth of capital per person, plus alpha times the rate of growth of A, plus alpha times the rate of growth of h, in which h is this years-of-schooling transformation.
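A small sketch of both pieces-- the schooling factor and the growth decomposition just stated. The psi of 0.1 is the lecture's ballpark estimate; the labor share and the growth rates below are illustrative assumptions.

```python
import math

# Human capital: H = e^(psi * u) * N, with psi ~ 0.1 per year of schooling.
psi = 0.10

def human_capital(N, u):
    """Population N scaled by the schooling factor e^(psi*u)."""
    return math.exp(psi * u) * N

# One extra year of average schooling raises H by about 10%:
print(human_capital(100, 9) / human_capital(100, 8) - 1)   # ~0.105

# Growth of output per worker: g_y = (1-alpha)*g_k + alpha*(g_A + g_h).
def g_y(g_k, g_A, g_h, alpha=2/3):   # alpha is an assumed labor share
    return (1 - alpha) * g_k + alpha * (g_A + g_h)

print(g_y(g_k=0.03, g_A=0.02, g_h=0.005))   # illustrative numbers
```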
Now, if you think about the steady state, it doesn't make any sense that-- g_h, at some point, will become 0. I mean, we can increase education, but at some point, we cannot all be going to 150 years of education. Unlike capital and things like that, you can increase education for a while, but at some point there is a limit. I mean, you're not going to do a post-post-post-post-PhD, blah, blah, blah. So in a steady state, we know that eventually, in the long run, this g_h is equal to 0. So this economy-- expanded to include years of schooling-- has the same sort of balanced growth characteristics as the economy I showed you before. So in that economy, capital per person will grow at the rate g_A, and output per person will grow at the rate g_A. Output will grow at the rate g_A plus g_N. So exactly the same as we had before. Up to now, adding human capital doesn't change our conclusions about balanced growth. It will change some conclusions that are important-- that's the reason I'm introducing this variable-- but it doesn't change this conclusion. So this model, which is a little expansion of the previous model we had, has the same balanced growth characteristics as the model I showed you before. So let me do a little bit of algebra with it. From the capital accumulation equation-- remember, the capital accumulation, now written per person, will be k(t+1) minus k(t)-- this is little k-- equal to s times y(t), minus delta plus n, times k(t). Wait, why don't I have a delta plus n plus g_A there? Remember that in the previous model I had a delta plus n plus g_A there. Did I make a mistake? Actually, this is a useful exercise for you. You remember that I had a delta plus n plus g_A there, no? What's wrong? I just told you that this economy, at least in balanced growth, is exactly the same. So did I make a mistake? No. The reason I had the g_A there is because I was looking at the change in capital per effective worker. I'm looking, now, at the change in capital per worker, not per effective worker. So I don't have that A in the denominator, so I don't need to account for the rate of growth of the denominator due to an increase in technology. I don't need that. And so what I can do is divide both sides by k(t). And this is the rate of growth of k, no? It's the rate of growth of k. But in a steady state, the rate of growth of k, capital per worker, is equal to what? g_A-- the rate of growth of technology. So in a steady state, this is equal to g_A. And so what I wrote there is-- I think it should be exactly that. Is it? Yes, good. That's what I did. And why did I do this? Well, because now I can-- you know that I can measure delta, I can measure n, and I can measure g_A. So this implies that I can solve out, in the steady state, for the level of capital per effective worker. I can solve from this equation, and it's equal to this expression here. Notice a few interesting things here. Capital and output per worker are each divided by N, so the ratio is the same as big K over A times big H. It's increasing in the saving rate-- that you already had in the model we discussed before. If the saving rate is higher, then you're going to end up with a higher capital per effective worker ratio in the steady state. It's decreasing in population growth. All these things you already saw before.
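Here is that algebra as a sketch: set the growth rate of capital per worker to g_A, so that s*y/k = delta + n + g_A, and solve for K over A*H. The input values below are hypothetical, just to show the comparative statics.

```python
# Steady state of k(t+1) - k(t) = s*y(t) - (delta + n)*k(t) when k grows at g_A:
#   s*y/k = delta + n + g_A, and with y = k^(1-alpha) * (A*h)^alpha this gives
#   K/(A*H) = ( s / (delta + n + g_A) )^(1/alpha).

def k_over_AH(s, delta, n, g_A, alpha=2/3):   # alpha assumed, as before
    return (s / (delta + n + g_A)) ** (1 / alpha)

print(k_over_AH(s=0.25, delta=0.05, n=0.01, g_A=0.02))   # baseline ratio
print(k_over_AH(s=0.30, delta=0.05, n=0.01, g_A=0.02))   # higher s -> higher ratio
print(k_over_AH(s=0.25, delta=0.05, n=0.03, g_A=0.02))   # higher n -> lower ratio
```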
So now that I have this expression for K over AH, I can go back to my production function and stick in this value. And I get that output at time t, once you are on the balanced growth path, is equal to A(t) times h, times K over AH to the 1 minus alpha. Just solve for that. And the point is that now I can write this as: y(t) is equal to A(t) times h-- human capital-- times this expression here, s over delta plus n plus g_A, to the power 1 minus alpha over alpha. So the point is, human capital doesn't affect the steady state growth rate, but it does affect the income per capita that you have. And it makes a big difference-- I'll show you. So when people tried to explain differences across the world, they noticed that they were missing a big component, and that big component is education. So let's compare. What I want to do next is-- I can do this for every country in the world and compare it with any other country in the world. But let's compare it with the largest country in the world in terms of output. That's the US. So let's see what we get. So I'm going to take output per capita everywhere and divide it by the same expression for the US, and then define that variable as y-i-hat. So for country i-- say, Singapore-- we take output per capita and we divide it by output per person in the US. Assume-- big if. This is a huge if. But you can do that for the US versus Singapore; it's probably not a crazy assumption to make. Assume that they have the same rate of technological progress and the same technology and so on. Well, the same rate of technological progress-- I'm going to assume that for now. So then this y-i-hat-- you can write this-- it's just this expression for Singapore divided by this expression for the US. It turns out to be this expression here. Solow did something like this and said, OK, assume that technology is the same across the world because, at least for major economies, technology can be imported, and we can have more or less the same technology across the world. So assume that this guy is equal to 1, and that both Singapore and the US have the same rate of growth. And so that means that countries that have a higher saving rate will tend to have-- we know that as [INAUDIBLE] goes, they'll tend to have higher output per capita. So Singapore has a higher saving rate than the US. That will tend to give Singapore a higher income per capita. If a country has higher population growth, then it will tend to have a lower income per capita, and so on and so forth. And so the question is, well, suppose you make this assumption of equal technology. We can measure the saving rate in different parts of the world, the population growth rates in different parts of the world. And we're assuming that there's the same rate of technological progress everywhere. How much of the difference we observe in income per capita across the world can be accounted for by that? So that was the first question: suppose that technology is the same, but we measure all these other things-- saving rate, population growth, a common rate of technological progress, common depreciation across the world, and so on. How much of the income disparity can we explain? And a key thing that I almost forgot: we also measure years of schooling. So if a country has more years of schooling, it will tend to have a higher income per capita, and so on.
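The comparison just described, as a sketch. The functional form follows the steady-state expression above; all the country numbers here are hypothetical placeholders, not data.

```python
import math

# Relative income per worker on balanced growth paths:
#   y_i / y_US = (A_i*h_i)/(A_US*h_US)
#              * [ (s_i/(d+n_i+g_A)) / (s_US/(d+n_US+g_A)) ]^((1-alpha)/alpha)
# The Solow-style exercise sets A_i = A_US, i.e. A_ratio = 1.

def y_hat(s_i, n_i, u_i, s_us, n_us, u_us,
          delta=0.05, g_A=0.02, alpha=2/3, psi=0.10, A_ratio=1.0):
    h_ratio = math.exp(psi * (u_i - u_us))   # schooling factor
    s_term = (s_i / (delta + n_i + g_A)) / (s_us / (delta + n_us + g_A))
    return A_ratio * h_ratio * s_term ** ((1 - alpha) / alpha)

# Hypothetical "Singapore-like" country: higher saving, slightly less schooling:
print(y_hat(s_i=0.40, n_i=0.02, u_i=11, s_us=0.20, n_us=0.01, u_us=13))
```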
And the conclusion of that experiment is the following. If you try to explain the differences in income per capita across the world using variables like years of schooling, differences in saving rates, differences in population growth, and so on, the world would be a lot more egalitarian than it actually is. It would look a lot flatter. So this is how much you can account for. Here, you put a bunch of countries-- Africa and so on, here. And if you just stick, into this equation, the corresponding saving rates, education levels, and so on, the world would look a lot more similar. There wouldn't be the kind of disparities we see between some African countries and, say, Singapore-- since we were talking about Singapore. But the world doesn't look like that. That's the point. So if you take all these things that make a lot of sense-- education, saving rate, and all that-- you're going to explain a small share of the differences in income per capita across the world. Sorry-- in this plot here, L is our N. L, labor, is our N. So this is Y over N-- our little y in this thing. So we can only get so far. We need something else. So what else do we need to add to really explain the amount of disparity we have? Well, the answer is, again, the Solow residual. It turns out that the assumption that the A's are the same across the world-- that the level and the rate of growth are the same around the world-- is just a very bad assumption. The level of technology is very different across different parts of the world. So the next step was to say, OK, let's measure the differences in technology across the world. And it turns out that if you go out there and you measure the level of technology across the world-- in different places: Zimbabwe, Singapore, South Korea, and so on and so forth-- and then you plot the level of technology that countries have against their output per capita, per worker, you explain a big share of it. So here is what you have-- the relative A. Everything is relative to the US here. I don't remember which year this was. Doesn't matter. So if you measure the relative level of technology in country i, relative to the US, and then you measure the relative output per capita in that country, relative to the US-- and forget about everything else, education and so on and so forth-- you get a pretty good relationship between the two. Between one half and 2/3 of the difference in output per worker across different countries in the world can be attributed to the difference in technology levels. So that's a conclusion that we have.
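One way to read that exercise, as a sketch: invert the balanced-growth formula and ask what relative A is needed to rationalize a country's relative income. Again, the numbers below are hypothetical.

```python
# Development accounting: back out the technology ratio implied by relative income,
# inverting y = A * h * (s/(delta+n+g_A))^((1-alpha)/alpha).

def implied_A_ratio(y_ratio, h_ratio, s_term_ratio, alpha=2/3):
    """s_term_ratio is the bracketed saving/population term relative to the US."""
    return y_ratio / (h_ratio * s_term_ratio ** ((1 - alpha) / alpha))

# A country with 1/10 of US income, 0.6x the US schooling factor, and a similar
# saving term needs a technology level about 1/6 of the US level:
print(implied_A_ratio(y_ratio=0.10, h_ratio=0.6, s_term_ratio=1.0))   # ~0.167
```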
Now, let me revisit this issue of convergence. What you do is you take countries that have, more or less, the same A, and that have more or less the same levels of education, and you look at the path of their output per capita. You get that the models we have been discussing here work extremely well. So here, you see that they are, more or less, growing together. There are wars and stuff like that here, great recessions and things like that. But on average, you see the countries that were behind caught up, and so on. Big dispersion here. They were all growing together. And as more time passes, the closer they get to each other, because they are converging. These guys-- the US and the UK-- were already very close to a steady state in the 1870s, while Japan was way behind. But it was sort of in the same class of countries in terms of technology and education levels and so on. So it works pretty well. This is for more countries. You plot per-capita income in 1870-- again, for countries that have similar A's and h's-- against the rate of growth, and you get exactly what you would expect. Countries that were further behind caught up. That's Japan-- very fast rate of growth. And you get this very negative relationship. So this is the convergence model. It works extremely well, conditional on having the same A and h. So that's the contribution of this lecture. I had told you early on-- I think in the first lecture on growth-- that convergence worked very well for certain kinds of countries. But then, when we put all of them together, there were some countries that were clearly off. They were mostly in Africa, but you had countries that had low per capita income and grew very little during the sample I showed you. So here, I'm refining that. I'm telling you a little more precisely what I mean by countries being similar. And what I mean here is that they have similar A and h. So when I look at countries that have similar A and h, the models we have discussed work extremely well. This is over a shorter period of time, so you have more fluctuations and more countries. You can argue that Mexico and Chile probably do not belong with many of these other countries, but you still get this negative relationship. It's quite clear. Now, if you don't control for A and h and you put everything together, then the plot looks like the plot I showed you earlier. So if I control for A and h, the models work very well. If I don't control for A and h, then the models do not look that nice. So this led to a literature which is called the conditional convergence literature. And the idea-- it's almost accounting, but the idea is the following. The question behind this literature is, well, why is it that we have some countries-- say, here-- that have a very low income per capita and grow very slowly? That's the puzzle. How can it happen that we have that? And the story-- but again, it's more accounting than an explanation, in my view-- is what is called conditional convergence. It says, for some reason-- probably it has to be explained in terms of institutions, political instability, or whatever-- some countries just have lower steady states, because they have lower levels of technology, and they're stuck with those lower technologies and so on. So what this literature does is say, OK, let's accept that some countries have a lower level of technology. That's what it is. Maybe at some point they'll flip out of there, but they have been stuck there for a long time, so let's assume that they have a different level of technology. That means: let's compute, for each country, its own steady state, using its own technology and its own level of education. In particular, in the plot I'm going to show you, take the value of A that each country has in 1970. Compute the steady state level of output corresponding to that A-- and over time, it will be growing at g_A, whatever. But take the A of 1970, compute the steady state value corresponding to that, and compare current output with that.
If current output is below that, it means that this country still needs to catch up-- not with respect to some universal steady state, but with respect to its own steady state, with its lower technology and whatever. And then look at whether we see convergence or not. And the answer is that you start recovering this downward-sloping curve. So what does this say? What is the big story telling us? It's saying, look, some countries, for reasons that are beyond this model, simply have much lower steady states. Yep. They have lower technologies. They don't know how to use more complicated technologies-- I don't know. But that's what it is. They have lower technologies, and so they have their own steady states, which can be steady states with very low levels of income per capita. Now, for those countries, it still applies-- and that's what this picture shows-- that if they are not at their steady state, if they still have lower capital per effective worker than they need to have in their steady state, then they will have transitional growth. They will grow faster than the growth of their own steady state. And that's what this picture shows. These are countries that grew very little. Look, we have Japan here together with Botswana-- and maybe they do belong in the same place. These are countries that still had lots of growth to do, relative to their own steady states. And they did grow a lot. How do I know that? Well, because the output I compute, relative to the steady state at the beginning of my sample, was much lower than 1. That means that you're not at your steady state. So this variable here is the output you have at the beginning of the sample, relative to what your steady state is. How do you compute the steady state? Well, you input the level of technology, you input the saving rate, you input the population growth, and all those kinds of things. So if you have a number lower than 1, it means you still have catching-up growth to do-- not with respect to the global, universal steady state, but with respect to your own steady state. And when you do that, you see that some countries that, in the total sample, looked like they were not growing and so on and so forth-- they are growing. They are just growing relative to their own steady state, which has little growth and low levels of technology and so on. And so that's the conclusion of this conditional convergence literature. Now, it turns out that the world has become very unequal, also, along this dimension over time. This shows you the ratio of GDP per worker of the 90th percentile country to the 10th percentile country. So you have not only big differences in technology across the world; you also have very different rates of growth of technology across different countries in the world. So this difference has been increasing quite dramatically. I don't know what happened here. [LAUGHS] These are the same. So the world started with countries that were richer than others, and that distance has been rising over time. Toward the end, it began to change here. I think that has a lot to do with China-- that was a poor economy that grew very fast during that period. And it wasn't very large here, so it didn't matter as much, but then it began to count a lot. I think. I'm not completely sure. That's it.
Anyways-- that's the state of knowledge here. Obviously, there's a big literature around all of this-- a very complex literature, even. But we know that we have good ways of explaining how a country converges to its own steady state, and we have very poor models, certainly within economics-- within growth theory, per se; there is a literature on institutions and stuff like that that explains some of it-- but we have very poor models, in general, to understand what gives rise to this big disparity in technology adoption and so on. So that's all I want to say about growth. The next topic is we're going to open the economy. We're going to go back to the type of models we had very early on, but now in the context of an open economy.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_5_ISLM_Model.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: But before I do that, before I get into the IS-LM model, let me spend a little time telling you what is going on in the US economy, as this will relate to the kinds of things that we'll discuss later in this lecture. So what you see there is the path of net worth-- so wealth, essentially-- of households and nonprofit organizations, households primarily, in the US. And what you can see is that there is a more or less steady trend. Obviously, in recessions, net wealth tends to decline. And certainly, early on in the COVID recession, it declined very dramatically, because the price of equity, the price of houses, everything declined with the initial shock. But what you see after that is a dramatic rise in wealth in the US-- and all around the world, but particularly in the US. And what is behind that? Well, there are two things behind that, but the main one is asset prices. You had massive rallies in the equity market. The price of houses sort of skyrocketed everywhere, and so on. Last year, 2022, was a bad year for asset values. The equity market declined pretty sharply in the US. But still, it's a small decline relative to the big buildup of wealth. Now, why do you think that, in this course, I would be talking about this at this point? Remember, in this part of the course, we're trying to come up with a model of aggregate demand, and then to see how aggregate demand reacts to policy. That's the name of the game in this part of the course. So if I tell you that wealth increased a lot, why do you think I'm telling you that? Aggregate demand. Consumers are richer. They will tend to consume more. That will increase aggregate demand. So the point I'm highlighting here is that there is a big force behind increasing aggregate demand, which is that consumers feel richer. By the way, something similar is happening in corporations, and investment-- real investment-- is also pretty high because of that. The other source of increasing wealth-- which is not as dramatic as the previous one, but is very important, especially in lower-income segments of the population, which tend to have a higher propensity to consume-- is that incomes did not decline a lot during COVID. In some cases, they even increased, because of the large transfers that we saw from the government to individual households, especially lower-income households. And at the same time, there wasn't much to spend on. So that meant that the saving rate also went up a lot in the US during the COVID recession. So people saved a lot more. That's sort of the average household saving. This is by quarter, I think-- no, monthly. But that's what we saw in the past. Look at what happened during the COVID recession: people saved a lot more. Part of the increase in net worth is due to this. It's small relative to the increase in wealth we saw, but this excess saving amounted to about $2.7, $2.8 trillion. So you get a sense of the order of magnitude. And what is happening now is that people are dissaving-- people are saving less than they used to, because now they have things to spend on. And so that's where you see massive demand for travel, massive demand for restaurants, hotels, and stuff like that. That has a lot to do with the fact that people have the money to do it, and they haven't been able to do it for a while. So now, they're doing a lot of that.
Why would I be telling you this now, in this part of the course? For the same reason I told you that net worth went up a lot. People had the savings, and they're really willing to spend them. That puts lots of upward pressure on aggregate demand. These pictures capture more or less the same thing. This one captures very much what I said in the previous slide. You see the personal saving rate-- that's the average over, I don't remember-- oh, a seven-year average. And you see what happened during COVID: a big spike in the saving rate, and now a big decline in the saving rate, where the saving rate is much lower than it normally is. And remember, saving is your income minus your consumption. So if you're saving less, you're consuming more relative to your income. That's the way it works. Obviously, there is lots of heterogeneity. Some people made a lot of money during COVID, some people didn't. Some people saved a lot, some people didn't. And in fact, we do know that in the lower-income segments, a lot of the excess saving is already gone. It accumulated early on, but they also spent it much earlier. But what you're beginning to see in some of those segments is that even though they don't have excess savings, they're borrowing a lot. So now you see credit card borrowing, which had declined a lot, has now increased quite a bit. And again, what do you borrow for? Well, for consumption. So that also funds additional consumption. So for all these reasons, at this moment, the US economy, and many economies around the world, are what we call overheating. There's a lot of demand relative to the productive capacity of the economy. And you may say, well, what's wrong with that? Well, the problem-- something you don't have a model for at this point in the course, but you will, six lectures or so from now-- is that it leads to high inflation. You don't have the model yet, but intuition tells you: with a lot of demand relative to supply, well, prices tend to go up. That happens in micro, and it also happens in macro. We'll learn that later. But in any event, as a result of this, the US economy is overheating, and therefore monetary policy has been very contractionary. The Fed has been raising interest rates to cool down the economy. So how does that happen? Well, those are the kinds of things that we can answer with the IS-LM model. The Fed is very IS-LM-like. I mean, that's the way they think. Their model is richer-- they have more equations, and so on-- but they are thinking in terms of the mechanism that we're about to summarize in the IS-LM model. So if you have an economy that has this problem, and you are in the central bank, you need to use monetary policy. Well, to understand how the thing works, you need the IS-LM model. That's the starting point. Then you can add bells and whistles, but your starting point is the model we're about to see. Anyway, what you see is what I was saying: all that wealth, all that excess saving, all that pent-up demand, if you will, led to an economy that's overheating. And you can see here what happened. I disentangled consumption of goods from consumption of services. Consumption of services is about 2/3 of consumption-- remember, we talked about that-- and goods is about 1/3. The scales are different: this is for goods, that's for services. But what you see here is that there was a trend.
So consumption of services was growing at a steady pace. Then COVID came, and it collapsed. I mean, you couldn't go to a restaurant, you couldn't travel, you couldn't do anything. So consumption of services collapsed. And now, it has been recovering. That recovery picked up pace in 2021, and by now, we're above the trend. So service consumption, which collapsed during COVID, has now fully recovered-- while, at the same time, the capacity to produce in the service sector hasn't recovered equally. But we'll get to that after quiz 1. What happened to goods consumption? Well, it also initially collapsed. But then, well, people were bored at home. They couldn't do anything. They bought lots of gadgets and stuff like that. So goods consumption went up very sharply during COVID, way above the trend, you see? There is the COVID collapse, and then people began to buy all sorts of gadgets. Now, it's slowing down. But still, if you look relative to trend, consumption of goods is way above what it would have been in the absence of this episode. So the sum of the two things tells you that you have an economy with a lot of consumption. And that, at this moment, is what the Fed wants to cool down. It's too much for the economy to take. So the Fed wants to cool it down, and we're going to see how you do that. OK, so now, let's get into this set of lectures. And please, stop me if there's anything that is unclear, because, as I said-- if I hear over the summer that you have forgotten everything you learned in this course, but you remember these two lectures well, I'll be happy. So stop me if you need to. In fact, normally I have taught this lecture in one session. I decided to slow it down as much as I can because, again, I think it's particularly important for this course, and for your stock of knowledge. So one of the main things we're going to be able to do with this model, as I've been saying, is discuss the main macroeconomic policy tools, which are monetary policy-- monetary policy is the main anti-cyclical tool-- but we're also going to be able to understand fiscal policy. And fiscal policy is not exactly equivalent to monetary policy. It works through a mechanism that allows you to do things that are more targeted-- transfer resources to a specific group of people, and so on. And sometimes monetary policy is just not enough. The initial COVID-19 recession was clearly a case of that, and you had to go all in, and we'll see what we did there. It was pretty dramatic as an intervention. The COVID-19 recession led probably to what-- no, not probably. Surely to the largest combined package of policy support in history, in terms of monetary policy and fiscal policy. After these two lectures, you're going to understand, essentially, the joint determination of output and the interest rate, and we're going to be able to study, as I said before, the impact of monetary and fiscal policy. And the framework that we're going to develop to study this is what Hicks and Hansen initially called the IS-LM model. I already hinted that this was coming, but why do you think it has that name? Notice that the name separates IS from LM. Remember what we're trying to do here. We're trying to look at the joint determination of output and the interest rate. That is, we're trying to determine the joint equilibrium of goods markets and financial markets. When we described the equilibrium in the goods market, we said there is an alternative way of describing it.
Remember, I said it can be described as investment equal to saving-- I equal to S. So the IS part of the name comes from the part that has to do with equilibrium in the goods market. IS-- investment equal to saving. And the LM part-- remember, L was that demand for money we had in the financial markets. We looked at equilibrium as demand for money equal to supply of money. Supply of money was M. Demand for money was Y times L of i-- and therefore the LM part. That's a mnemonic for why this model is called the IS-LM. IS stands for the part that has to do with equilibrium in the goods market. LM has to do with equilibrium in financial markets. This model combines those two equilibrium conditions. So we're going to be interested in points at which both markets are in equilibrium. That's the name of the game here. So let's first develop the IS relation. And the IS relation is really going back to lecture three. We're going to use the same model we used in lecture three, with one change. Remember, in lecture three, we worked a lot on consumption. The only function we had was a consumption function, remember? And then all the rest we took as given-- government expenditure was given, investment was given. All that was given. We're going to relax one of those here: we're going to flesh out this investment function a little more, make it closer to a realistic function. Not a constant, obviously-- it's not totally exogenous to equilibrium output and so on. In fact, remember, what is this I? It's investment: the purchase of goods and services by firms for the purpose of building capital-- equipment, structures, and stuff like that. I saw on Piazza-- very quickly, I'm not into that, but I see more or less the flow-- that somebody asked, "Should bonds be included in investment?" What is the answer? Should the purchase of bonds be included in that investment? No. This is the purchase of goods and services by firms-- capital, machines, stuff like that. The other thing is a financial investment. It has nothing to do with the goods market; it's something that has to do with the financial market, not with the goods market equilibrium. So that investment is real investment-- again, the purchase of capital, buildings, for the purpose of production and stuff like that. And this investment is a function of at least two things. The first one is activity. When output is high, sales are high, and companies tend to invest more. They buy more equipment, they buy more buildings, they expand. So investment is an increasing function of output, very much like consumption was an increasing function of output, because income is increasing in output. So we have already seen functions that look like that, and we already know what they do to aggregate demand: they make that curve steeper, remember? And it's the multiplier behind that. Investment gives you something similar there. But there is a second component-- which is also present in consumption, but is not as important there as it is for investment-- which is the interest rate. In particular, when the interest rate goes up, for any given level of income or output, investment goes down. Why do you think that's the case? AUDIENCE: [INAUDIBLE]. RICARDO CABALLERO: Yes.
Most investment is funded with borrowing, and borrowing becomes more expensive, so you don't do it. Even if you don't need to borrow, there's an opportunity cost of those funds. You can use them to build machines to produce, or you can do something else, like a financial investment. So whether you borrow or not, if the interest rate is higher, the opportunity cost of building factories is higher. And that's the reason investment is decreasing in the interest rate. So now we go back to our equilibrium in the goods market, where we said production is whatever aggregate demand wants. So output is going to be equal to aggregate demand. Aggregate demand is the same old aggregate demand we had, except that now we flesh out what is inside that investment function there: a function that is increasing in output, like consumption, but also decreasing in the interest rate. And this is what we call the IS relation. The IS relation therefore has all the combinations of output and the interest rate that are consistent with equilibrium in the goods market. Listen to what I said: the IS relation, or IS curve, has all the combinations of output and interest rate that are consistent with equilibrium in the goods market. What about lecture three? We already had that, but the interest rate played no role, so we found one point. We said there is one level of output which is consistent with equilibrium in the goods market. That's what we found. Now, since we have an interest rate there, we have two variables and one equation, so we can trace a curve-- not only one point. And that's what we call the IS relation. So-- remember, I told you when we looked at the goods market equilibrium, remember this diagram, because you're going to come back to it many times. And there you are. We had something like that. I'm just making it curved rather than linear, simply because I haven't specified the functional form of investment, but it doesn't matter, really-- make it linear. Remember, that's the way we found equilibrium in the goods market. We have an aggregate demand, and it was increasing. The slope was positive because we had a marginal propensity to consume; that's the reason this was not flat, but upward sloping. And we found equilibrium output that way. So this is lecture three. We're back in lecture three here, but with two differences. The first one is that this ZZ curve, relative to the one we had in lecture three, is a little steeper. Why is that? By steeper, I mean that if income goes up, then aggregate demand goes up by more than it used to. AUDIENCE: Is it because investment is also not [INAUDIBLE]? RICARDO CABALLERO: Exactly. Because what made it upward sloping before was the marginal propensity to consume. But now there is also a marginal propensity to invest, which is also positive, and that's the reason it's a little steeper. More interesting for this part of the lecture, though-- for the construction of the IS curve-- is a new parameter that we have there in ZZ. What were the parameters we had before in that curve? We had things like government expenditure, taxes, autonomous consumption. That's the kind of stuff we had as parameters of that ZZ curve. By parameters, I mean that if we change those parameters, we shift that curve.
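To make the IS relation concrete, here is a minimal linear sketch. The lecture keeps C(.) and I(.) as general functions; the linear forms and every number below are illustrative assumptions, not the lecture's calibration.

```python
# IS relation: Y = C(Y - T) + I(Y, i) + G, with assumed linear forms
#   C = c0 + c1*(Y - T)   and   I = b0 + b1*Y - b2*i.

def is_output(i, c0=100, c1=0.6, b0=100, b1=0.1, b2=2000, G=200, T=100):
    """Equilibrium output in the goods market for a given interest rate i."""
    autonomous = c0 - c1 * T + b0 + G - b2 * i
    return autonomous / (1 - c1 - b1)   # multiplier = 1/(1 - c1 - b1)

print(round(is_output(i=0.03)))   # one point on the IS curve
```

Note that the multiplier here is 1/(1 - c1 - b1) rather than lecture three's 1/(1 - c1): the ZZ curve is steeper because investment also responds to output.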
Now, for this particular ZZ, we have an extra parameter, which is very interesting. What is that? It's there, I think. It's the interest rate. That curve holds for some given interest rate. If I move the interest rate, I'm going to move this curve around. That's very important. One of the parameters there-- the star parameter, I would say, at least for this moment in the lecture-- is the interest rate. I can't find an equilibrium in the goods market if you don't tell me what the interest rate is, because it's a curve. Remember, I told you it's a relationship, the curve. So if I tell you what the interest rate is, then you can find the equilibrium in the goods market, because you can fix this curve. That's for one given interest rate. OK. Do you understand that? That's important. Yes? Those of you that are awake, do you understand it or not? Not everyone is on the same page here. OK, good. So what we're going to do next is construct the IS curve. Remember, what I want to do is construct, in the space of interest rate and output, a curve, which we're going to call the IS curve. Here we have a point on that curve, because for one level of the interest rate, I found the equilibrium output. So to construct the curve, what I need to do is start moving the interest rate and see how the equilibrium output changes, and that will trace a curve. And that's going to be my IS curve, or relation. So let's do that. That's the construction of the IS curve. In the previous chart, we found point A. So point A there is that point. There we are. We had some interest rate-- this interest rate. Believe me, that was a parameter of the ZZ curve I showed you before. It gave us equilibrium output A. So that's a point on the IS, because that's a combination of interest rate and output which is consistent with equilibrium in the goods market. That's the definition of the IS. So now, what I'm going to do to construct my IS is, OK, let me move the interest rate. Let me raise the interest rate from i to i prime. That's an increase in the interest rate. And now let me find the new equilibrium in the goods market, for a given interest rate which is higher than the one I used to have. That amounts to shifting the ZZ curve down. Why does increasing the interest rate shift the ZZ curve-- the aggregate demand-- down? AUDIENCE: Because it makes borrowing more expensive. RICARDO CABALLERO: It makes investment decline-- exactly. Borrowing becomes more expensive, therefore investment declines. So that means that for any given level of output, aggregate demand is now lower, because investment is lower. And then you get the multiplier to do its trick, and therefore, you end up with a decline in output which is even larger than the initial decline in investment resulting from the increase in the interest rate. That's what a multiplier does. So say the interest rate increased by 100 basis points. That reduces investment by, say, $10 billion, and equilibrium output ends up falling by $15 billion, because of the multiplier and so on.
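A quick check of that arithmetic, and a trace of the IS curve, repeating the small is_output function from the earlier sketch so this runs standalone. The $10 billion and $15 billion are the lecture's illustrative numbers; a multiplier of 1.5 corresponds to 1 - c1 - b1 = 2/3.

```python
# The lecture's example: rate up 100bp, investment down $10B, output down $15B.
multiplier = 1.5
print(multiplier * 10.0)   # 15.0: the total fall in equilibrium output, in $B

# Tracing the IS curve: equilibrium Y for a range of interest rates.
def is_output(i, c0=100, c1=0.6, b0=100, b1=0.1, b2=2000, G=200, T=100):
    return (c0 - c1 * T + b0 + G - b2 * i) / (1 - c1 - b1)

for i in (0.01, 0.02, 0.03, 0.04):
    print(i, round(is_output(i)))   # Y falls as i rises: a downward-sloping IS
```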
But the point is, after I do all my convergence to this new, lower equilibrium level of output, I have a second point on my IS curve, because that's a combination of a new interest rate, i prime, and an output that is consistent with equilibrium in the goods market. How do I know that it's consistent with equilibrium in the goods market? Because I'm there-- I'm crossing the 45-degree line, and that means output equal to aggregate demand. That's equilibrium in the goods market. And of course, you can keep going and trace an entire curve. All you do is change the interest rate. That will shift this curve. Then you do the multiplier and end up with a new equilibrium, and that's another point for your curve. So is it clear how we constructed that curve? Very important. OK, good. It's also very important to understand-- why is it downward sloping? What does it mean that it's downward sloping? It means that the combinations of output and interest rate that are consistent with equilibrium output are negatively related: a combination of high output and low interest rate is consistent with [INAUDIBLE], or high interest rate and low output. That's what I find here. But why is that? What is the logic behind that? Or the mechanism? The way to think about that is exactly the way I did this experiment. It's to say, let me think about what happens if I increase the interest rate and I keep the level of output where it was. So what happens if I increase the interest rate and I keep the level of output at the level it was? My claim is that that's not an equilibrium in the goods market. What is it? So I'm saying, suppose I increase the interest rate but I keep output constant. So output is here, higher interest rate, aggregate demand is there. So what is the problem? My claim is that's not an equilibrium in the goods market. We're going to need a lower level of output to have an equilibrium in the goods market. That's the reason it's downward sloping. But why is that not an equilibrium in the goods market? What is the nature of the disequilibrium there? What do we have-- excess demand, excess supply? Excess supply-- meaning there isn't enough demand to support that supply, so supply has to fall in order to restore equilibrium in that market, the goods market. And since one drags the other one, it has to fall by a lot. That has to do with the slope of this curve. But that's the reason it's negatively sloped. So that's the first thing you have to understand when you construct this curve. I know I'm going slowly, but it's important. Please try to understand-- another way of saying it: when I change the equilibrium output by moving the interest rate around, what I'm doing is moving along the IS curve. So if the only reason why equilibrium output is changing is because I'm moving the interest rate, that's a movement along the IS curve. I'm tracing points of the IS curve. Good. And I want to draw a contrast between these movements along the IS curve and things that shift the IS curve. For example, that. So suppose I increase taxes. I don't increase taxes-- the government increases taxes. My claim is that the IS shifts to the left. That is, for any given level of the interest rate-- pick any interest rate you want, say this one-- you're going to have a lower equilibrium output consistent with that interest rate. So the IS has shifted. It has to be a different IS. And note that I can do that for any given [INAUDIBLE] event. I picked this one, but I could have picked that one. It would have been the same. I'm saying, if you increase taxes, that's going to lead to lower equilibrium output. So that means that for this higher level of taxes, I will have to trace a different IS curve.
I can start moving the interest rate around, but I'm going to have a lower level of output for any given level of the interest rate, because I have higher taxes. So how do I know that an increase in taxes will do this? Which diagram would you go to to try to understand this? Let me ask it differently. How do I know that this shifts to the left? So I give you more open space. How do I know that this increase in taxes will shift this IS curve to the left? How would you go about thinking about that? AUDIENCE: Well, I think with more taxes, people [INAUDIBLE] disposable income, so they're not going to spend as much money, and there'll be less output. RICARDO CABALLERO: There would be less aggregate demand, and less aggregate demand leads to less output, because output is aggregate-demand determined. Exactly. That's what equilibrium in the goods market means. So you can go back to this diagram. I could say, ignore these labels here and say, for any given level of the interest rate-- pick any-- if I increase taxes, I'm going to shift the ZZ curve down. So ignore this [INAUDIBLE]. Suppose that I fix the interest rate but I now increase taxes. I'm going to do exactly the same here. I'm going to move this down. And it's going to be a different IS curve, though, because-- I shouldn't have used this diagram. Let me keep your answer; I should have put it on your diagram. But it's lecture three. In lecture three, we did see that an increase in taxes would lead to lower equilibrium output. In fact, we know exactly by how much. If taxes increase by 100, then you know that equilibrium output declines by c1 times 1 over 1 minus c1, times the change in taxes. Here it would be a little different, because, remember, investment also has a propensity to spend as a function of output. So it would be a little different, but that's the kind of calculation. What else would shift the IS this way? AUDIENCE: Decrease in government spending. RICARDO CABALLERO: A decrease in government expenditure would do that. What else? This is another thing I want you to do-- think of everything, because for sure you're going to face that in the quiz-- anything that would shift the IS curve. What else would shift the IS curve? AUDIENCE: Decreasing exports. RICARDO CABALLERO: Yeah, that's true, but that's not for this part of the course. Remember, we're in a closed economy. So here we assume X equal to IM equal to zero. That comes after quiz one. What else? Things that were captured-- remember, when I began this lecture, I showed you wealth, what had happened, and so on. Wealth appears nowhere in this model-- it's just output. But wealth affects how much consumers consume. So, autonomous consumption. There was lots of stuff hidden in that constant c0. Remember, consumption was c0 plus c1 times Y minus T. Well, c0 captures things like how confident consumers were, how wealthy they felt, and stuff like that. So anything that shifts c0 down-- consumer sentiment declines, wealth declines, something like that-- will also shift the IS to the left. So that's important. Good. So we're done with IS for now.
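Before moving on to the LM, a quick check of that tax-multiplier arithmetic. The lecture-three formula is dY = -c1/(1 - c1) * dT; with the investment propensity b1 added, the denominator shrinks and the IS shift is larger. Parameter values are illustrative.

```python
# Effect on equilibrium output of a tax increase dT, at a fixed interest rate.
c1, b1, dT = 0.6, 0.1, 100

dY_lecture3 = -c1 / (1 - c1) * dT        # -150: consumption channel only
dY_with_inv = -c1 / (1 - c1 - b1) * dT   # -200: investment responds to Y too
print(dY_lecture3, dY_with_inv)          # the IS shifts left by this much
```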
Now, with the IS alone, I cannot find what I want. I want to find combinations of the interest rate and output that are consistent with equilibrium in both the goods and financial markets. This doesn't do it, because it gives you only combinations that are consistent with equilibrium in the goods market. So I now need to look at financial markets, which is the other side: the LM relation. And remember what we had. We had equilibrium in the financial markets, and we had only two assets-- money and bonds. So we could look at the equilibrium in money or the equilibrium in bonds. It's the same, because, given wealth, if one is in equilibrium, the other one has to be in equilibrium as well. So I only need to look at one, and we're looking at money. So money supply is equal to money demand. I'm going to divide both sides by P. This is not going to be very important now, but later it will be. And so we have that this is equilibrium in financial markets: real money supply equals real money demand. So this, you already see, traces combinations of output and the interest rate which are consistent with equilibrium in financial markets. In the past, that's the way the LM would be described. We would fix M and say, well, this will give you an upward-sloping curve. Why? If M over P is constant and Y goes up, then L has to come down. What brings L down? The interest rate i going up, because L prime is negative. So that's the way the LM used to be described. Your life is a lot simpler today, because central banks don't target monetary aggregates. They don't target M. They target the interest rate directly. So they tell you the answer already. The central bank, when it does policy, says, look, I tell you what i will be. Then, if output moves around or whatever, it's a problem for M. We'll provide the M that the market needs in order to have an interest rate equal to the one we want. So it is true that the LM captures all the combinations of output and interest rate that are consistent with equilibrium in the financial markets, but it's very simple, because what the Fed does in the US-- or what other central banks do-- is say, OK, this is the interest rate we want. And now, you can put in any amount of output you want. As long as we remain committed to this interest rate, it will be consistent with equilibrium in the financial markets, because we will make it so. And the way we will make it so is that we'll provide as much M as the market needs, so that that combination of output and interest rate is an equilibrium in the financial market. That's a very long way of saying that the Fed sets i, and then M is whatever is needed for this equation to be in equilibrium. So if output rises and the Fed doesn't want to change the interest rate, that means you need to change M. So suppose that the Fed says, I want the interest rate to be fixed at this level-- call it i0-- and now output turns out to be higher. What will the Fed do in order to ensure that i remains at i0? What if the Fed doesn't do anything? So the Fed says, I want i equal to i0, and the Fed is calculating that output will be about a certain level. And it turns out that output is higher. What happens if the Fed doesn't react, and output ends up being higher than what they thought when they provided the M that they thought the market needed to be in equilibrium at that interest rate? What will happen? The interest rate will go up, because money demand will exceed money supply. The only way to restore equilibrium is for the interest rate to go up. But the Fed doesn't want that. So what the Fed will do, when it feels the interest rate is going up, is provide more money, so that it restores equilibrium in the financial markets at that level of the interest rate, despite the fact that output ended up being higher than they thought. So all of this is a long-winded way of saying that the modern LM is horizontal. A few years ago, that curve would have been upward sloping. But given the way monetary policy is conducted nowadays, your life is a lot simpler. The LM is a horizontal curve. The central bank tells you what the interest rate is going to be, and then it will provide whatever M is needed. So that's the equilibrium interest rate.
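A sketch of that horizontal-LM logic. The money-demand form L(i) = e^(-lambda*i) is my assumption, purely for illustration-- the lecture only requires that L be decreasing in i-- and the numbers are made up.

```python
import math

# Modern LM: the Fed fixes i at a target and supplies whatever M satisfies
#   M/P = Y * L(i),   here with the assumed form L(i) = exp(-lam * i).

def money_needed(Y, i_target, P=1.0, lam=20.0):
    """Money supply that keeps the market at i_target, given output Y."""
    return P * Y * math.exp(-lam * i_target)

i0 = 0.03
print(money_needed(Y=900, i_target=i0))   # the M the Fed plans to provide
print(money_needed(Y=950, i_target=i0))   # output comes in higher: supply more M
```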
So what shifts the modern LM? And by modern, I only mean-- the book doesn't use that terminology, but by modern, I mean that the Fed decides what the interest rate is. AUDIENCE: When they change the interest rate? RICARDO CABALLERO: Exactly. Your life is very simple: the only thing that will shift the modern LM is the Fed changing its mind. A few years back, it would have been more complicated. A change in money demand, a change in money supply-- all those things would shift the LM around. Now, in this setup, it's very simple. It will change only if the Fed changes its mind. Now, obviously, the Fed is not just a moody institution. It will change its mind, and sometimes it's forced to change its mind. They're not happy with the interest rate they're setting nowadays. They have been forced into that. They were very reluctant to go to these very high interest rates. But given what is happening with this very high consumption, and the impact that is having on inflation, they have been forced into moving the interest rate not only very high, but also very fast. And that was very risky. We have been lucky that nothing has really broken. Normally, when central banks raise interest rates so fast, they break something along the way. Somebody is very levered out there-- some bank or something like that-- and you can blow up. In the UK, we had a little scare with some insurance companies, but that was for a different reason. But it's scary to move policy very fast, because this is a very important price for financial markets. Everything in financial markets gets priced off it-- any pricing model for stocks or anything will start from that policy rate, and then everything builds from there. So if this has to move fast, you can have lots of dislocation. So my goal for today is just to give you the instruments, and then we're going to talk about combinations-- things that we did in certain episodes and things of that kind. OK, good. So again, this part of the IS-LM model is very easy, and it's a lot easier now than it was a few years back. So what does the IS-LM model do? It simply puts the two curves together. Now, we have two curves in the space of output and the interest rate, and two unknowns, which are output and the interest rate. So we have only one combination, A, that is consistent with both equilibrium in the goods market and equilibrium in financial markets. That's point A. What happens at points to the right? What happens here, if I show you this point in this space? What's wrong with that point? So, a point along the LM, but to the right. What's wrong there? If it is along the LM, I know that I'm OK with financial markets. Those points are consistent with equilibrium in financial markets.
But it's not my equilibrium, so it has to be inconsistent with the other one. It's not consistent with equilibrium in the goods market. In fact, you know more than that. What's wrong with the goods market? There's an imbalance there, but in which direction? That point here. AUDIENCE: Excess of goods. RICARDO CABALLERO: What do you mean by excess of goods? AUDIENCE: The demand is not meeting-- RICARDO CABALLERO: Demand. Exactly. Insufficient demand. There is too much output for that demand. So that's the reason it's not consistent with equilibrium in the goods market. To the left it's the opposite. To the left we have insufficient output for the demand we have, so it's not consistent with the equilibrium in the goods market. So the only point that is consistent-- well, you can think, what happens with a point here, for example? That point, because it's on the IS curve, is consistent with equilibrium in the goods market, but it's not consistent with equilibrium in financial markets. What do we have there? Suppose I'm at that point. The interest rate is too high, and money demand is low, so there is too much money supply for that money demand. That's what you have. So at the end of the day, this is the only equilibrium point we have. And all the experiments we're going to do next have to do with moving one curve or the other and seeing what happens, to trace new equilibrium points. But try to understand these diagrams very well-- what happens when I move up, horizontally, and so on-- and convince yourself that this is the only combination. It's pretty easy to convince yourself that it's the only combination, but think a little. Try to get away from point A and see what happens. I guess the best way to do that is just to do experiments, meaning move parameters of these curves and see how equilibrium output changes and so on. So let's do the first experiment. Yeah. So let's play with this. So now you have your model, and now we can start asking interesting questions. The first thing you can ask is fiscal policy. How does it work? Well-- sorry. So this is a contractionary fiscal policy. So the same as we did before, remember, we increased taxes, or we could have reduced government expenditure, whatever. That would have shifted the IS. We did that. When we looked at the IS, we did exactly that. We shifted the IS to the left. And what happens here is, well, if you shift the IS to the left there is a new combination of output and interest rate that is consistent with equilibrium in both markets, and that's a lower output. So if the Fed doesn't do anything, that means it keeps the LM there, and there's a contractionary fiscal policy, that will lead to a contraction in output as well. That's the reason we call it contractionary. Not only because government expenditure declined. But if taxes increase, that's contractionary because it reduces aggregate demand, and in equilibrium, that will reduce output. So that's canonical contractionary fiscal policy. You move output to the left. The interest rate doesn't move because that's controlled by the Fed, but output declines. So if somebody asks you what happens if there is a fiscal contraction-- you were asking a bit about the opposite side, that people may have spent-- we had perhaps a fiscal expansion that was very large. But what happens with a fiscal contraction? That will lead to lower equilibrium output. I keep pressing the wrong button. Lower equilibrium output. What happens if you have a very large fiscal expansion?
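To make point A concrete, here is a small numerical sketch of this fixed point-- not from the lecture; the linear demand components and every parameter value below are illustrative choices of mine:

```python
# Minimal IS-LM sketch with a "modern" horizontal LM: the Fed fixes i, and
# equilibrium output solves the goods-market condition Y = Z(Y, i).
# All functional forms and parameter values are illustrative only.

C0, C1 = 100.0, 0.6               # autonomous consumption, propensity to consume
B0, B1, B2 = 200.0, 0.1, 1000.0   # investment: I = B0 + B1*Y - B2*i
T, G = 100.0, 150.0               # taxes, government spending

def demand(Y: float, i: float) -> float:
    """Aggregate demand Z = C(Y - T) + I(Y, i) + G."""
    return C0 + C1 * (Y - T) + (B0 + B1 * Y - B2 * i) + G

def equilibrium_output(i: float) -> float:
    """Solve the linear IS relation Y = Z(Y, i) in closed form."""
    autonomous = C0 - C1 * T + B0 - B2 * i + G
    return autonomous / (1.0 - C1 - B1)   # multiplier = 1/(1 - C1 - B1)

i_bar = 0.03                        # the interest rate the Fed announces
Y_star = equilibrium_output(i_bar)
print(f"Point A: i = {i_bar:.0%}, Y* = {Y_star:.1f}")

# A point on the horizontal LM to the right of A: same i, higher Y.
Y_right = Y_star + 100.0
print(f"At Y = {Y_right:.1f}: demand = {demand(Y_right, i_bar):.1f} < Y "
      f"-> insufficient demand, not a goods-market equilibrium")
```

With these made-up numbers, Y* = 1200 at the 3% target rate, and at Y = 1300 demand is only 1270-- exactly the "too much output for that demand" imbalance described for points to the right of A.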
What happens if you have a very large fiscal expansion? What moves? That's a question you should always ask yourself. When there is any question about IS-LM, you should ask which curve moves. Start from that always. If we ask you any question about IS-LM, the first thing you should ask is which curve will move. So suppose I tell you that due to COVID, the COVID shock, there was a massive income transfer to low-income individuals. That is, we had a very expansionary fiscal policy. The first thing you should ask is, OK, which curve moves-- the LM or the IS? If I do that, it's the IS. It shifts to the right. Does the LM move? No. It has nothing to do with monetary policy. So that's the first thing you need to do. Which curve is moving? If it is fiscal, that's a goods market thing. That means it's going to move the IS, not the LM. What is the mechanism here? What happened? Remember what we have. I told you, always go back to this diagram. If you increase taxes and you keep the interest rate constant-- you start from there, so the interest rate doesn't move-- then that will do what increasing taxes did in lecture three: it will reduce aggregate demand, and then the multiplier will take us to a larger decline than the initial fiscal contraction. And that's a decline in equilibrium output. So that y1 there is exactly this one here, that y prime. I haven't moved the interest rate. I kept it at the same level. I had a fiscal contraction. That's what we described with that diagram. That's my new IS. I have a new IS because for the same interest rate I have a lower equilibrium output. And it happens that the Fed didn't change the interest rate, so that's going to be [INAUDIBLE]. That the whole curve moved to the left, we could tell three slides ago, but now I know more. I also know that since the Fed hasn't reacted, I know exactly what the new equilibrium output is, which is this. Before, we could only tell that the curve had shifted to the left. Now, since the Fed didn't react to that fiscal contraction, I also know the equilibrium output will end up at y prime. OK. Good. So I'm going to stop here, and in the next lecture we'll continue with this.
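For reference, the tax experiment at a fixed interest rate can be written out in one line. This is a sketch using the generic consumption function C = c0 + c1(Y - T) from the earlier lectures, and it assumes investment is held fixed at the unchanged interest rate:

```latex
Y = c_0 + c_1 (Y - T) + \bar{I} + G
\quad\Longrightarrow\quad
\Delta Y = \frac{-c_1}{1 - c_1}\,\Delta T
```

For example, with an illustrative c1 = 0.5, a tax increase of 100 cuts consumption initially by 50, and the multiplier turns that into a total output decline of 100-- the larger decline than the initial contraction that the lecture refers to.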
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_1_Introduction_to_1402_Principles_of_Macroeconomics.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: OK. Let's start. So hello, everyone. Welcome to 1402, Introduction to Macroeconomics. And I won't teach today. So that's good news. I will start on Wednesday. So what I want to do today is essentially tell you what macro is about, what macroeconomics is about, and also the rules of the game. So what a difference a single letter makes. Many of you must have taken 1401. In fact, some of you may be taking it concurrently, a lecture right before mine. And that's microeconomics, 1401. This is macroeconomics. And it doesn't take a lot of imagination to realize that this course is about big things. We don't look at small things. That's what micro is about. Micro looks at a household, at a firm, at an industry. In macro we don't do that. We look at the whole economy. We think about the US. We think about China. We don't think about an individual price. We think about inflation, so the rate of change of all prices. We don't think about whether a particular worker is employed or unemployed. We think about whether the rate of unemployment is very high or low, things of that kind. When we look at two countries, we look at the exchange rate, which is the relative price of two currencies-- not of two individual goods in two different countries, but the whole currency-- and so on. So that's what macro is about. Now, you could think that macro is nothing else than the sum of lots of micros. After all, that's what an economy is made of. A whole population is made of lots of individuals that can be analyzed with the tools of 1401 and the sequence that follows 1401. But that doesn't work. And there are parallels in physics about this and so on. The way you want to study big bodies is different from the way you want to understand the movements of small elements. And that's the case in macro. In macro there's a big line of research that has to do with the microfoundations of macroeconomics. But even in that case, which is very close to micro, most of the action ended up happening in the non-micro part, in the interactions, in the equilibrium aspects of the system. So it's a much more complicated object. And if you were to build it from the micro, it would be an incredibly complicated object. So one of the things we need to do in macroeconomics is take some shortcuts. That's what makes macro a bit of an art. It's not a science, per se. It's some sort of a science. It has the tools of a science. But it's a lot about shortcuts and tricks and so on to capture the essence of a problem that would be very complex if you were to model it in all the gory details. And in this course, we're going to exaggerate in that sense. We're not going to do anything complicated, I promise you that. Occasionally some things will be conceptually complicated. But the math will not be complicated. So we're going to keep things very, very simple. I want to communicate the essence of the big macroeconomic relationships. This is not a PhD course. If you were to take a PhD course in macro, it would be a very mathy type of course. In fact, most of the people that do applied micro in our PhD program complain about macro because they find it too mathy and so on. But that's not going to be the case here. That's not what this course is about. My goal-- so if this is a successful course, it's not that you come out of this being a researcher in macro. Hopefully you'll have a career eventually and do all the next steps you need to do for that.
But what I want you to be able to do is to read something like this-- this is a World Economic Outlook. It's a publication that the IMF puts out every six months in which it tells you how it sees the world and where we're heading and so on. No equations there. Lots of tables and stuff like that. I'd like you to be able to read that kind of document very clearly. I would like you to be able to read something in, say, the Wall Street Journal, and read it even critically, sometimes disagreeing with what is in there-- the Financial Times, The Economist. That's the goal of this course. It's not a lot more than that. It's just that. If you do a summer internship on Wall Street and you work in a macro hedge fund or whatever, this is going to be a good course for that. I mean, this is what traders really know. They don't know a lot more than that. Many traders should know this. They don't. But this is the level of knowledge. If it gets to be very complicated, I'm failing. That's not what I want to do here. The typical lecture-- again, this is not a lecture. The first lecture will be on Wednesday. The typical lecture-- and not in the first part of the course, because you're not going to have the tools, the definitions and so on, to do it. What I want to do is spend 5 to 10 minutes early on. Again, in the first part of the course we can't do that, because you don't have the knowledge to do that. But as you start building tools, I want to be able to talk about current events, something that is happening out there that I find interesting, or something I received that morning, the morning of the lecture, which I find interesting. And if I think you already have the tools to begin to understand it-- I'm going to be repetitive. I'm going to come back two, three, four times to the same topic. Hopefully you'll be more advanced in your knowledge in the later stages and you'll be able to understand it more and more. The typical lecture will have 5 to 10 minutes in which we'll talk about some facts, something that is going on. For example, a picture like this. This I received this morning. I think this came from Goldman Sachs. Yes. And what you have in that picture-- again, don't worry about details today-- is two lines. One of them is a measure of wages, wage growth, compensation to workers. And another one is a measure of inflation. Again, all those definitions will come in the next lecture on inflation. You must have heard about inflation. It's the rate at which prices are rising. And what that picture shows you is that these two series are very highly correlated. So when wage growth is high, inflation tends to be high. And that's a big issue these days. There's a lot of concern about this stuff. So let me try to explain a little bit what the concern is these days. Again, if you don't understand anything now, it doesn't matter. If you don't understand anything I'm saying in the last lecture, then it matters. But now it doesn't matter. I'm just trying to give you a flavor of the kind of things we'll be talking about. That picture there-- again, a variable that we'll define in the next lecture, not now-- shows you the unemployment rate. You don't need any specific definition to feel, at least to get a sense, that, well, if unemployment is high, workers aren't very happy. It's not a good thing to have lots of unemployment. And in that series, the shaded areas are recessions in the US.
What that series shows you is that typically in recessions, unemployment goes up. So that's one of the features. One of the main features of a recession is that unemployment is high. This episode here is called the Great Recession-- as a parallel to the Great Depression. The US had the Great Depression in the '30s. This is the Great Recession, the biggest recession outside of the Great Depression in the US. And it's also known as the Global Financial Crisis because this was a recession all around the world. And what you can see is that unemployment went very high. That's a key feature, a telltale sign of a big recession. And then it took a long time. This was slow in recovering. COVID was a massive shock to the labor market. So not surprisingly, the unemployment rate spiked there. But then it also recovered, a lot faster than it recovered from that. And today we have unemployment rates that are at historically low levels. And that's a big issue. The rate of unemployment in the US is at historically low levels-- way below what is normal. Forget recessions, obviously way below what happens in recessions, but even way below what is normal, what happens during normal times. Closely related to that is wage growth. I have just one measure of wages there, a series where what I'm about to say is particularly sharp, which is wages in the accommodation and food service sectors. So wages have been rising very steadily and very fast recently everywhere, particularly in sectors like this, where we have some problems with what we call labor supply. But I'll get back to that. So those are two facts. We have unemployment at extremely low levels and we have wage growth at a very fast pace. Now, that sounds wonderful, no? I mean, what else do you want-- an economy in which few people are unemployed and wages are growing a lot. I mean, if this were micro, this would be fantastic. OK, look, the guy is employed and he's getting a high wage. This is great. Well, not so fast for macro. Not so fast because I already showed you in the first picture that I showed you, [INAUDIBLE] there is a connection between wage growth and inflation. And that's what we're experiencing. The normal level of inflation for an economy like the US is around 2%. That's normal. That's what central banks target in an economy like the US or the Euro area. Japan has been dreaming of 2% but hasn't been able for decades to get it. Although now they are. But they weren't able, for a couple of decades, to get to 2%. And we will discuss later in the course why 2% is about right for economies like the US and so on. Obviously in recessions, these things can go low. And that's why in the COVID recession inflation went to zero, essentially. But then it began to pick up. And it's now at levels which are unheard of in the US since the '80s. So depending on the particular measure of inflation you use, it's around 6.5% to 8%. That's the level of inflation we have, which is way, way above what is considered a normal or reasonable target for central banks. So that's a problem. We have had some good news recently in that inflation clearly peaked already-- again, the formal definition of inflation comes in the next lecture. But it already peaked. And it's declining. But it's still at very, very high levels. And that's a problem. That's a big macroeconomic problem.
And one of the things we want to understand in this course is, well, what to do about it. How do you deal with that? What do central banks need to do in order to deal with that? Now, I've been talking about the US. But this is not specific to the US. This episode, this recovery from COVID, is incredibly common across different regions of the world. I mean, you see it everywhere, with a few exceptions. And I'm going to talk about one major exception in a minute. But it's widespread. It's a widespread phenomenon: we had high unemployment, then we had-- well, I haven't told you that part yet. But then we had sort of low inflation. Then inflation picked up enormously. And now we're all worried about these very high levels of inflation. In fact, if you look at what happened between the Great Recession and the COVID recession, it was pretty normal to have 70% to 80% of the economies in the world with inflation levels at or below 2%. So that was the norm. If they dropped you into a country, the normal thing would be, well, it's about 2%. That's the level of inflation. Obviously, if I dropped you in Argentina, you're going to find a much bigger number, you know, 10,000%. But the bulk of the countries were around 2% or so. Today you don't find any country with inflation below 2%. Not even Japan, which for years was in deflation, trying to get just above zero-- that's all they wanted. Not even in Japan do you have inflation below 2% today. So this thing I showed you is happening everywhere, and for more or less the same reasons. Not exactly the same factors. It's the same episode, but with differences depending on the structure of the economy or on additional shocks. In Europe, for example, they have very high inflation. The origin of the problem, the bulk of the problem, is the same as in the US. But at the margin they are different. In Europe, the big recent driver of inflation-- unlike the US, where it is aggregate demand, a concept you'll understand later-- is essentially the war in Ukraine. That has increased the price of energy, and the price of energy has led to lots of inflation. So there are different reasons. But all of them are different reasons that you add on top of what is a common story, which is that we overheated coming out of the COVID episode and now we're struggling with that. Now, the main tool-- and we're going to talk a lot about this in this course-- the main tool that central banks have to deal with inflation is the interest rate. So for reasons you'll understand later, although you may have intuition about some of them now, when the central bank lowers interest rates, that helps the economy to expand. And when it increases the interest rate, it does the opposite. Rising interest rates make mortgages more expensive, make everything more expensive, so people tend to consume less. Firms tend to invest less and so on because it's more expensive to invest, to borrow, to do something. And there you see it. I mean, this was the level of the interest rate in the US before COVID. When COVID came, boom. They brought it all the way down. It happens that you cannot bring the interest rate a lot lower than zero. That's the reason it stayed close to zero. We're going to talk about that later on. But then eventually they realized that they were behind the curve. Inflation had picked up a lot and the central banks were behind the curve.
So they began to hike rates in a hurry. And that's what we have been experiencing for the last year or so, a very fast increase in the interest rate. Now, this is, of course, about macroeconomics. But I happen to do a lot of research between macro and finance. So I'm going to put a little bit more of a component of finance into this-- I think I'm going to do most of that in the last third of the course. But monetary policy has lots of implications for finance, for equity values, for the stock market, and stuff like that. So what you see here, this line here, is the S&P 500. It's the main index of equity shares in the US. There are several indices-- NASDAQ, S&P, Dow, and so on. This is the main index, the most comprehensive, the one that takes the largest companies, and so on and so forth. And what you can see here is that when COVID happened, the surprise that we had, really a pandemic, the stock market crashed, declined like 30% or something like that at the time. That's an interesting facet. I mean, that's one characteristic of equity that I like a lot-- other risky assets as well-- they anticipate what happens. What happened there is the stock market, the shareholders, realized that something big-- negative and big-- was happening in front of us. So it was time to sell. And so the equity market collapsed. What happens next is even more interesting for a macroeconomist, which is this big boom here. There's an enormous boom. The economy here was still at levels of activity below what it had before COVID. But the value in the stock market had way exceeded the level we had before the pandemic. And the main driver of that-- I've shown that in some papers-- the main driver of that is not-- I mean, people tell lots of stories of Amazon and so on, Tesla, blah, blah, blah. If you look at the aggregate, the main reason for that rise was monetary policy. You can explain all that increase in the equity value in the US of the index-- not the individual shares, the index-- by the effect of interest rates. So monetary policy plays a big role. If you care about finance, well, it plays a huge role in the value of assets. When monetary policy is very loose, that tends to increase the value of assets. And that's one of the mechanisms central banks use to expand aggregate demand when they want to expand aggregate demand. You are in a recession, you want people to feel richer so they spend more and so on and so forth. What happened here? This decline, you can also explain it fully with the hike in interest rates. Remember, I showed you that the interest rate began to rise very rapidly here. Well, last year the equity market in the US and most major equity markets around the world declined by 20% or more. You can explain all that decline simply by the increase in the interest rate. So that's another thing we need to understand-- why does the interest rate matter so much for something like equity? We'll value assets and see what the effect of the interest rate is, and then we're going to think about, well, why would the central bank worry or not worry about these things, and so on and so forth. But the truth is that financial markets and the central banks interact all the time.
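A back-of-the-envelope present-value calculation gives the flavor of why the policy rate matters so much for equity. This is my illustration, not the lecture's model: take the Gordon growth formula for a stock price, with D next year's dividend, r the discount rate (policy rate plus a risk premium), and g dividend growth, and all numbers chosen purely for intuition:

```latex
P = \frac{D}{r - g}
```

With D = 1 and g = 2%, a discount rate of r = 6% gives P = 25, while r = 8% gives P \approx 16.7-- a roughly one-third drop in value from a two-point rise in rates. That is the kind of sensitivity behind the 20%-plus declines just described.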
I mean, if you are into the Wall Street type of thing, you are going to be watching every day. Every time the minutes of the central banks are released, you're going to be watching, because it has big implications for the value of your equity. Actually, something very interesting of this nature happened last week. On Friday, last Friday, there was a release of payroll numbers. It's an employment index, employment numbers. And people expected the payroll to increase-- the nonfarm payroll, we'll talk about these things later-- by about 190,000 workers. At 8:30-- well, what you're seeing here is the behavior of the same index I showed you before, but the futures, things you can trade before the market actually opens. The market in the US opens at 9:30 AM. But you can trade futures from Asian hours. Anyway, so this is the path. It's all very quiet, tranquil. Everyone is awaiting the release of this news at 8:30 AM. At 8:30 AM, great news for the labor market. The actual change in the payroll was not 190,000. It was over 500,000. So an enormous addition of jobs to the economy. And look what happens to the equity market. Boom. It imploded immediately. So this is wonderful news for the economy-- lots of jobs. The equity market imploded as a result of that. Why do you think that happened? I've already given you a little bit of the ingredients for an answer in the previous slides. The reason I'm showing you this is because in 15 minutes, it summarizes all that I was talking about in the previous 30 minutes. Why do you think that happened? This is wonderful news. Why should the stock market crash like 2% from top to bottom as a result of that? AUDIENCE: There's a lot more labor because that gives a lot more supply of that thing, and thus, it decreases price because of high supply. RICARDO J. CABALLERO: No, but-- OK, that's an interesting explanation. It's not the one I have in mind. The explanation says, look, that means firms hired lots of people. So that means there's going to be lots of supply of whatever goods they're producing. The price of those goods is going to decline. And that's going to be bad for profits. That's the story you had in mind. Maybe there's some of that. But I'm willing to bet that it's not the main thing. So the only clue I give you is that I already talked about these things five minutes ago. AUDIENCE: Employment is very closely related to inflation rates. [INAUDIBLE] up to 0.81. So this could be a result of expectations of continued high inflation rates. RICARDO J. CABALLERO: OK, you're very close. One step more. Yes. That means that-- AUDIENCE: [INAUDIBLE] RICARDO J. CABALLERO: OK, there you are. So what happens? The shareholders wouldn't have done anything if they thought that the Fed would not be able to see this data. But they know that the Fed also sees this data, and they say, whoa, these guys are going to be worried because the economy is going to keep overheating. They're going to have to hike interest rates even more in order to cool down this economy. I already showed you that what happens in the labor market is very connected to what happens with inflation. The central bank knows that. And now we get this big surprise that means they're not being successful at really slowing down one of the main drivers of inflation. And financial markets are very forward looking. They say, whoa, this is coming. Before this, financial markets were betting that the Fed was going to begin to cut interest rates in four months or so.
And if you look at what the forwards did there-- you can extract what the market thinks-- right after this, it got immediately pushed out to the end of the year. So it's precisely the anticipation that the central bank will have to do something. And so I thought it was very interesting from that point of view. Recessions-- well, look. This is all very good news. But everyone knows that the Fed needs to cool off the economy. So despite the fact that we're getting good news now, people expect-- the majority of people expect-- a recession in the US for this year. I'm not going to explain this bar graph here. But these are forecasts. These are professional forecasters. And more than half of them-- so the median of them-- think that there is a 65% probability that there is a recession in the US this year. And we're going to talk a lot about this, and probably we're going to be getting news about this while we're taking the course. So this is going to be a picture that we're going to discuss extensively. And the reason for the recession is nothing else than-- if you ask these forecasters, why do you think we may have a recession? Well, because the Fed is trying to fight inflation. It's going to keep hiking interest rates. And at some point, it may break something. And that's the reason. But all these things you are going to be able to understand very clearly through models. The last thing I want to say before telling you a little bit about the rules of the game is that, as I said before, the story I told you about the US is more or less what is happening all around. I was in Chile a month ago. I'm Chilean. And they have the same story. They started hiking interest rates a little earlier because they had more inflation than the US. But they're going through the same cycle. There's one big economy, the second largest economy in the world, that has not been part of this, which is China. China was very aggressive in its COVID policy, the zero-COVID policy. So they really slowed down their economy. That was a consequence; they didn't want to do that. But as a result of a very strict COVID policy, they essentially shut down big parts of the economy for a long time. That, by the way, had a big impact on the rest of the world, through the network of production, the chains of production and stuff like that. That was inflationary in itself. That part is dissipating. But for their own economy, for the domestic economy, that really slowed down China, an economy that grew typically at 5 and 1/2 to 6%, a lot higher 15 years ago. We're going to try to understand why later on. But last year, I don't know, it was 3% or less. Numbers in China are difficult to figure out. They're not as transparent as other numbers. But in any event, it's very clear that China slowed down a lot. And that policy recently changed. The zero-COVID policy changed. And so there is great expectation that now there is going to be a big boom in China because they are lagging behind. I mean, in the US when COVID began to dissipate, we got a huge boost to growth. And part of the reason we got all this inflation is because we had lots of growth coming out of the recession that happened in COVID. And more or less the same is expected in China. And one of the big reasons behind those big bounce-backs is that people are desperate. They want to spend on something. They want to go to restaurants and cinemas and stuff like that.
And the other one is they have the means to do it, because they couldn't spend on anything for a while. And so they can travel and stuff like that. So people expect-- and this is a very large economy that suddenly wakes up. Now, that's a big thing for China. But it's also a big thing for the world. What happens in China doesn't stay in China. It's a big giant. So it moves. And for some countries it's very, very important. And this picture here shows you the impact on the growth rate of different regions of the world of an increase of 1% in the rate of growth of China. Obviously all the neighbors benefit a lot. But Latin America benefits even more. Why is that? Well, because Latin America produces lots of commodities, and China consumes lots of commodities when it's building and stuff like that. And so that's the reason for the big impact on Latin America. So this is a piece of good news for the world in the sense that activity will go up. But it's good news on average. It may be too much of a good thing, as well. Why? Because many economies are going through what we described before. They're trying to bring down inflation. They don't want more demand. They want less for now-- we're going to understand later on how demand connects to inflation. But you want less. And now you're going to get this impulse from China, which is going to fuel more inflation. It's OK for China because they don't have an inflation problem. But it may be a problem for many of the countries that are trying to undo the inflationary consequences of the previous expansion, the expansion that followed COVID. OK, anyways. But this is the kind of thing we're going to be talking about. I said the course is not going to be mathy, but it's going to be all about models. The next lecture is the most boring lecture of the course, I tell you in advance, because it's definitions. I need to go through the definitions. At least I get bored. But for the rest, there are always little models, but simple models. OK, and the models are going to try to explain the kind of things I discussed today. So that's what this course is about. Ideally, if we're successful, you're going to be able to read something like the World Economic Outlook, which will have lots of pictures like this. And you're going to be able to write a very simple little equation on the side to try to understand what is going on there, and to catch the mistakes as well. OK, the WEO has fewer mistakes than the Wall Street Journal, but you will catch mistakes, you'll see. You'll be proud of those.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_15_Technological_Progress_and_Growth.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: OK, let's start. So today we're going to talk about technological progress and economic growth. And that's a big topic, certainly, at MIT. Perhaps this is one of the main ways we contribute to human well-being. But before I do that, let me do a brief review of the things I did in the second half of the previous lecture. I want to do that brief review for two reasons. First, it's after spring break, so I assume there is some depreciation of knowledge since the last time. And the second is that while the equations I showed you at the end with population growth are correct, I think I said something which is not correct. I kept saying-- I don't know why-- look, if x is small, 1 over 1 plus x is approximately equal to minus x. No, it's approximately equal to 1 minus x, not minus x. So I wanted to correct that typo. So let me remind you what we had. So we started with a production function. One important part of economic growth is capital accumulation; that will be a very important variable here. We hadn't talked about capital in the production function in the previous part of the course. But now we are explicit about it. And we started with a production function that has constant returns to scale in capital and labor. And here, remember, in this part of the course we're not talking about unemployment or anything like that. So whenever I say labor, I also mean population. I mean the labor force, all of them together. You know the distinction between each of these concepts, but they're not that important for growth matters, mostly because all of those aggregates move in tandem over the long run. It's very difficult for the population and the labor force to diverge for a very long period of time. Maybe there are fluctuations and so on, but they tend to move together. But we decided that we wanted to look at things normalized by population. And so output per person is an increasing function of capital per person, but it is increasing at a decreasing rate. There are decreasing returns with respect to the capital-labor ratio. And so output per capita grows as the economy becomes more capital intensive-- that is, you have more capital per worker-- but it grows at a decreasing rate. The second key equation of our model was that in this part of the course, we're going to assume that the government is not running any fiscal deficit or anything like that, and the economy is closed, which is an assumption we have maintained and will keep assuming until three lectures from now. And so in a closed economy with no fiscal deficit, we have that investment is equal to saving. And we made an extra step to assume that saving is proportional to income, so S is proportional to income. So putting these two things together, we got to a very important equation in any growth model, which is the capital accumulation equation. And this equation says, well, the capital stock tomorrow-- tomorrow means the next unit of time, next year, or whatever-- is equal to the current stock of capital minus the depreciation of that stock of capital, delta times Kt, plus investment. But investment is equal to saving, and saving is proportional to output, OK. So that was common across all the things we did in the previous lectures. Is there any question about these equations? No, no, good.
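For reference, the three relationships just reviewed can be collected as follows (standard course notation: K capital, N population, delta the depreciation rate, s the saving rate):

```latex
\frac{Y_t}{N_t} = f\!\left(\frac{K_t}{N_t}\right), \qquad
I_t = S_t = s\,Y_t, \qquad
K_{t+1} = (1-\delta)\,K_t + s\,Y_t
```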
OK, so the next step was to say-- and remember, in all the initial derivations I assumed that n was constant, population was constant. And so the next step was I divided by a constant here. So we did everything in terms of capital and output per person. But actually, since population was constant, the per-person part was just trivial. We just divided by a constant. The last thing I did, though, in the previous lecture, was to say, OK, what if that's not the case? What if population is growing over time as well? How does our analysis change? And so I did this. I said, well, OK, let's start by dividing everything by nt plus 1. So then we get capital per person at t plus 1. The problem, I said, is that when I divide the right-hand side by nt plus 1, I don't get what I want. I want capital at t divided by population at t. I want output at t divided by population at t, not at t plus 1. So what I did is multiply and divide both of these by nt. nt over nt is 1, so I'm multiplying everything by 1. And then I rearrange terms, so I got expressions like this. I got what I wanted here, which is capital per person at the same point in time. But now it's multiplied by nt over nt plus 1, OK. And the same I can do for this expression here. OK, so this is where I'm using the approximation, with x equal to gn. This thing here is just 1 over 1 plus gn. And I'm saying that if gn is a small number, this is approximately equal to 1 minus gn. So that's what we have. And this is the second approximation, going from this line to this line, in which we did the following. We said, OK, this is equal to 1 minus delta minus gn plus delta times gn. But delta times gn is the multiplication of two small numbers, so I said assume that it's close to zero. And the same we have here. We have the saving rate times 1 minus gn. You get the saving rate minus the saving rate times gn. But the saving rate times gn is also a small number, so we also drop it, OK. So those are the more explicit steps of what I did in the previous lecture. And I think the final equation I showed you was this, but it comes from, again, two approximations. The one down here, which I use here. And then the fact that I dropped the second-order terms, OK. That's it. And then I just rearrange things. I move kt over nt to the left-hand side. And so we have that the change in the stock of capital per person is an increasing function of investment per person, which is this, because this is saving per person. And I can replace this by the production function, which is f of k over n. And so what I have here is a difference equation in capital per person. Why is this? So this is investment. So the capital stock per person will be growing as we invest. It shrinks with the passage of time just because of depreciation. Some things break down. That reduces the stock of capital. But the new term that we introduced at the end of the last lecture is that now this ratio also declines with population growth. And so who can explain why we get this term? I'm saying, look, suppose that we take as given the amount of investment. We take as given depreciation. But now I say, well, if gn rises, and all the rest remains constant, then the left-hand side will start declining, or it will grow less rapidly than it was going to grow before I increased gn. Why is that the case? Sometimes it's counterintuitive. That's the reason I thought I rushed over it in the previous lecture.
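Written out, the steps just described are (with N_{t+1} = (1 + g_n) N_t, and dropping second-order terms like delta g_n and s g_n):

```latex
\frac{K_{t+1}}{N_{t+1}}
= (1-\delta)\,\frac{K_t}{N_t}\cdot\frac{N_t}{N_{t+1}}
+ s\,\frac{Y_t}{N_t}\cdot\frac{N_t}{N_{t+1}},
\qquad
\frac{N_t}{N_{t+1}} = \frac{1}{1+g_n} \approx 1-g_n,
```

```latex
\Longrightarrow\quad
\frac{K_{t+1}}{N_{t+1}} - \frac{K_t}{N_t}
\;\approx\; s\,f\!\left(\frac{K_t}{N_t}\right) - (\delta + g_n)\,\frac{K_t}{N_t}
```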
And since it's going to be an important intermediate step into the next one, which is going to introduce technological progress, I want us to understand why that gn appears with a negative sign there. Yep. AUDIENCE: [INAUDIBLE] returns for the same amount of capital that have increased in [INAUDIBLE] RICARDO CABALLERO: That term is going to be captured here. And it's going to play a role. But this one comes from something much more mechanical than that. Hint: observe what I have on the left-hand side. I don't have the change in the stock of capital on the left-hand side. I have the change in the stock of capital per person. So suppose I don't change the stock of capital at all from this period to the next, but population grows. What happens to this expression here? It decreases, it becomes negative, because I haven't changed the capital stock, but the denominator is growing. That's the gn part. And that means this term is negative. And that's what this term is here for: to capture the fact that the denominator of the left-hand-side variable is now also moving. And you say, so what? At the end of the day, I care about the capital. Why do I care about capital per person? Well, I told you my analysis is much easier if I do it on something which has a steady state. And that's the reason I'm looking for this normalization. But once I look at the dynamic equation for the accumulation of capital divided by population, then I need to take into account the fact that my denominator is also moving, OK. So that's the reason that gn is there. And again, the reason I wanted to pause on this is because when we introduce technological progress, we're going to have a similar effect. And so I want you to understand it. It's going to be counterintuitive because it sounds like technological progress is something negative. No, it's not negative. But in this space, it turns out that if population grows very fast, then you need a lot of investment to keep the capital-labor ratio constant, OK. That's the idea. If population is not growing, I don't need a lot of investment to keep the capital-labor ratio constant. But if population is growing very fast, then I need a lot of investment to really keep that ratio constant. That's what this is capturing there. So to repeat, if this guy is very large, then I need a lot of investment here to make this thing equal to zero, so the capital stock per person is not declining. That's the idea, OK. Good. OK, so then I said-- OK, this is where we finished. Then I said, OK, so let's go back to our diagram. Once I have everything in this space, k over n, I can go back to our diagram. Assume that ga is equal to zero. You don't even know what a is for the time being. You will know in five minutes. But assume ga is equal to zero; then that's exactly the model we had before. And remember, this is exactly the same diagram-- it looks the same, at least-- that we had when population was not growing. I'm saying I can use the same diagram when population is growing as well. But there is one important difference. This curve looks exactly the same. This is just output per worker, OK. That's the blue line. The green line looks exactly the same as in the basic model. It's just little s times the blue line. So that's exactly the same. But this line is different. What happens to this red line as gn goes up? So what happens to this line as gn goes up? It becomes steeper, no? So it rotates up, OK. It goes up.
And that can sound counterintuitive sometimes, because you say, look what happens here. Let's spend time on this. Suppose that we are at some steady state, say this one. And now population growth rises. OK, it sounds like Ireland in the 2000s and so on. So population growth rises a lot. What happens in this diagram? So suppose we are at the steady state here. Now, at that steady state, saving, which is equal to investment, is exactly what you need to maintain the stock of capital per person constant, OK. That's what the red line tells us there. So the gap here is the gap between investment and what you need to maintain the stock of capital per person constant. So when the gap is zero, then this left-hand side is equal to zero. OK, that's the red line. That's the green line. When this is equal to zero, that's equal to that. That's exactly that point. OK, but I'm saying, suppose we are at that point, and now population growth rises. So what moves in that diagram? Does the blue line move? I don't see gn in the blue line, so the blue line doesn't move. If the blue line doesn't move, and the saving rate hasn't changed, then the green line doesn't move either. So for this diagram to be interesting, what moves? Something has to move. So the only thing that can move here is the red line. And the red line, we already said, if gn goes up, is going to rotate upwards. That means, OK, so now we have that line there. So what happens at the previous steady-state stock of capital per worker, at this level? What happened? Is that a new steady state? No, but what happens in particular? So I'm saying, suppose that you are here, and now I rotate the red line up. OK, so that means the red line, which represents the investment I need to maintain the stock of capital per person constant, is greater than how much society is saving and therefore investing. So what will happen to capital per worker? AUDIENCE: Decrease. RICARDO CABALLERO: Decrease, exactly, because you need more than you're investing. So the capital stock per person has to decline. And that's what will happen. The new steady state is going to be to the left of that point there. That sounds very weird. How can it be that-- after all, labor contributes to output. How can it be that we end up with lower output per worker when we increase population growth? Is population growth bad, in a sense, for growth itself, for output? Well, the answer is no. It's true that the new steady state will have lower output per person. So in that sense, it's bad. You have lots of population. If you don't change the saving rate or something, then output per person will be lower, but output will be higher than it used to be at any point in time. It just happens that in the transition, the growth of output-- so the growth of output in this model is going to be equal to the growth of population. OK, if you have a steady state where population is growing and output per worker or per person is not growing, that means output is growing at the same rate as population. So that means that if I increase the rate of population growth, the rate of growth of output will increase together with the rate of growth of population. But in the transition, as output per capita goes lower, output will grow less than population. And that's what is happening here. But output is growing. If population starts growing, if you increase migration, you're going to see output grow. But output per person will start declining, until you get to a new steady state.
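A quick way to convince yourself of this transition is to iterate the accumulation equation directly. Here is a small simulation sketch-- the Cobb-Douglas technology f(k) = k^alpha and every parameter value are my own illustrative choices, not numbers from the lecture:

```python
# Solow-style transition after a rise in population growth g_n.
# k is capital per person; dynamics: k_{t+1} = k_t + s*f(k_t) - (delta + g_n)*k_t,
# with the illustrative Cobb-Douglas technology f(k) = k**ALPHA.

ALPHA = 0.3    # capital share (illustrative)
S = 0.2        # saving rate (illustrative)
DELTA = 0.05   # depreciation rate (illustrative)

def steady_state_k(g_n: float) -> float:
    """Solve s*k**ALPHA = (DELTA + g_n)*k for the steady-state k."""
    return (S / (DELTA + g_n)) ** (1.0 / (1.0 - ALPHA))

def simulate(k0: float, g_n: float, periods: int) -> float:
    """Iterate the difference equation and return the final k."""
    k = k0
    for _ in range(periods):
        k = k + S * k**ALPHA - (DELTA + g_n) * k
    return k

k_old = steady_state_k(g_n=0.01)                 # start at the old steady state
k_end = simulate(k_old, g_n=0.03, periods=200)   # population growth jumps up

print(f"old steady-state k:  {k_old:.2f}")
print(f"new steady-state k:  {steady_state_k(g_n=0.03):.2f}")
print(f"k after 200 periods: {k_end:.2f}")
# Capital and output per person fall to the new, lower steady state, even
# though total output keeps growing at roughly the new population growth rate.
```

Under these made-up parameters, capital per person glides down from about 5.6 to about 3.7 after g_n rises-- exactly the move to the left described in the diagram.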
And then you'll get the higher rate of growth of output if you continue with the higher population growth. But output per worker will be at a slightly lower level, OK. Anyways, this may have been fast, the last part. But since I'm going to repeat it now in the context of technological progress, we should be fine, OK. So if you're a little confused now, it's OK. If you're a little confused at the end of the lecture, it's not OK, because I'm counting on you sort of getting it in the second pass, OK, the second try, OK. So the next step is the following. So far, we already assumed population growth, but we assumed the technology, the production function, stays put for any combination of capital and labor. The next step is to think about what happens when the technology itself is getting better over time. And that's what we call technological progress, OK. This is TFP. Let me not get into the specifics. At the end, I'll say a little more. But TFP stands for Total Factor Productivity. And this index here captures the level of TFP in the US over time. And it's clearly growing. So technology is getting better and better over time. What does that mean? Well, I'll say a little more, not a lot more, but it's getting better. And so the question I'm going to address next is how this changes our model. So now we're going to put together our entire economic growth model. We're going to have population growth. We're going to have technology growing as well. Up to now, the only reason output per worker could grow was because you were accumulating lots of capital. You were catching up with your steady state. That's what would make you grow faster, but there was nothing else. Growth in technology, TFP growth, is going to be the only thing that will give you sustainable growth in output per person in the long run. So this is a very important component of growth. Again, it's the only thing that will make you grow in a sustainable manner in per-person terms. The previous model didn't have that. In the previous model, we had a steady state in output per worker. So in the previous model, we didn't have growth in output per worker in the steady state. We could have transitional growth when we were catching up. If you started here, then you were going to have growth, fast growth, but eventually it would peter out, OK, output per worker. So up to now, we don't have a reason to explain why we see that output per worker grows in most economies in the world. And the answer will be this. This is the reason output per worker can really grow in a sustained manner: technology is getting better and better over time. So let's now see what this does to the model we have. Now, in practice, technological progress takes many, many forms. At the most basic level, it means that you can produce larger quantities of output-- and that's really the meaning we're going to have here-- larger quantities of output for the same amount of capital and labor. So you have 10 machines, 10 workers; technological progress means, well, you used to produce 12 units, now you produce 13, 14, 15, and so on and so forth. That's one of the main ways technological progress shows up. We can do more with the same, if you will. A second dimension is better products. So it's not that you produce more cars, but you produce better cars, better computers, and so on, OK. That's another dimension of technological progress.
You can produce new products, things that didn't even exist, but now you have them. That counts more than having one more unit of a good. It counts more because needs that you couldn't even satisfy in the past, you can satisfy now, because you have these kinds of goods that didn't exist before. That's a very important dimension of technological progress; it creates new sorts of inputs of production and technologies. Think of AI and what that will do to technology in general and to consumption very directly. And that's what I mean. Even within a product, you can get more variety. And more variety improves welfare, because you can better align the needs with the product and so on. But we're going to make it very simple. In this course, we're going to model technological progress as if it were workers. So we're going to capture technology with this variable a, which we're going to model as labor-equivalent. That is, if a grows, it's going to count for us as if we had more workers. That's just one way of modeling it. I mean, I can do it in many different ways, and many of these are equivalent. But that's a very nice way of modeling it, so we can use exactly the same diagrams we had and so on, OK. So you can think of technological progress, the way I'm going to model it here, as if this economy were receiving more workers. Or a more accurate description is, with the same workers, it's as if it had more labor input, OK. That's one way of capturing technological progress. So that means that I'm going to refer to this term a n as effective labor. So with the same number of n bodies, I may get more effective labor, because each worker can produce more things. It's a better input of production, factor of production, OK. And I like to model it this way because now I can use exactly the same diagrams we had before, but rather than normalizing by population, I'm going to normalize by effective labor, by a n. So, let me do that. So recall that we had our production function with constant returns. So this holds. I'm going to set this x now as 1 over a n. We used to have 1 over n. I'm going to have 1 over a n. And so now output per effective worker is going to be the same little function f of capital per effective worker. And what is nice about this is that now, rather than plotting y over n here, I'm going to plot y over a n. Rather than plotting k over n here, I'm going to plot k over a n. And the blue line looks exactly like it used to look. It's just that I'm dividing by a n. Remember, the trick in all these models is to find the right normalization, that is, to find the right x, so I can find a steady state in my diagram. I don't want these curves to be moving around. I want to have a steady state, something, a point that we're going to converge to after enough time has passed. And I know that the thing that will do it in a model in which I have effective workers growing is one in which I divide everything by effective workers. OK, so that's what I'm doing here. I'm going to build a diagram that looks like the other one, that has a nice steady state as the previous one had, OK. So I have my blue line. Here I have my blue line. I know I have my green line, no, because the green line was just little s times the blue line. So I have that. The last thing I need-- and I already showed you that, but I'm going to show it again-- is the red line.
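The constant-returns step being used here, spelled out (Y = F(K, AN) is the labor-augmenting form just described; set the scaling factor x = 1/(AN)):

```latex
Y = F(K,\, A N)
\quad\Longrightarrow\quad
\frac{Y}{AN} = F\!\left(\frac{K}{AN},\, 1\right) \equiv f\!\left(\frac{K}{AN}\right)
```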
But for the red line, I need to find the term-- remember, the red line represents the investment we need to maintain the current stock of capital per effective worker constant. That's what I need my red line for. So let's get there. And it always starts from this equation. So this equation is still the same as it used to be. That doesn't change. But what I'm going to do now, rather than dividing by n, is divide by a times n. So it's the same as I did earlier in this lecture, but now I want to divide by a times n. So I get capital per effective worker on the left-hand side. I don't like what I get here, but you know that I can divide and multiply by at nt. So I can write the right-hand side, after I do all my substitutions, as this. So, step one, I divided everything by at plus 1 times nt plus 1. Step two, I multiply and divide each of these terms by at nt. And then I regroup things, so I end up with that. Well, this, using the approximation we have here, is approximately equal to 1 minus gan, and gan is equal to ga plus gn. So I already showed you this for the case in which ga was equal to zero. I'm now doing the same thing, but since I renormalized things by effective labor rather than actual labor, I need to use a n rather than n. And then, by the same approximation I had before, which is that these products are close to zero, I get to the equation I want. And if I write it in first differences, then I get my red line. This is my red line here. OK. Good. So this tells me that when the green line is equal to the red line, then I have a steady state. Capital per effective worker is constant, and this is equal to zero. That's the way I find my steady state. If I ask you a question, find the steady state of this economy, what you'll do is set this equal to zero and find the capital stock that gives you this equal to zero. That's the way you do it, OK. So that's that. And then we get back to-- well, this is the same as we had before. That's what I just said. That's the way you find the steady state, OK. And then we get back to the diagram I started with in this lecture. But now we have a n here, and where in the first part I said assume ga equal to 0, now the main actor is ga positive, OK, and we get this diagram. So now I can ask you the question that I asked you before with population growth, and see how much I can confuse you. Suppose that ga goes up. That sounds like a good thing, no? I mean, suppose that we are at the steady state here-- and this diagram has too much stuff. Let me-- OK, so we're here. That's our initial steady state zero. And this line here is delta plus ga plus gn times k over a n, OK. So first, suppose we are at the steady state. Is output constant there? It's a steady state. So suppose we are at that point here. Here, we know that investment is exactly how much we need to maintain the stock of capital per effective worker constant. That's what the steady state means. Question: is output constant there? It's a steady state. No. This only says that capital per effective worker is constant. That means that if effective labor is growing, then capital is growing at the same rate, and therefore output is growing at the same rate as effective labor, OK. Remember, the whole trick, so the curves would not be moving around, is that I find the right normalization. So everything is growing at the same rate in that steady state.
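In the new normalization, the accumulation equation and the steady-state condition read (same second-order approximations as before, with g_{an} = g_a + g_n):

```latex
\frac{K_{t+1}}{A_{t+1}N_{t+1}} - \frac{K_t}{A_t N_t}
\;\approx\; s\,f\!\left(\frac{K_t}{A_t N_t}\right)
- (\delta + g_a + g_n)\,\frac{K_t}{A_t N_t},
\qquad
s\,f(k^*) = (\delta + g_a + g_n)\,k^*
```

where k* = K/(AN) at the steady state-- the point where the green line equals the red line.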
So let me actually show you that, and then I'm going to go over the experiment I want to do. So this is what is happening in that steady state. So capital per effective worker at the steady state, at that point there, is growing at the rate zero. That's my definition of a steady state, OK. Output per effective worker is also growing at the rate zero. That's that one over there. Sorry, that's my steady-state level of output per effective worker. So these are constant. That's a steady state. Those are constant. This ratio is constant. Each of its components is not. So that's what I'm plotting there. So those are not growing. Capital per worker, what about that? Well, you see there. So, claim: capital per worker is growing at the rate ga. How do I know that? The question I'm asking there is, what is the rate of growth of k over n, given that I already know that the rate of growth of k over a n is equal to zero? Well, the rate of growth of k over n is the rate of growth of k over a n plus the rate of growth of a, no? I mean, if a is growing and this ratio is constant, that means that k over n must be growing. And it has to be growing at exactly the same rate as this a is growing. Otherwise, I wouldn't be able to maintain that ratio constant. And the same logic applies to output per worker, because in that steady state, output per effective worker is constant. But a is growing, so output per worker must be growing at the same rate as a is growing, and that's ga. OK, good. Labor, well, labor is exogenous; population is growing at the rate gn, that's given. What about capital and output? Well, claim: capital and output are growing at the rate ga plus gn. And I can do the same as I did in here. I'm asking you the question, what is gk, the rate of growth of capital? Well, it's going to be equal to the rate of growth of k over n plus the rate of growth of n. This one is equal to ga. So it's ga plus gn. And the same happens for output, OK. So remember, I said earlier on that if an economy has more population growth, it will grow more. There's no doubt about that. Obviously, output per worker will not grow more just because population grows more; in the new steady state, gn doesn't show up there. The only thing that will make output per worker grow is technological progress, so it's ga. And that was my claim earlier. We're going to use this later. No, I'm not going to do this to myself now. I'm going to get back to what I wanted to do now, because I need to tell you a little bit more about the production function to get to growth accounting, which is what I wanted to do. But this is clear. I mean, this is important, OK. Good. So this is the reason ga is such an important variable. What you guys do here at MIT is very important. Afterwards as well, it's very important, OK. That's the only thing that can really drive growth in the long run, in per capita terms. This gn plays a role also. When you look at countries, you look not only at growth in per capita output. You tend to look at total growth as well. One of the big concerns in big parts of Asia now, in Europe as well, as I said earlier in the course, is that gn is turning negative. That's not going to affect output per worker growth, but it does affect output growth in general. And you can see it here. So if gn goes down, that will reduce the rate of growth of output. It doesn't reduce the rate of growth of output per worker, but it does reduce the rate of growth of output, good. So what happens-- remember, in the basic model we did an experiment in which we increased the savings rate.
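Before turning to that experiment, here is the steady-state growth accounting just described, collected in one place (a sketch using the approximation that the growth rate of a ratio is the difference of growth rates):

```latex
g_{K/AN} = 0 \;\Rightarrow\; g_{K/N} = g_{K/AN} + g_a = g_a,
\qquad
g_K = g_{K/N} + g_n = g_a + g_n,
```

and identically for output: g_{Y/N} = g_a and g_Y = g_a + g_n.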
So we can do the same here. What happens if we get an increase in the savings rate? Do we get more growth in the long run? And the answer is, for the same reasons we had before, no. If we increase the savings rate in this, now, full model, all that happens is that this green line moves up. It means that at the initial steady state, now we have more savings, and therefore more investment than we need to maintain the stock of capital per effective worker constant, which means that we're going to get transitional growth. Capital per effective worker will start growing, for a while. And as that happens, output per effective worker will also start growing. But eventually, decreasing returns will kick in here as well. And that transitional growth will stop, and we'll end up at a higher level of output per effective worker and a higher level of capital per effective worker, but the rate of growth, in the long run, will not be affected by the saving rate. We'll get more transitional growth, but we will not get faster long-term growth. A lot of the Asian miracle-- the Southeast Asian miracle in particular, where we saw very fast rates of growth in many economies of Asia-- was of that kind, meaning it was a combination of what we had before: economies that were relatively poor, with low capital per worker early on, in which the saving rate increased enormously. And that combination gave them enormous transitional growth. So rates of growth of 10%, 12%. That was Japan, and then it was Korea, Taiwan, and so on. All those economies had very fast rates of growth as a result of that. China came later, and China was a big thing for the world because it was much bigger. But it was mostly a combination of those two things: having a lower stock of capital early on combined with, for a variety of reasons, an increase in the savings rate. And that combination gave them very fast transitional growth. But they're all getting a little stuck now. And they're very concerned with that. Well, they're fighting against this model. There are lots of concerns about what is happening to China. Is it going to follow the Japan path and so on? Well, they're following this model. That's what is happening, to first order. I'll say a little bit more later on about that. So in this particular case, what I have done is, in log space-- so I can have straight lines when something grows at a constant rate-- this economy with the low saving rate was growing here. This is output, so the slope of this was ga plus gn. Remember, in the steady state, output is growing at ga plus gn. If the saving rate now increases, then output starts growing transitionally faster than ga plus gn. And that's the reason output grows faster here-- it's growing faster than [INAUDIBLE]. Here, it's very fast, OK. This is what we saw in [INAUDIBLE], rates of growth of 12% and stuff like that. We were moving there. But eventually, it sort of peters out. You end up with a higher level of output per worker-- a higher path, an entire path. The rate of growth goes back to ga plus gn, but you get this transitional growth, which is very strong. And once you're here, once you run out of the high saving and the catching-up growth and so on, the only way you're going to really change your rate of growth in a sustained manner is doing what? Once you have used the tool of catching up with the world, of increasing your saving rate, sometimes to levels that are incredibly high, suppose you still want to keep growing very fast.
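Before answering that, here is a minimal numeric sketch of the savings-rate experiment just described. It assumes a Cobb-Douglas technology, f(k) = k^alpha, in per-effective-worker terms; all parameter values are illustrative, not from the lecture.

```python
# Solow model in per-effective-worker terms: a higher saving rate
# raises the steady-state *level* but not the long-run growth rate.
import numpy as np

alpha, delta, ga, gn = 0.3, 0.05, 0.02, 0.01  # illustrative assumptions

def simulate(s, k0=1.0, T=300):
    k = np.empty(T)
    k[0] = k0
    for t in range(T - 1):
        # k_{t+1} - k_t ~ s f(k_t) - (delta + ga + gn) k_t
        k[t + 1] = k[t] + s * k[t]**alpha - (delta + ga + gn) * k[t]
    return k

k_low, k_high = simulate(s=0.2), simulate(s=0.3)
print(k_low[-1], k_high[-1])   # different steady-state levels of k/AN
# Near the steady state, k/AN stops growing in both cases, so output
# per worker grows at ga regardless of s -- only the path is higher.
print(k_low[-1] - k_low[-2], k_high[-1] - k_high[-2])  # both ~ 0
```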
What is the only option you have according to this model? Particularly-- let me bring even more realism to the story-- particularly if gn is dropping, and you still want to keep your growth high. You used the catching-up growth. You used the higher saving rate, which gives you transitional growth, but it doesn't give you a permanently higher rate of growth. And on top of that, for reasons you don't control, population growth is declining, even turning negative in some cases. But suppose you still want to keep the rate of growth very high. What is the only option you have? AUDIENCE: Increase ga. RICARDO J. CABALLERO: Increase ga, exactly-- technological progress. That's the only option you have. So it makes sense. You see that in the case of China. They're obsessed with technology and so on. They understand the Solow model. If you want to maintain growth at a high pace, you're going to need to work on that side a lot. Now, it doesn't have to be you, necessarily. It's the world as a whole, because technology moves around the world. But ga is, at the end, what puts the limit on what we can do. What I was going to do-- that's what I was drawing this diagram for-- is say, well, suppose that in this situation, we are in a steady state, and we do increase ga. What happens if we increase ga? Well, this curve rotates up, no? And at that point, it's clear that if ga grows, you're going to start growing at a faster rate eventually. But transitionally, actually, you're going to grow less than your new long-term rate of growth. Why is that the case? So my claim is, suppose we manage to increase ga. So now we know that this line here is a bit steeper. Or say this: we were on this line, whatever, and now we make it steeper. So we want to start growing faster, eventually, in the new steady state. And my claim now is that in the transition, growth is less than the new steady-state rate of growth. It's higher than the rate of growth of the previous steady state, but it's lower than the new long-run rate. How do I see that? I need another diagram, I think. So let me just put the sy curve here. So we're here at this steady state. If I increase ga, the only thing that will move here-- and remember, the output equation is there, but I don't want to put it-- the only thing I do is I rotate this curve up. OK, so this moves up. Do you see? Yes. So this curve moves up when ga goes up. So at the old steady state, what I have now is a gap between the savings of this economy, its investment, and how much I need in order to maintain capital per effective worker constant, which means that I'm going to start moving in this direction, until I reach the new steady state. OK, during this transition, I'm growing at a lower pace than in the new steady state. In the new steady state, I'll be growing much faster than in this old steady state. How much faster? Well, by exactly the increase in ga. But in the transition, I will grow faster than before but not as fast as in the new steady state. That's the claim I was making. OK, good. You know, I'd rather discuss this with more time. So, questions about what we have done up to now? Is it clear, or is it very unclear? Probably both. So let me keep this for the next lecture, because it might take a little time to explain, OK.
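To state the transition claim compactly (a sketch; g_a' denotes the new, higher rate of technological progress):

```latex
\text{red line: } (\delta + g_a' + g_n)\,k
\quad\Rightarrow\quad
s f(k^*_{\text{old}}) < (\delta + g_a' + g_n)\,k^*_{\text{old}},
```

so k = K/AN falls toward a lower k*, and along the transition

```latex
g_{Y/N} = g_a' + g_{Y/AN}, \qquad g_{Y/AN} < 0,
```

which is below the new steady-state rate g_a'; the claim above is that it still exceeds the old rate g_a.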
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_21_Exchange_Rate_Regimes.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: So today, my plan is to finish the open economy part of the course. And we will talk about exchange rate regimes. But before I do that, I need to finish a few things that we didn't in the previous lecture. And that will help as an introduction for the kind of things I want to talk about today. And let me start just by reviewing that last slide that we discussed, which is the Mundell-Fleming model. And the Mundell-Fleming model, essentially, is our old IS-LM model, in which the IS is a little different, because now we have a net export term, which is a function of new things like foreign output-- foreign income-- and, most importantly, the real exchange rate. And the real exchange rate itself, because of the UIP, the uncovered interest parity condition, is a function of the expected exchange rate and the foreign interest rate. And it also gives yet another reason for why the interest rate affects domestic aggregate demand. There's the traditional investment effect of an increase in the interest rate, but we also get the appreciation effect of an increase in the interest rate, which is contractionary from the point of view of aggregate demand. But otherwise this is like what we had, now with this extra net export term. And in this diagram, we have the same interest rate here. From this diagram, which portrays the uncovered interest parity condition, we can get, for any given international interest rate and expected exchange rate for the next period, the current exchange rate, OK? So that was our model. And we did a few experiments here. The first one was, well, what happens if the expected exchange rate goes up? The first thing is, which curves move? Well, if the expected exchange rate goes up, then I know that, for any given interest rate, the current exchange rate will go up, OK? I know that this curve, in other words, will shift to the right. Why do I know that? Well, because if the current exchange rate doesn't move by the same amount as the expected exchange rate, now I'm going to have an expected capital gain or loss, which will be inconsistent with the parity of interest rates we had before. So we had agreed that we had a certain expected appreciation. So let's make it very simple. Suppose that this interest rate happens to be equal to the international interest rate. Then we know that this exchange rate has to be equal to the expected exchange rate, because you cannot expect an appreciation or depreciation of the currency if the interest rates are the same. But if now the expected exchange rate for next period goes up, and the exchange rate today doesn't move, that would mean that you also expect an appreciation of the currency, which means that investing in domestic bonds would give you a higher return, because it's the same interest rate plus an expected appreciation. So we know that the uncovered interest parity condition will move to the right as a result of the increase in the expected exchange rate. But that also means that, at any given interest rate, you get an appreciated exchange rate relative to the one before the increase in the expected exchange rate, which means that the IS will shift to the left. So if the expected exchange rate goes up, that leads to an appreciation, and that leads to a contraction in aggregate demand. OK, good. The next experiment was, well, what happens if foreign output comes down? Well, if foreign output comes down, then this has nothing to do with the interest parity condition.
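For reference, the uncovered interest parity condition used throughout, in the course's convention that E_t is the price of domestic currency in terms of foreign currency (notation only, nothing new):

```latex
(1 + i_t) = (1 + i_t^*)\,\frac{E_t}{E^e_{t+1}}
\quad\Longleftrightarrow\quad
E_t = \frac{1 + i_t}{1 + i_t^*}\,E^e_{t+1},
```

so, at given interest rates, a higher expected exchange rate E^e_{t+1} raises today's E_t one for one.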
It's not-- it doesn't show up in this expression. But it does shift this, because it reduces our exports for any given level of the interest rate and output. And so the IS shifts to the left. So that's a contraction. That's the way you import a recession from the rest of the world, OK? As I said before, people around Asia and Latin America are very, very worried about that-- actually, the Europeans as well, because Germany [CHUCKLES] exports a lot to China-- very worried about contractions in China and so on, because, through that channel, it's contractionary as well. Now we're in the other part of the cycle, because China is reopening. And that gives lots of hope to Europe and so on. And that's one of the reasons why the euro has appreciated vis-a-vis the dollar recently. And then the last experiment, which I don't remember whether we finished or not-- I think I said it very quickly-- is, well, what happens if the international interest rate goes up? What moves? Well, the first thing that will move is this. This was a parameter, OK? So what do I know? That if I keep the interest rate constant, and the international interest rate went up, what has to happen to the exchange rate-- and the expected exchange rate hasn't changed-- what has to happen to the exchange rate today to be indifferent between the two things, the two bonds? So this is the experiment. Suppose you were at any domestic interest rate. We don't touch that. Now I increase the international interest rate, and I say the expected exchange rate is the same as it used to be. What has to happen to the current exchange rate in order for you to be indifferent between investing in the US bond or the foreign bond? AUDIENCE: It has to go down. RICARDO J. CABALLERO: Exactly. It has to depreciate. Why? AUDIENCE: So that the uncovered interest parity [INAUDIBLE] RICARDO J. CABALLERO: That's correct, but why is it that you need the exchange rate to fall today in order to restore-- to have the interest parity condition holding? OK, so remember what happened is that you had the same interest rate. And now the international interest rate went up. If nothing else moves, now you prefer-- you were indifferent before, now you would prefer to invest in the international bond. If I don't change the US rate, then I have to compensate you by some other means. The only way I can compensate you in this model-- the only thing that's endogenous-- is by an expected appreciation of the exchange rate, because that would give you a capital gain from holding the US bond, a currency capital gain. Now, since the expected exchange rate is given, the only way I can give you that is by depreciating the currency today. Then you can expect an appreciation from today to tomorrow. OK, that's the mechanism. OK, so that means that this curve here will shift to the right. For any given interest rate, you need a-- sorry, to the left. For any given-- I made that mistake in the previous lecture as well. So for any given interest rate, this curve will have to move to the left, OK? So if the interest rate doesn't change, and the international interest rate is up, you need an exchange rate today that is lower than it used to be, so you can expect an appreciation from now to the next period, OK? So that's why it moves to the left. Now, what happens? What else moves in that case? And remember, when I'm asking the question, what else moves-- I mean, when does a curve move-- what you need to do is just take something as given and then see whether we get the same equilibrium output or not.
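Before doing that, a tiny numeric check of the depreciation mechanism just described, rearranging the UIP condition above (the numbers are illustrative assumptions, not data):

```python
# Under UIP, E_t = (1 + i) / (1 + i_star) * E_expected.
def current_exchange_rate(i, i_star, e_expected):
    return (1 + i) / (1 + i_star) * e_expected

print(current_exchange_rate(i=0.03, i_star=0.03, e_expected=1.00))  # 1.000
print(current_exchange_rate(i=0.03, i_star=0.05, e_expected=1.00))  # ~0.981
# Raising i_star with i and E_expected fixed depreciates the currency
# today, so an appreciation back toward E_expected can be expected,
# restoring the parity of expected returns.
```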
So I'll take the interest rate as given-- that's the easiest-- and then ask the question, well, will I get the same equilibrium output or not? If I get the same equilibrium output, that means the IS doesn't move. But if I get a different equilibrium output, it means the IS has moved, because, for the same interest rate, I'm getting a different equilibrium output. So what happens in this case? Does the IS move or not when i star goes up? I'm going to simplify the question-- yes, it does. [CHUCKLES] Which way? Will I get more or less output when the international interest rate goes up? And look at the kinds of things I'm taking as given. I'm also taking as given international output. So I'm not moving Y star. I'm not moving the expected exchange rate. And I'm asking the question, if the domestic central bank-- the Fed, in the case of the US-- does not change the interest rate, what happens to equilibrium output? Does it go down or up? If the international interest rate goes up, and the domestic interest rate does not, what has to happen to the exchange rate? You answered it before. AUDIENCE: Goes down. RICARDO J. CABALLERO: It has to go down. That means it has to depreciate. What happens to net exports when the exchange rate depreciates? What does it mean that the exchange rate depreciates, especially if, as in this case, prices are completely fixed? So now, if the nominal exchange rate depreciates, it means that the real exchange rate depreciates. What does that mean? What got cheaper? OK, you need to know this for the quiz. [CHUCKLES] Domestic goods are cheaper. And equivalently, foreign goods got more expensive. That means, for any given level of the domestic interest rate, there will now be fewer imports and more exports. That means net exports will be higher, which means the IS will shift to the right, OK? Good. So these things you need to control. I understand that it's a little confusing to think about exchange rates and so on. So anything that happens here with the exchange rate is just a relative price. The more expensive your goods are, the harder it will be to sell them, [CHUCKLES] and the more tempted you will be to buy foreign goods. That's what it does. So an appreciation is contractionary here, and a depreciation is expansionary. Here, the story is a little different. It's all about equalizing expected returns. So you need to have a movement in the exchange rate today so that you are always indifferent between investing on one side or the other. It's about the return, the expected change in the exchange rate. So, OK, good. OK, I got this a little unclear, but [CHUCKLES] we'll keep trying. Is there anything particularly unclear? Or is it all a blur? [CHUCKLES] OK, got it. Well, let me-- so all of this I described here is allowing the exchange rate to move. We're saying, look, if we move something or the foreigners move something, then we ask the question, well, what does the exchange rate have to do today? And typically, when that's done, we call those regimes floating exchange rate systems, meaning the exchange rate can float, can move around. As interest rates in different parts of the world change, the exchange rate moves around. We typically call that a flexible exchange rate. I think the distinction is a lot harder to make in practice, for reasons I'll explain later.
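For reference before we get there, the open-economy IS relation behind the net-export channel just discussed (a sketch in the course's notation, with epsilon the real exchange rate):

```latex
Y = C(Y - T) + I(Y, i) + G + NX(Y, Y^*, \varepsilon),
\qquad \frac{\partial NX}{\partial \varepsilon} < 0,
```

so a depreciation (a lower epsilon) raises net exports and shifts the IS to the right, exactly as in the experiment above.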
But that's what is meant by a floating exchange rate system, one in which, really, each country is doing its own policies and so on, and the exchange rate does what it needs to do so that financial markets clear. Many countries, however, do something which is the polar opposite of that, which is called a fixed exchange rate regime. So some countries really peg their currencies to a major currency. An extreme case is the eurozone, where they gave up their individual currencies and they have a common currency. So Germany and Italy have an ultra-pegged exchange rate, because they have the same currency, OK? Now, most of the time, fixed exchange rates are a little weaker than that. For example, the Hong Kong dollar has been pegged to the US dollar for a long time. And I'll show you a few others. Many countries go through some phase where they try to peg the currency, and it typically fails at some point. But they have periods in which the currency is pegged. So suppose that you have a pegged exchange rate. Let me show you some features of it. Suppose you are in a peg, a fixed exchange rate regime pegged to another currency. And suppose this is credible. That's a big issue with fixed exchange rates, but suppose it's credible. There are some countries that have credible fixed exchange rates. Well, if you have a fixed exchange rate with respect to some other currency, and it's credible, then you know the expected exchange rate is equal to the current exchange rate and equal to a constant. That's what it means to have a fixed exchange rate, OK? It's constant. But if this is constant, it means you can never expect an appreciation or depreciation, because it's constant. It's fixed. And if you can't expect an appreciation or depreciation, then the uncovered interest parity condition tells you that-- what does it tell you? That your interest rate has to be the same as the foreign interest rate. Why? What would happen in a credible peg, a credible fixed exchange rate, if the domestic interest rate is higher than the interest rate of the country you are pegging to? What would happen? Suppose we have a fixed exchange rate, and we have the same interest rate, and now you unilaterally decide to raise interest rates. What do you think would happen with capital flows? What would you do to your portfolio? If the exchange rate is pegged and it's credible, it is as if the two countries were issuing the same currency-- it's a different currency, a different name, but it has a constant in front of it, OK? So it's as if the bonds were issued in the same currency. Two bonds that are identical and issued in the same currency cannot be paying different interest rates, because you would all invest all your money in the bond that is paying the higher interest rate. And that's what happens here. So, mechanically, what would happen is, if for some crazy reason a country with a credible fixed exchange rate decides to have an interest rate higher than the interest rate in the currency it's pegged to, then you will see massive capital flows into that country. So there will be an enormous pressure for an appreciation of that currency. But what the central bank would have to do is start supplying massive amounts of currency to those that want to buy it, because there would be an infinite demand for it, OK?
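In symbols, the point just made (a sketch, under the assumption that the peg at some level E-bar is fully credible):

```latex
E_t = E^e_{t+1} = \bar{E}
\;\Rightarrow\;
(1 + i_t) = (1 + i_t^*)\,\frac{\bar{E}}{\bar{E}} = 1 + i_t^*
\;\Rightarrow\;
i_t = i_t^*.
```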
So in practice, what that means-- sometimes you can do that for a little while, but not in a sustained manner. So what happens in practice is that, if you really have a pegged exchange rate, and you have free capital mobility, which means people can move in and out of your bonds-- China doesn't, for example, so it can allow itself to both control the currency a little bit and be semi-pegged; it can still move its domestic interest rate because they have capital controls-- but if you don't have capital controls, and people can move money in and out and do portfolio investment, as is the case with most of the advanced economies, then, effectively, you give up domestic monetary policy, because whatever the country you're pegging to does, you have to follow. So that's what it means. You give up your domestic monetary policy if you choose to peg to another country. A little later, I'm going to tell you why countries may choose to do that. But that's what you do. And the uncovered interest parity tells you that's what you do. You're not going to be able to deviate very significantly from the interest rate that the other country is setting if you want to maintain your fixed exchange rate. Now, in practice, there are many hybrid regimes. There are very, very few pure float regimes-- a few, maybe five or something like that. So there are all sorts of degrees of exchange rate regimes which are hybrids between fixed exchange rates and fully flexible exchange rates. Let me show you a few more or less randomly selected examples from Bloomberg. So there, in white, you have the US dollar-euro. That's a float. That's the cleanest float you can imagine. Then another one which is a very clean float is the dollar-yen, the Japanese yen. Now, that's a currency that floats freely. But if there is a major dislocation, central banks do intervene to [INAUDIBLE]. It means, in normal circumstances, they float. And the same is true with the euro. But if there's a big dislocation-- some major bank collapses or something like that-- then there are major dislocations in financial markets. They become very segmented, arbitrage is not that easy, and so on. Then central banks intervene. But for the normal business cycle and so on, they do not intervene in the currency market. They intervene in different ways. And that's the reason-- I'll get there. And the other one is the pound, the US dollar versus the British pound, which is also a pure float. These are also pure floats: this is the US dollar against the Aussie dollar, against the Canadian dollar, and against the Swedish krona. Those are pretty much floating regimes. They are a little different from the previous ones I showed you, because these are currencies that are much more prone to sell off during risk-off environments. And that's the reason you see these spikes here, OK? This was COVID-- the biggest spike. You didn't see it in the dollar-euro and so on and so forth. So these are currencies that are free floaters, but they're very exposed to the risk environment in the market. But still, they're free floaters. The Swedes are a little bit more independent-minded. They do control the currency a bit more. But still, I consider those free floaters. These currencies are a little different. This is the Brazilian real. The ZAR is the South African rand. And this is the Colombian peso. And you see several things here.
They do move, so they have a big component of flexible exchange rates. They do intervene a lot more, though, because they are exposed to much more risk-off type environments and so on, and they need to intervene fairly frequently to control movements in the exchange rate. But you also see a trend in these things. Their currencies are becoming chronically weaker relative to the dollar. And the reason for that is that these are countries that have higher inflation. So if you want to maintain the real exchange rate constant, and you have higher inflation than the other country, then your nominal exchange rate has to depreciate, because your prices are rising at a faster pace than the other country's. Well, if the exchange rate was not depreciating on average, it would mean that you were becoming more and more expensive, OK? So that's the reason countries that have higher inflation tend to have these trends as well, OK? But it's still fairly floating. Here, these are all currencies that are, to different degrees, targeted, in the sense that they're contained-- they're not free to float at will. The scale here will mislead you. If I had put them on the same scale as the dollar-euro or the dollar-yen, then these things would have looked very small, OK? So I should have put a real floater there so you would have seen that these guys are moving a lot less. And these are different kinds of countries. This is the Hong Kong dollar, which, for all practical purposes, is pegged. These little wiggles are just technical things that happen overnight and stuff like that. But they're pegged to the dollar, OK? So the Hong Kong dollar is pegged to the dollar. And that means they really don't have independent monetary policy vis-a-vis the US. This is the CNH, the Chinese renminbi. And again, I should have put it next to a real floater-- it's a lot more controlled. So it moves around, but in a much tighter range. And the exchange rate is part of their policy thinking-- when they set their policy programs and so on, the exchange rate is something they are thinking about. This one here, the blue one, is an interesting one. That's the Singaporean dollar. And the Singaporean dollar, they have a very interesting regime. They have a target zone, meaning they let the exchange rate move within a range only. But it's not pegged against a single currency. It's pegged against a basket of currencies. And the recipe is secret. So everyone is always guessing what they're doing and so on. They do change the weights a little bit to keep the markets confused. But the currency is very stable. We all understand it's a weighted average of the euro, the renminbi, and the dollar. But they don't disclose exactly the weights. But you can filter out what they're doing. And they keep things in a range, and they occasionally change the slope of that range. But it's very regulated within that constraint. And in fact, they state their monetary policy in terms of the FX. They say, that's our policy: the interest rate is whatever it needs to be so that the exchange rate remains in that range. That's the way they state their monetary policy. They say, we let the markets determine the interest rate; we determine the exchange rate in that range, and it's a narrow range. Again, I should have put a real floater there so you would have seen that. OK, so the point is that everything goes.
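To make the inflation point above precise (a sketch, using the course's definition of the real exchange rate):

```latex
\varepsilon_t = \frac{E_t P_t}{P_t^*}
\;\Rightarrow\;
g_E \approx g_\varepsilon - \pi + \pi^*,
```

so holding the real exchange rate roughly constant (g_epsilon ≈ 0), a country with inflation pi above the foreign pi* must see trend nominal depreciation, g_E ≈ pi* - pi < 0-- the downward drift in the charts just shown.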
There are all sorts of arrangements happening around the world. These are a different kind of currency, no? This is the Turkish lira and the Argentinian peso. I think, through this sample, it has been called the peso. Since they have this very high inflation, they keep changing the name of the currency and so on, because you have to remove zeros from things. But I think, through all this period, it's still the Argentinian peso. I mean, look at the scale, though. [LAUGHS] You cannot see it, but these two countries are fighting against the exchange rate all the time. In fact, Argentina today has, like, five different exchange rates. There is the official exchange rate. There is the blue exchange rate. There is the purple exchange rate. There are all sorts of things. You should never pay with a credit card if you go to Argentina as a tourist, because you don't want to pay the official exchange rate. You can get three times that in the blue market. They don't call it the black market. Since everyone does it, they think blue is fine. So there are all sorts of exchange rates. But still, this is the official one. And even the official one, you see, has completely exploded. The Turkish lira looks pretty good here just because I put it next to the Argentinian peso. Otherwise, it also would look pretty bad. But most of these countries are pegging the exchange rate all the time, because they use that to stabilize inflation. The whole thing breaks up. And then, boom, they go through big spikes. You see this, for example. There you see that they're trying to stabilize the currency. They're not floating in that range. They were a little successful, and then [WHISTLES] And that happens all the time. And now, obviously-- I mean, look at the size of this. This is an appreciation of the dollar, so it's a depreciation of the Argentinian peso. What do you think is happening here? It looks very smooth, by the way. It's not that it's moving around. It's just [WHISTLES] What do you think is happening? Very high inflation, in the thousands. And that's what is happening here. But again, this is a hybrid system. They try to stabilize the exchange rate frequently. The thing goes, and then they stabilize it again, and so on and so forth. But you can't fight just having much higher inflation than the rest of the world. If you have higher inflation, there's no way around it: your currency is going to depreciate. They try to fight it, but they can't. Anyways, let me go back to this model and think a little bit more about the decision to have one kind of exchange rate regime or the other, and therefore everything that goes in between. So remember, just to remind you, that's the model we have. So let me think about policy first, and then let's think about how you deal with policy in the different exchange rate regimes. And then we'll see why countries would want one thing or the other. So suppose a country is in a recession. We're in this model. And suppose that we are in the flexible exchange rate regime. So what should fiscal policy do? Suppose you are in a recession. What should fiscal policy do? There's nothing unique to the open economy here. In the closed economy, you would have given me the same answer. If you are in a recession, what will you do with fiscal policy? Have expansionary fiscal policy, increase [INAUDIBLE]. So that means you move the IS to the right. Nothing changes in the open economy. You keep doing that.
The only thing is that you get a slightly smaller multiplier, because part of that will go to imports. But still, it moves you in the right direction, OK? And yes, countries rely on other countries also doing their own expansionary fiscal policy. But suppose we're talking about a recession that is unique to this country. Then you're going to do an expansionary fiscal policy. What would the central bank do? In the closed economy, what-- AUDIENCE: Drop interest rates. RICARDO J. CABALLERO: Drop the interest rate. Well, in the open economy, it does the same. You just drop the interest rate. It turns out that will depreciate your currency, which will help you [CHUCKLES] as well. So it's very expansionary because of that: because your currency depreciates, net exports go up as a result. So you get the investment kick. You lose a little bit because part of it goes to imports. But then you also get the effect on net exports that comes from the exchange rate, OK? So monetary policy is a great policy in the open economy, because it gets reinforced by the exchange rate. It's even better than fiscal policy, other things equal, when you compare the two. Both policies lose power relative to the closed economy because the multiplier is smaller. But the difference is that the interest rate policy gets an extra kick that comes from the depreciation of the currency. So it's a very powerful tool. So that's what you do if you have a flexible exchange rate. And that's what countries do in practice when they are free floaters. Now suppose you have a fixed exchange rate regime, a credible fixed exchange rate regime. Then I ask you again the question, what kind of fiscal policy would you run in that country? The same, expansionary. That's what you would do. And it's effective, as it was in the closed economy-- a little less, because the multiplier is a little smaller. That's it. But there's no difference in the analysis. In fact, fiscal policy has exactly the same effect as fiscal policy under flexible exchange rates in this case, because the exchange rate hasn't moved in either of the two cases. What should the central bank do? That's a trickier question. AUDIENCE: If there's a [INAUDIBLE] does the central bank know naturally what the foreign country's doing? RICARDO J. CABALLERO: Yes, the central bank knows. So it will have to match it. Yeah, exactly. So the central bank cannot do anything. I'm saying, suppose it is an [INAUDIBLE] recession. This country is in a recession. Now it wants to use its policy tools to deal with that. It has fiscal policy, but it doesn't have monetary policy. Unless the cycle of the other country coincides with your cycle-- it's a global recession or something like that-- you're doomed, because the other country is doing monetary policy for what they need, not for what you need. And therefore, you don't have monetary policy. So that's a costly thing about a fixed exchange rate. We already said that, but now we're making it very concrete: we are in a recession, and you realize that you don't have a tool that you had before. So that's a cost of a fixed exchange rate. Here is an example. What I'm plotting here is the policy rate in the US-- that's the blue one-- and the policy rate in Hong Kong. Very small differences, but those are technical things. You can see that Hong Kong has to follow the US, essentially. It's exactly the same shape. So Hong Kong doesn't have independent monetary policy, OK?
Again, those are technical gaps, not really-- just look at the shape. It's exactly the same, not moving around. So Hong Kong doesn't have monetary policy, period. It's not something they have. If they get a recession that has to do with their own cycle, and that is not a result of something that is happening in the US, they don't have the tool to deal with it. Of course, during COVID and during the global financial crisis, they were aligned, so they [CHUCKLES] would have moved in the same direction. That worked. But if there is a shock that is Chinese-centric and is affecting Hong Kong, US monetary policy is not going to react to that. And that's a problem for Hong Kong. Still, they choose to do it. A good question is, why? There is always politics; there is more to it than this kind of consideration. But there are also economic arguments for why you may want to do these things. Another situation that I mentioned, which happens all the time-- every other day in Argentina, for example-- is speculative attacks on the currency. So you want to have a fixed exchange rate, but the markets don't believe that you're going to be able to keep it there. So what happens? Look at this equation here. Suppose that you have a fixed exchange rate, but now the market thinks you're not going to be able to sustain it. So suppose that this guy starts going down. This happens, again, in Argentina every other day-- probably today, every single day. They say that they want [INAUDIBLE] exchange rate, but the markets don't believe them. And they expect the currency to lose value in the next few hours, in the case of Argentina. So this guy is going down. [WHISTLES] What happens to the current exchange rate? The expected exchange rate goes down. Everyone expects your currency to drop. What will tend to happen to the currency today, to the Argentinian peso today? AUDIENCE: It also drops. RICARDO J. CABALLERO: It drops. But if you have a fixed exchange rate, you can't let it drop. And so if you're going to maintain a fixed exchange rate, and now you have a speculative attack-- people think your currency is going to drop-- and you want to maintain your peg, that's called defending the peg. If you want to defend the peg, then the only option you have, as this guy is dropping, to keep the exchange rate fixed, is to raise interest rates a lot. That's the main way you fight it. I mean, you can fight it by closing the capital account and so on, but that's the last resort. You first try to fight it with monetary policy. So if this thing is dropping, you fight it by increasing the interest rate a lot. And that's the way you stabilize the currency. But what happens when you raise interest rates a lot to defend the parity, the peg? What is the problem with that? AUDIENCE: Decreases output. RICARDO J. CABALLERO: Yeah, you generate a domestic recession, because just to defend your currency, your peg, you had to raise interest rates a lot. So it means you're going to have a recession at home, OK? So that's another problem with fixed exchange rates. That's not a problem for Hong Kong-- well, it was in 1997. They did have a speculative attack, despite the fact that they had twice the amount of reserves relative to their money base. But it rarely happens in Hong Kong. In Argentina, again, it's every other day. Same in Turkey-- in Turkey, it's every 15 days. But it's happening all the time.
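A quick numeric sketch of how costly defending a peg can get, rearranging the UIP condition (all numbers are illustrative assumptions):

```python
# Interest rate needed to hold E_t at the peg when markets expect a
# devaluation: from UIP, (1 + i) = (1 + i_star) * e_peg / e_expected.
def rate_to_defend(i_star, e_peg, e_expected):
    return (1 + i_star) * e_peg / e_expected - 1

print(rate_to_defend(i_star=0.03, e_peg=1.00, e_expected=1.00))  # 0.03
print(rate_to_defend(i_star=0.03, e_peg=1.00, e_expected=0.95))  # ~0.084
# A 5% expected devaluation forces the domestic rate roughly 5
# percentage points above the foreign rate -- hence the recession.
```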
So that's a problem as well, because if you have to spend a lot of energy defending your peg, then you're going to be causing lots of recessions at home just to stabilize the currency, OK? You might say, well, OK, Argentina, Turkey, and so on-- but speculative attacks hit the bigger boys too. Here we have the ERM crisis. So before the euro, more or less, the eurozone plus the UK had a system called the EMS-ERM. EMS is the European Monetary System; ERM is the Exchange Rate Mechanism, and they're linked. But let's call it the European Monetary System. And the basic idea of the European Monetary System was that they behaved very much like Singapore with respect to each other, meaning they allowed their exchange rates to move around, but only within narrow bands. The countries that had the more stable domestic monetary positions, like France, [INAUDIBLE] Germany-- the Deutschmark and the French franc-- had bands of 2.5% up and down. And they moved within those ranges. They didn't have a full peg, but they allowed themselves to move a little bit. Portugal, which had a little less discipline, had 5% on each side, and stuff like that. But the point is they had narrow bands, OK? And they moved around in those narrow bands, and they kept it going for quite a while before the euro. Now, here the whole system came under a speculative attack. What happened around then? You probably have no idea. [LAUGHS] AUDIENCE: Maybe the end of the Cold War? RICARDO J. CABALLERO: Well, it's really linked to that, yes. It's the German reunification. So what happened is, when East Germany and West Germany unified, they had to have massive expansionary fiscal policy. And that big expansion put lots of upward pressure on German interest rates. And that led to big appreciations of the Deutschmark. And the other countries tried to fight it, because they had to stay in these very narrow bands. But they were experiencing these big speculative attacks, and so they had to raise interest rates enormously. The UK tried to do it for a while and, essentially, said, ah, we give up. And then they abandoned the system. The French tried to stay in there for quite a bit, OK? You can see the French franc. They didn't move. They didn't move. But it was extremely costly for them, because the interest rate had to go up a lot. And so they got into a big recession and so on. Eventually, the whole system broke down. I mean, everyone left. And eventually, they rejoined, but now in the euro. And the euro is a little different because, in the euro, there's no space for that kind of speculative attack, because it's a single currency, OK? That's the most extreme form. Speculative attacks nowadays in Europe happen through different means-- well, anyway, let me not get into that. But here you have-- so I'm saying, having a fixed exchange rate is not easy, even for countries that have very well developed financial markets and so on and so forth. Now, it would seem, given all that I said, that there is no reason to have a fixed exchange rate. You give up an instrument. And on top of that, you're subject to speculative attacks-- well, not all the time; it depends on how bad you are. But you have to be very well behaved, because otherwise, you're subject to speculative attacks all the time. So why not have a flexible exchange rate?
What is wrong with flexible exchange rates? Well, I think the main problem with flexible exchange rates is that they tend to move a lot. I mean, we know that the exchange rate moves a lot more than fundamentals-- meaning, productivity is a little higher in one country than in the other, demand is a little higher in one country. But the exchange rate moves a lot more than those little differences justify. And one way of understanding this is the following. And this will serve as an introduction to the next topic of the course, which will be asset pricing and things like that. So let's revisit our interest parity condition, but now let's not assume that the expected exchange rate is fixed. I mean, that was an assumption just to make our life simpler, but it need not be. So the uncovered interest parity condition is this. Well, you see, I can replace this guy here with what will happen next period. It's the same thing shifted by a period, with an expectation there. So this guy here is equal to 1 plus the expected domestic interest rate one period from now, divided by 1 plus the international interest rate expected one period from now, times the expected exchange rate for t plus 2, two periods from now. And I can keep doing this. I can replace this by something equivalent to that with all the subindices shifted by one period, blah, blah, blah. And so I can end up writing this exchange rate as a product of lots of things that can happen in the future: the monetary policy path at home and the monetary policy path in the other country-- not just the next period, the path for years to come. And there is always an expected exchange rate at the end there that is free, that can move around. So the problem with the exchange rate is that the future matters too much, in a sense. And people have lots of imagination, so they imagine all sorts of weird things. And when people have lots of imagination, these things move a lot. And that's the reason you see enormous fluctuations in nominal exchange rates. And that's a problem. It's a problem to have a very volatile exchange rate, because it makes transactions more difficult. I mean, if the relative prices of things are changing all the time, it's a little bit more difficult to plan. Financial investments become riskier, because you get all this exchange rate volatility in between. So that's one of the main reasons you would prefer, if you could, to have a more managed exchange rate: because you don't want all this artificial volatility that comes from behavioral traits and things of that kind, OK? That's the main reason. Look at this example. This is Russia during the war. This was the ruble, the Russian currency. And when they invaded, of course, this thing collapsed. The currency collapsed, OK? And this period is a little longer than you think. But it collapsed for quite a while. And then it recovered a lot-- actually overshot and came down. So this is not because the central bank said, we're going to devalue the currency or anything. It's just that people said, well, this is a country going into war. It's going to be a mess. [MUTTERS] So all that future I talked about essentially destroyed the currency, OK? Now, a lot of that recovery there happened not because people began to see the future as a better future or anything like that. It's because they had to hike interest rates massively.
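Written out, the forward iteration just described (a sketch; the superscript e marks expectations held today):

```latex
E_t = \frac{1+i_t}{1+i_t^*}\,E^e_{t+1}
    = \frac{1+i_t}{1+i_t^*}\cdot\frac{1+i^e_{t+1}}{1+i^{*e}_{t+1}}\,E^e_{t+2}
    = \left[\prod_{k=0}^{n-1}\frac{1+i^e_{t+k}}{1+i^{*e}_{t+k}}\right] E^e_{t+n},
```

so today's exchange rate loads on the entire expected path of domestic and foreign interest rates, plus a free expectational term at the end-- which is why news about the distant future can move it so much.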
They were around 4% or 5%, and they had to go to 20% interest rates to defend the exchange rate. Remember I told you, if you have a speculative attack, and you have enormous pressure on your currency, the main tool you have to offset that is to raise interest rates. So they hiked interest rates massively. They did a lot of other things as well. They put capital controls on lots of things. But this was the main thing they did. And so they dragged the economy into recession, for war-related reasons and because of the monetary policy response that came with them, OK? So that's an extreme case of a war. But that's the kind of thing that can happen with a floating exchange rate. I mean, the main constraint in Argentina and Turkey and so on is reserves. They don't have enough reserves. So if you have to defend your currency by intervening in the FX market, and you don't have enough, then you're not credible. I mean, if you have massive capital outflows, and you have a few billion dollars there, it's not going to work. That was not the case for Russia. They had massive amounts of reserves, so that was not the issue. It was all about expectations of things that happen in the future. It was all about these kinds of terms, OK? Anyway, that added to the cost they had. So how do we choose these things, then, in practice? Again, there are lots of considerations, and politics plays a role and so on. But I would put it even the other way around. I think that, if you could, you would like to have a fixed exchange rate, because then you remove all this spurious noise that happens every single day because of exchange rate volatility and that complicates your life. So when can you do that? Well, first, you can do it with respect to some other country whose shocks are very similar, because if you know that-- say you're Mexico, or the north of Mexico, something like that-- and you know that all your shocks are really shocks to the US, then the US can do the monetary policy for you, because you have the same shocks. [CHUCKLES] I'm exaggerating. So if you are very similar, then it makes sense to have a fixed exchange rate, because why pay for all that volatility when you're going to be doing the same policies in both countries, more or less at the same time, because you're exposed to the same shocks? So that's one thing. That's one reason why the eurozone is the eurozone: because there are European countries that have very similar business cycles and so on. Germany is a little different. That's the reason they always have some problems. I mean, the north and the south, they're a little different. But they're much more similar than other countries. And so that's why they have one. Another option is when you have lots of fiscal capacity, because, if you have lots of fiscal capacity, then the cost of not having monetary policy is not that large-- you can fight your business cycle with fiscal policy. That's the case of Hong Kong, for example. Hong Kong, first of all, is not subject to speculative attacks, because they have massive amounts of reserves. So anyone that dares attack them is going to lose their shirt. So they're safe. Soros tried many years back, and he didn't do as well as he did attacking the British pound. But they also have lots of fiscal resources, so they can fight their domestic recessions and so on with fiscal policy.
The other factor, which also applies to Hong Kong, is if you have very flexible factor markets. If wages move very easily and prices move very easily domestically, then you don't care about having a fixed nominal exchange rate, because a fixed nominal exchange rate is not the same as a fixed real exchange rate, which is what you really need to move around. If your prices are flexible, it doesn't matter that the nominal exchange rate doesn't move, because the prices are moving around, and so you still have lots of flexibility in the real exchange rate. And that's one of the reasons-- there are political reasons as well-- why they can afford it. I think, in the case of Hong Kong, it's the other way around: there were some political reasons, and then they built a system around it so that it's a coherent system, because they have lots of fiscal capacity, they can defend the currency well, and they have very flexible markets. Well, again, this is what I said before. If you don't like that noise-- especially if you're going to be trading a lot with people and so on. In Europe, many people cross the border many times a day, and you want to go shopping one way or the other. It's a pain if the exchange rate is moving all the time. It's much easier if things are stable. And the same applies to financial transactions. People have deposits in different banks and stuff like that. It's better if you don't have all that fluctuation. And in the case of the euro area, they decided that the advantages of having a very fixed exchange rate were greater than the cost for individual countries of not having independent monetary policy. It's still a work in progress. They're building it. It's not finished, but they're working at it. The other reason why countries fix the exchange rate-- and that's the Argentinian reason and so on-- is when they have no control over inflation; they have no credibility. So if you peg to another currency that has credibility, then the idea-- the hope, at least-- is that you will inherit the credibility of the other currency. And that's what they tried to do. Argentina had a currency board like Hong Kong's for a while. The whole idea was to control inflation: well, let's peg to someone. And if the markets believe you, then it works, because you inherit the credibility of the other currency. You're saying, when I take a fixed exchange rate, I'm not going to run monetary policy, which is the main source of inflation. I'm going to let the credible country run the monetary policy for me. And that's what gives you credibility, as long as the markets believe you're not going to [LAUGHS] quit the thing. But that's the reason countries do it: often to stabilize inflation as well.
Stanford_CS229_Machine_Learning_Course_Summer_2019_Anand_Avati
Stanford_CS229_Machine_Learning_Summer_2019_Lecture_11_Deep_Learning_II.txt
Welcome to Lecture 11 of CS 229. Um, the plan today is to wrap up deep learning. We left off at the end of backpropagation last lecture, and the last part was probably a little bit hurried, so we're gonna cover it again just to make sure you all understood it properly. And then, once we wrap up deep learning, the goal is to cover regularization today. And before we start regularization, we will give a fairly informal introduction to bias-variance, just enough so that we can cover regularization in a meaningful way. And we'll go deeper into bias and variance again in the Friday lecture. So to recap where we left off last lecture: we discussed neural networks. A neural network is basically a composition of multiple layers of computation, where each layer, you can think of as a vector of neurons, and each neuron, which is represented as a circle in each of these layers, takes as input the entire output of the neurons from the previous layer, performs some kind of computation, and the output of this neuron becomes the input for the neurons of the next layer, right? So networks are structured in this compositional way, where the output of one function becomes the input to the next function, and the output of that function becomes the input to the next function, and so on. And in the case of binary classification, where the goal of this network-- which is also called a neural network-- is to perform classification, the last layer, which is also called the output layer, will have just one neuron, and the output of this neuron is considered the output of the network. Okay. And the value of the output of the network depends on the parameter values within the network and also on what input we fed into the network, right? So the input of the network gets translated to the output of the network, and corresponding to the input, there is a correct answer y^i, right? We take y^i and the prediction y-hat^i and construct the loss, right? And the goal is to minimize the loss for every x^i, y^i pair, and the way we go about doing that is through gradient descent, right? We have seen gradient descent for linear models before, where the idea is to take the gradient or the derivative of the loss with respect to the parameters-- not with respect to the input-- and update the parameters to take a small step in the direction of the negative of the gradient, right? And we're gonna do the same thing with the neural network. And the only challenge with the neural network-- not really a big challenge-- is how we're gonna compute the gradient of the loss with respect to all the parameters simultaneously, right? And that's where the backpropagation algorithm comes into the picture. And backpropagation is essentially just the chain rule. So if you're familiar with the chain rule, then backpropagation will seem very natural. There's no magic in backpropagation. It's just the chain rule. Um, so the same network: this is the network view, and this is the computation graph view of the same network.
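As a minimal sketch of the pipeline just recapped-- forward pass, loss, gradient step-- here is illustrative Python. The use of sigmoid everywhere and the parameter layout are assumptions to keep it short, and the gradients are left as an input, since computing them is exactly what backpropagation (covered next) is for.

```python
# Illustrative sketch, not the lecture's exact code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, params):
    """params: list of (W, b) pairs; the last layer outputs a scalar y_hat."""
    a = x
    for W, b in params:
        a = sigmoid(W @ a + b)  # z = W a + b, then elementwise nonlinearity
    return a

def loss(y, y_hat):
    # negative log-likelihood for binary classification, as above
    return float(-(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))

def sgd_step(params, grads, lr=0.1):
    # grads come from backpropagation (this lecture's topic)
    return [(W - lr * dW, b - lr * db)
            for (W, b), (dW, db) in zip(params, grads)]
```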
And before we go into the computation graph view, let's remind ourselves: if we have a function f that takes an input in R^n and outputs a vector in R^m, a vector-input, vector-output kind of function, then the derivative of the output of that function with respect to its input will be a matrix, with as many rows as there are outputs and as many columns as there are inputs. This is called the Jacobian of the function. Yes, question? [inaudible] Yeah, so the question is: when we covered the gradient ascent and descent techniques, the maximum likelihood techniques, we saw methods where we take the gradient of the loss with respect to the parameters and take a small step. Why not take the gradient, set it equal to 0, and solve for the parameters directly, like we did with linear regression? The answer is: do that whenever it's possible. Whenever you can take the gradient, set it equal to 0, and solve for the parameter values, absolutely do that. However, for a lot of models, even including logistic regression, there is no analytical solution: you can take the gradient, set it equal to 0, and try to solve for the parameter values, but you just won't get a closed-form solution. That's the main reason. The other reason is that, when your loss function is not convex, there may be lots of local minima, and when you set the gradient equal to 0, in theory you could land in any of the local minima, or even at a local maximum, if your loss function has local maxima. Setting the gradient equal to 0 just takes you to a stationary point, whereas with gradient descent we can still end up in a local minimum, but at least you stay away from local maxima. [inaudible] So the question is: why not numerically calculate what value of the parameters sets the gradient equal to 0? Gradient descent is exactly the algorithm you would use to do that numerically. All right. So this is the Jacobian: if you have a vector-valued output and a vector-valued input, the derivative of the output with respect to the input is a matrix whose number of rows equals the number of elements in the output vector and whose number of columns equals the number of elements in the input vector, right? And now we're going to see how the Jacobian gets used here, because we're going to start thinking of this network as blobs of computation. Every blob over here that is not a rectangle, you can think of as some function that takes some inputs: incoming arrows into the blob are the inputs to the function, and outgoing arrows are the outputs. So for example, consider this function f whose output we call a1 and whose input is z1. The derivative of this function, whose output is some vector and whose input is some vector, is going to be a matrix.
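To make the Jacobian concrete, here is a minimal NumPy sketch (illustrative, not from the lecture) that estimates the Jacobian of a vector-valued function by finite differences; the example function and the step size eps are arbitrary choices.

    import numpy as np

    def numerical_jacobian(f, x, eps=1e-6):
        # Jacobian of f: R^n -> R^m at x, shape (m, n): rows = outputs, cols = inputs.
        y = f(x)
        J = np.zeros((y.size, x.size))
        for j in range(x.size):
            x_pert = x.copy()
            x_pert[j] += eps
            J[:, j] = (f(x_pert) - y) / eps
        return J

    # Example: f(x) = (x1*x2, x1 + x2) has Jacobian [[x2, x1], [1, 1]].
    f = lambda x: np.array([x[0] * x[1], x[0] + x[1]])
    print(numerical_jacobian(f, np.array([2.0, 3.0])))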
The derivative of this function is going to be a matrix, right? And the derivative of the vector a with respect to z is, therefore, the Jacobian of this function. Okay. With this computation view, we have basically represented what happens in this network as a computation graph. We take the input; the input goes into the first function, whose inputs are the input example itself and two parameters. All the blue blocks are the parameters with respect to which we want to perform gradient descent. The first layer takes the input example and the two parameters, computes z1 = W1 x + b1, and we call that z1. After that, it applies a nonlinearity in an element-wise fashion, and we get a nonlinear output, which we call a1, right? This entire set of operations we call one layer; this makes up a layer. Then we take the output of this layer and run it through the next function, which is again W2 a1 + b2, and we get a vector z2; from z2, we apply another nonlinearity, which gives us a vector a2. So we're hopping from vector to vector to vector: there's some function that transforms one vector into another, another function that transforms that vector into another, and so on. And this can go on for any number of layers; with such networks, we generally use the term capital L to denote the number of layers. This story repeats all the way until we come to a^(L-1), the output of the (L-1)th layer. Now, because we're trying to do binary classification, our output needs to be scalar-valued, so this last function has a scalar output: take the a^(L-1) vector as input, dot it with the weight vector w^L, so w^L dotted with a^(L-1) is a scalar, add another scalar b^L, and then apply a nonlinearity again, and we get y-hat. That's the output of the network, right? Once we have the output of the network, we take the y^i corresponding to the x^i [NOISE] to compute a loss. In the case of binary classification, the loss is the negative of y log y-hat plus (1 - y) log(1 - y-hat). Now, to perform gradient descent, we need to take the derivative of the loss with respect to b^L and w^L, with respect to b2 and W2, with respect to b1 and W1, and so on, right? Once we calculate those gradients or derivatives, we can perform gradient descent, right? So that's the setting in which backprop comes into the picture.
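Here is a minimal NumPy sketch of the forward pass just described, for one example through a two-layer binary classifier; the layer sizes and the tanh hidden nonlinearity are illustrative assumptions, since the lecture leaves g generic.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    d, m = 4, 5                       # input dim, hidden dim (illustrative)
    x = rng.normal(size=d)            # one example x^i
    y = 1.0                           # its label y^i

    W1, b1 = rng.normal(size=(m, d)), np.zeros(m)
    w2, b2 = rng.normal(size=m), 0.0  # last layer: vector weight, scalar bias

    z1 = W1 @ x + b1                  # z1 = W1 x + b1
    a1 = np.tanh(z1)                  # element-wise nonlinearity g
    z2 = w2 @ a1 + b2                 # scalar, since the output layer has one neuron
    y_hat = sigmoid(z2)               # network output

    loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    print(loss)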
Now suppose we want to compute the gradient of the loss [NOISE], a scalar, with respect to some element of a weight matrix, say W2_ij, the ij-th element of W2. The chain rule of calculus tells us that, because all these functions are compositions where the output of one becomes the input of the next, we can take the local derivatives of each of these and multiply them up, right? That's basically the multivariate chain rule of calculus. The reason the multivariate chain rule is different from the regular chain rule is that the intermediate values can be vectors: here the loss L is a scalar and W2_ij is a scalar, but the intermediate values are all vectors, which is why we use the multivariate chain rule. And what does this come out to? First, let's calculate all the intermediate pieces, and then we'll join them together. The picture to have in mind is that you're jumping from box to box in our computation graph, so we take the derivatives box to box as well. First, we start from the very end: the loss with respect to a^L. Scalar with respect to scalar; what's the dimension of this going to be? [OVERLAPPING] Scalar, right? So this gives us the gradient of the loss with respect to a^L. Then a^L with respect to z^L. [OVERLAPPING] That was a scalar, this is a scalar; what's the derivative going to be? Scalar again, very good. A student points out that the first derivative should have been written with respect to a^L, not a^2; you're right, thank you, that's a^L. And now from z^L: what's the derivative of z^L with respect to b^L? What dimension is it going to be? Scalar over scalar. Scalar, very good. And now, what's the derivative of z^L with respect to w^L? This is a scalar and this is a vector. [BACKGROUND] Right, so according to the Jacobian view, m, the number of outputs, is one, and n, the number of inputs, is the size of w^L, so it's going to be a row vector, because we have one output and some number of inputs. So the partial of z^L with respect to [NOISE] w^L is a row vector, right? That was one branch, and we're going to continue down the main trunk of the network: at each layer we branch off into the parameters, but we continue down the trunk to reach the previous layer's parameters. So from z^L to a^(L-1): z^L is a scalar, a^(L-1) is a vector, so the derivative is going to be? [BACKGROUND] A vector again, but it's still going to be a row vector: scalar output, vector input, and following our convention, scalar output with vector input means the Jacobian is a row vector. So del z^L with respect to del a^(L-1) will be a row vector. All right, and so on: we go from layer to layer, and say we reach here from whatever the next layer was. From here to here, vector-valued output, vector-valued input, so the Jacobian is going to be a? [BACKGROUND] Matrix. Exactly.
So here, this will be a matrix: del a^2 over del z^2. [NOISE] Right? And similarly from here to here, the derivative of z^2 with respect to b^2: this is a vector, this is a vector, so the Jacobian will be a? [BACKGROUND] Matrix. However, for the parameter matrices, before we calculate the derivatives as a full Jacobian, we're going to make use of the fact that we can calculate the derivative with respect to each element of the matrix or vector separately. The reason we calculate the derivative of the z vector with respect to each element separately is purely computational. You could have followed the same approach, calculated the Jacobian here, taken the derivative of a vector with respect to a matrix, and gotten something three-dimensional. Mathematically that's perfectly fine, but computationally it's going to be very inefficient. So for computational purposes, we take the derivative with respect to each cell in this matrix separately. The goal is to calculate it with respect to W_ij: we want to start from a scalar, end with a scalar, and take the derivative, which gives us a scalar. That's the idea here. So let's see how that works out. Consider, say, the ij-th element, [NOISE] and go from z [NOISE]. What's the dimension going to be here? z is a vector and W_ij is a scalar, so the derivative will be a? [BACKGROUND] Row vector or column vector? [BACKGROUND] The number of outputs is the number of rows; here there are multiple outputs, so the matrix has multiple rows, but the number of columns is the number of inputs, which is just one scalar. So this will be a column vector: del z^2 over del W2_ij. All right? Now this recipe can be applied everywhere, so we're missing just one more: the z^2 to a^1 step, which is again vector output, vector input, so this is also going to be a? [BACKGROUND] Matrix, a Jacobian matrix. [NOISE] Okay. Now, a few observations we want to make. The first is that what we do in each layer repeats in every layer: once we figure out what each of these values is, we can just repeat them at every layer, because each layer is a repetition. Only the number of dimensions will differ, but the math we use to calculate the ij-th element is the same at each layer, right? We're essentially toggling between z to a, a to z, z to a, a to z, and so on, until we come to the (L-1)th layer's a. So we have something non-repetitive at the last layer, followed by something that repeats, alternating, roughly L-1 times, all the way until we reach the input. So it's sufficient to see how this pattern works for one example, and once we do that, it's very easy to repeat the same recipe for any given parameter in the network. Yes, question? So for every W within the intermediate layers, we take that derivative with respect to each element? With respect to each element. Yes, that's right.
So yes, the question is: are we going to take the derivative with respect to each element in W? Yes, that is what we will do. The reason is that it's computationally efficient. You can take the derivative with respect to the entire matrix directly, and the intermediate step of going from a matrix to a vector will give you something three-dimensional. It turns out that in that entire three-dimensional volume, pretty much everything is zeros, except a small fraction of non-zeros, and those are the same values we get if we take the cell point of view. So think of it as one scalar parameter to the scalar loss, with a whole bunch of Jacobians in the middle. But what do we do for layer k [inaudible]: if we have the derivative with respect to W_k and b_k, I'm guessing we plan to reuse that for the k-1 layer, is that correct? How can we reuse that computation? So let's work out one example, and maybe that'll answer your question. What we're going to do now is calculate the gradient of the scalar loss L with respect to W2_ij, and we'll see what happens with all the layers in between. So the derivative of the loss with respect to W2_ij is equal to, using the chain rule to break it up into components: the derivative of the loss with respect to a^L, times the derivative of a^L with respect to z^L, times the derivative of z^L with respect to a^(L-1), times the derivative of a^(L-1) with respect to z^(L-1), and so on, all the way down until the derivative of a^2 with respect to z^2, times the derivative of z^2 with respect to W2_ij. Right? So this is just the chain rule. Now, the derivative of L with respect to a^L was a scalar, so this is 1 by 1. a^L with respect to z^L was again 1 by 1. Then z^L with respect to a^(L-1) was 1 by something; for the sake of simplicity, let's assume all the intermediate vectors have dimension m, so I'm just going to call this 1 by m. And a^(L-1) with respect to z^(L-1) was m by m, and so on, until a^2 with respect to z^2, again m by m, and z^2 with respect to W2_ij was? m by 1. m by 1, exactly: a column vector. And if each layer happened to have a different dimension, then this would be, say, 1 by m_(L-1), and this would be m_(L-1) by m_(L-2), and so on; they're still going to be Jacobian matrices, right? A quick numerical check of this shape bookkeeping is sketched below.
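A minimal sketch of that check: multiply random matrices of the stated sizes and confirm the shapes compose; m is chosen arbitrarily, and in the real network the m-by-m factors would be diagonal rather than dense.

    import numpy as np

    rng = np.random.default_rng(1)
    m = 5
    dL_da   = rng.normal(size=(1, 1))   # 1 by 1
    da_dz   = rng.normal(size=(1, 1))   # 1 by 1
    dz_da1  = rng.normal(size=(1, m))   # 1 by m (row vector)
    da1_dz1 = rng.normal(size=(m, m))   # m by m (diagonal in the real network)
    dz1_dw  = rng.normal(size=(m, 1))   # m by 1 (column vector)

    grad = dL_da @ da_dz @ dz_da1 @ da1_dz1 @ dz1_dw
    print(grad.shape)   # (1, 1): the daisy chain collapses to a scalar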
And now we can see that the derivative of the loss with respect to W2_ij must be 1 by 1, scalar output with respect to scalar input, right? But in the middle, we have all these matrices and vectors. What we will see is that when you start multiplying them out, the first two factors together are 1 by 1; with the next one, 1 by m_(L-1); then the m_(L-1) dimensions cancel, so this becomes 1 by m_(L-2); and so on until you reach the end, where you have 1 by m times m by 1, which gives us 1 by 1. All right? So this long daisy chain of Jacobians will always, always condense into a scalar when you multiply it out. And if it doesn't, there's a bug in your math somewhere, because it's the scalar loss with respect to a scalar parameter, and all of these need to collapse into a scalar. Right? Now we can make two observations. In place of W2, what if we wanted to calculate the loss with respect to W1_ij? We did it up to layer two, up to W2; what if we wanted to do it to layer 1? [BACKGROUND] We branch off at z1 instead. Exactly. To calculate the derivative of the loss with respect to W1_ij, everything we did up to z2 is the same. At z2, instead of branching off for W1, we continue on the trunk: we take this Jacobian and this Jacobian and we end up at z1, and from z1 we branch off, just the way we did it here, right? So to compute the derivative of the loss with respect to W1_ij, we reused everything up to z2, and this could be quite a long list of computation. We ignore only what we did in the branch specific to that layer, and then get two more Jacobians and branch off into this layer. That's how we reuse computation. If we were to calculate the loss with respect to W1_ij, then ignore it, forget it, and start all over again for W2_ij, we would be re-computing all these intermediate Jacobians a whole number of times. Backpropagation is an algorithm that tells you the most efficient way to compute these gradients, by reusing the intermediate computation as much as possible. So where were we? [NOISE] If we wanted to calculate the derivative with respect to W1_ij, this would be: the derivative of the loss with respect to z2, times del z2 over del a1, times del a1 over del z1, times del z1 over del W1_ij. All of the earlier factors got reused over here: we would have already computed that value, so we just save it and plug it in here, and compute only these extra Jacobians and derivatives, right? So the reused part is still 1 by m, and then m by m, m by m, m by 1; you combine these and get 1 by 1, so this is also 1 by 1. Yes, question? [BACKGROUND] Yeah, so the derivative of the final loss with respect to W1_ij will be the same until z2: we're going to reuse everything we used for W2_ij up to the z2 layer, where we branched off for W2_ij. Instead, we will continue by calculating two more Jacobians and branch off towards W1_ij, right?
So everything up to z2 is written here, and the thing highlighted in the blue square is the same, so we're going to just reuse everything we did up to here and discard the last component, which was in the branch. The reused part plus these two extra Jacobians now extends down the trunk, and the last part is the branch that's specific to layer 1. [BACKGROUND] Yes, question. You said we reuse dL over dW; what exactly do we reuse? So dL over dW2 is what we calculated for the second layer, and this was an intermediate result in that computation, right? We're going to reuse that intermediate computation when calculating the derivative of the loss with respect to W1. And that's important: we are not using the gradient of the loss with respect to W2 to calculate the gradient of the loss with respect to W1, because this branch, what happens here, has no bearing on the loss with respect to W1, right? We're only going to use the parts up to z2 and then calculate the rest, and we're going to discard everything in the branch that's specific to W2 when calculating the derivative with respect to W1. Any questions on this before we see how the individual components look? All right. So let's look at this again. There are lots of intermediate Jacobians here. One observation we can make: everything to the right of the last layer's input a^(L-1) should be familiar to you. It may not look familiar, but you have seen this before: this is just logistic regression. If this were your x, if you just fed your inputs in at this layer, what you have here is logistic regression. The way to see it as logistic regression is to combine these weights into a theta vector, extend the input by adding a one for the intercept term, and you get theta transpose x, a scalar; take it through the sigmoid function, you get your y-hat; and you take your y and y-hat and compute the log-likelihood, right? So everything to the right of this blue line over here is logistic regression. There is no difference whatsoever; the difference is only notational. Computationally, it's just logistic regression. [BACKGROUND] After the last layer? Right: think of this as your theta vector, theta_0 and theta_1 through theta_d, and this is your x^i, and assume there's an extra intercept term; take the dot product between theta and x, you get theta transpose x; take it through the sigmoid, and you get your y-hat. Yes, question. [inaudible] So I've drawn this as a row vector here, but think of it as just some vector; don't worry too much about whether it's a row vector or a column vector. [inaudible] We take the inner product, the dot product, between this vector and this vector, so the output is a scalar, yeah. [inaudible] If this were a matrix and this were a vector, then the output would also be a vector. And why is it a row vector here? Is it because you want a scalar, and so how do you choose that?
Yeah, because we want a scalar here, exactly. Because we're doing binary classification, we want a single y-hat. In fact, if you want to do multi-class classification with a neural network, then this last weight would be a matrix: you would have one output element per class, and you would have a softmax instead of the sigmoid, right? So think of everything to the right of this line as a GLM. If you're doing classification, it is logistic regression; if you're doing regression, this is linear regression; if you're doing counts, this is Poisson regression. So you can attach a GLM at a^(L-1). Right. So now, in order to compute these individual parts, [NOISE] we make the observation that dL over da^(L-1), this whole thing all the way up to here, is basically what we saw in the case of logistic regression. In logistic regression, the corresponding derivative was (y - h_theta(x)) times x^i. The equivalent thing we're going to have here is (y^i - a^L) times w^L. Does that make sense? Can you explain the reason why [inaudible]? So if you think of [NOISE] a^(L-1) as the input of logistic regression... wait a second, I think [NOISE] I flipped it here: this should not be a^(L-1), this should be w^L. In logistic regression, you had a theta vector and an x vector, right? The derivative of the loss with respect to theta had x in it, but x and theta are symmetric, so the derivative of the loss with respect to x would have been (y - h_theta(x)) times theta. And here we're taking the derivative with respect to the activation. Doesn't the loss give (y - h_theta(x)) times x? Yeah: in logistic regression, [NOISE] del L with respect to theta was (y - h_theta(x)) times x, and del L with respect to x would be (y - h_theta(x)) times theta, because x and theta are symmetric here. Right? So this whole thing over here evaluates to exactly that. You see the parts getting used here: pretend this was x, pretend this was theta of logistic regression, calculate the derivative of the loss with respect to this theta, and you get this. [NOISE] And in the notes we have worked out the details; you can see step by step how you get this for the initial part. Once we compute this, the rest over here are the repeating patterns in our chain of Jacobians. This was the part specific to the end of the network, which is like a logistic regression or any other GLM; what happens further down is just a to z, z to a, a to z, z to a: the repeating pattern per layer. Yes, question. Why does the weight show up instead of [inaudible]? Oh, here? Yeah: see the similarity with logistic regression. If you take the derivative of the loss with respect to theta, you get an x; if you take the loss with respect to x, you get a theta. So similarly, if you're taking the derivative of the loss with respect to a, the w will show up here.
And if you're taking the derivative of the loss with respect to w, an a will show up here. Okay? [NOISE] Right. So in this long chain, we have condensed the first factors by observing that this is essentially just logistic regression, and we know the gradient for logistic regression. This is 1 by m_(L-1), so it's a row vector: all the way up to here is a scalar loss with respect to a vector, so it's a row vector. Does that make sense? Now that we've got the row vector, we're going to calculate these matrices, and the matrices are again pretty straightforward. So let's try del a^(l-1) over del z^(l-1). How do we go from z to a? We perform an element-wise nonlinearity, right? So in this matrix, the outputs correspond to the a's and the inputs correspond to the z's, and it will be a square matrix, because we're just performing an element-wise operation. And because it's element-wise, the ith element of z has no effect on the jth element of a if i and j are different, right? So this will be a diagonal matrix: the Jacobian is diagonal because it's an element-wise operation, and there's no interaction between a_i and z_j if i and j are not the same. So this will just end up being g prime along the diagonal: g prime, g prime, g prime. a and z are both vectors with the same dimension? Yeah: a_i equals g(z_i), an element-wise operation, so the Jacobian of the a vector with respect to the z vector will be diagonal, because del a_j with respect to del z_i equals 0 if i is not equal to j. [inaudible] Right, exactly. So if this were not an element-wise operation, then here we would have da_1 by dz_1, da_2 by dz_1, and so on, up to da_m by dz_n: you take every possible partial derivative of the output with respect to the input, and that's your Jacobian. Here, just because we're doing an element-wise operation, all the off-diagonal entries are zeros. Right? [inaudible] Yeah, if it's m by n, then this would be all the m by n partials, up to da_1 by dz_n; you're right. So what we observe is that in this chain of Jacobians, every other Jacobian is going to be a diagonal matrix, because an element-wise operation happens at every other step. So this is a diagonal matrix of g prime, okay? And now, what about del z^l over del a^(l-1)? By z^l to a^(l-1), we're asking: how do we get z^l from a^(l-1)? We compute z^l = W^l a^(l-1) + b^l. So if you work out del z^l by del a^(l-1) element-wise, you'll see this turns out to be exactly W^l. Why is that? Take the ith element of z: z_i equals the sum over j of W_ij a_j. So the partial derivative of z_i with respect to a_j is W_ij, which means this entire Jacobian is just the W matrix, right? [inaudible] Yeah, this is a matrix multiplication. In code, these two repeating Jacobians are usually handled as sketched below.
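Because the a-with-respect-to-z Jacobian is diagonal, you never materialize it in code: multiplying a row vector by diag(g'(z)) is the same as an element-wise product. A minimal sketch of one repeated backward step, with tanh as an illustrative choice of g:

    import numpy as np

    def backward_through_layer(delta_a, W, z, g_prime):
        # delta_a: row vector dL/da for this layer's activation a = g(z).
        delta_z = delta_a * g_prime(z)   # dL/dz: diag(g'(z)) applied element-wise
        delta_a_prev = delta_z @ W       # dL/da_prev: since z = W a_prev + b, dz/da_prev = W
        return delta_z, delta_a_prev

    g_prime = lambda z: 1.0 - np.tanh(z) ** 2   # derivative of tanh

    rng = np.random.default_rng(2)
    W, z = rng.normal(size=(3, 4)), rng.normal(size=3)
    delta_z, delta_a_prev = backward_through_layer(rng.normal(size=(1, 3)), W, z, g_prime)
    print(delta_z.shape, delta_a_prev.shape)    # (1, 3) (1, 4)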
Here I'm writing out this matrix multiplication for just one element. [inaudible] And the plus b is going to vanish, because we're taking the derivative with respect to a: there is a plus b, but when you take the derivative with respect to a, b is just a constant. [inaudible] Yeah, sure. All right, thanks. Right, so what you saw: the initial part is a linear model, followed by a diagonal Jacobian, followed by a Jacobian which is the (l-1)th layer's W, times a diagonal of g prime, times W^(l-2), and so on. So this long multivariate chain rule has this pattern of starting off with a linear model and then alternating between a diagonal matrix and a W matrix, diagonal matrix and W matrix, until we reach the z vector of the corresponding layer, right? And what happens at the end? The partial of, let's call it z2, with respect to W2_ij: this is the last part. [NOISE] Here we make the observation, starting from the partial of z2 with respect to W2_ij, that W_ij only interacts with a_j. After making this observation, make the other observation that the numerator here is a vector and the denominator is a scalar, so this Jacobian is a column vector: the output is a vector and the input is a scalar, so the rows come from the vector. Okay? We also observe that W_ij only impacts z_i: W_ij has no effect on any other element of z. So what does this mean? It's easy: z equals W times a, and you can think of the ith element of z as the dot product between the ith row of W and the a column vector, right? So any cell W_ij can only impact the ith element of z; it has no impact on any other element of z. So this Jacobian will have zeros everywhere except the ith element, and that ith element is going to be a_j: since z_i is the sum over j of W_ij a_j, and W_ij is the parameter with respect to which we're taking the derivative, [NOISE] a_j comes here, and for every other z_k, W_ij plays no role, so it's going to be zeros everywhere except a_j at the ith position. Does that make sense? So this vector over here is a column vector with a_j in the ith position and zeros everywhere else. Now what does this mean? We have a row vector over here and a column vector over here, right? If you take the dot product between this row vector and this column vector, we get the answer we're looking for. We also observe that the column vector has only one non-zero element and everything else is zeros. So when you take the dot product between such a column vector and this row vector, because most of the entries are zeros and only the ith element is non-zero, the dot product will be the ith element of the row vector times a_j. Does that make sense? So we have some row vector, times a column that is 0 everywhere except a_j at the ith position. If I call the row vector b_1 through b_k, the dot product of these two will be just b_i a_j. Is this clear? So the partial of L with respect to W_ij is the ith element of this row vector times a_j. Let me write that here.
[NOISE] So this whole row vector, let me call it delta_2, dotted with the column (0, ..., a_j, ..., 0), where delta_2 was basically the partial of the loss with respect to z2. So this was the row vector, this was the column vector, and the result is the row vector's ith element times a_j. Please explain the derivative of z_i with respect to W_ij. So this is the derivative we're interested in calculating, right? What we saw was that, because of the chain rule, it was this row vector times this column vector, but the column vector was mostly zeros except the ith term. Can you explain that calculation? The calculation going from here to here, or the entire thing? Just that one. Okay. So the derivative of z with respect to W_ij: what we saw is that the z vector is the W matrix times the a column, right? So the derivative of z with respect to the ij-th element of W will be non-zero only in the ith row of z, because it's just a matrix multiplication: W_ij can only impact the ith element of z. So only the ith element of this gradient vector will be non-zero; everything else is 0. [inaudible] Yeah, exactly. But following the formula you wrote there, w_i1 a_1 plus w_i2 a_2 and so on, how do you get a_j? If you take the derivative of that sum with respect to W_ij, only a_j remains. Okay. Yes, that's why a_j comes here and everything else is zeros. So the partial of L with respect to W_ij is this long row vector times this special column vector that has only a_j in the ith element, and when you take the dot product, because a_j sits at the ith element, we get the ith element of the row vector times a_j, right? a_j was a scalar in this vector; now it's just a scalar. Now, what does this pattern tell us? dL with respect to W2 is therefore the outer product between this and this: it's going to be del L over del z2, transposed, times a^1 transpose. A student catches the index: this should be with respect to a^1, the first layer's activation, not a^2; thank you, and this superscript should also be 1. So going from here to here is just linear algebra notation. Is this clear? Let me explain this part again. The partial of L with respect to W_ij was this row vector times this special column vector that has only the ith element non-zero, which is why we can write the dot product as the ith element of this vector times a_j. And because the ij-th element of the gradient matrix is the ith element of one vector times the jth element of the other vector, the full matrix is basically the outer product. The main idea here is how we went from this special column vector, having just one non-zero element, to this notation, and when we extend this to the full matrix, it just becomes the outer product. A quick numerical check of exactly that is sketched below.
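A minimal check that stacking the per-cell results dL/dW_ij = delta_i * a_j recovers the outer product; all sizes and values here are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    delta = rng.normal(size=3)    # dL/dz for this layer, as a vector
    a_prev = rng.normal(size=4)   # previous layer's activation

    # Cell by cell, following the special-column-vector argument:
    G = np.zeros((3, 4))
    for i in range(3):
        for j in range(4):
            col = np.zeros(3)
            col[i] = a_prev[j]    # dz/dW_ij: a_j at position i, zeros elsewhere
            G[i, j] = delta @ col # row vector times column vector

    print(np.allclose(G, np.outer(delta, a_prev)))  # True: dL/dW is the outer product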
All right. So once we're able to calculate the derivatives of the loss with respect to all the weight matrices, we can do something exactly similar for the bias terms, the b vectors, and we repeat this for every layer. We get the collection of all the derivatives of the loss with respect to the corresponding matrices, and we take a gradient descent step in the direction of the negative of this. So what we get is: W^l becomes W^l minus alpha times the partial of L with respect to W^l, where the partial of L with respect to W^l was calculated as this outer product at the end. There are a lot of moving parts, but it's basically just multivariate calculus, the chain rule applied in a mechanical fashion. Yes, question? [inaudible] Yeah, so the question is: is there a preferred order in which we perform this over the different layers? The way we generally do it is, first, we calculate the partial of L with respect to W^l for every layer, and in order to reuse computation, you want to start with the last layer first and work your way backwards. Then, once you have the full collection of your partials, the updates can be done in parallel, because once you have this matrix, you're only updating the lth layer's parameters using it. But to calculate the gradients themselves, you want to do it serially, starting from the final layer and working backwards, because you can reuse a lot of the intermediate computations. [inaudible] Exactly. So to calculate del L by del W1_ij, you're going to reuse everything up to z2, get two more Jacobians, and calculate the branch specific to the first layer. Because we're reusing so many of these intermediate values, you want to start at the end and work your way backwards, and that's why it's called backpropagation. [inaudible] Yeah, so what I meant by calculating two more Jacobians is: to calculate the gradient for W2_ij, you have calculated everything up to z2; now, to calculate it for W1_ij, you reuse everything you did up to there and get two more Jacobians. [inaudible] The two Jacobians are z2 to a^1 and a^1 to z1. [inaudible] Exactly, and the branch will be z1 to W1_ij, exactly. And the same thing with b: it also goes up to the previous layer? Yeah, the same with respect to b. In fact, b is a little simpler than what we did for w; w is the harder case, which is why we worked through it. For b, work out the partial of z with respect to b^1, the partial of z with respect to b^2, and it's going to be much simpler than w. So again, for b2_i, everything from the loss up to z2 is the same, and you just have a different branch. [NOISE] Right. Putting all of this together for one example looks like the sketch below.
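Here is a minimal sketch of one full forward-backward-update step for the two-layer classifier sketched earlier; it follows the lecture's conventions (sigmoid output, log loss, so dL/dz at the output is y_hat - y), but the tanh hidden layer and the learning rate are illustrative assumptions, not official course code.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(x, y, W1, b1, w2, b2, alpha=0.1):
        # Forward pass.
        z1 = W1 @ x + b1; a1 = np.tanh(z1)
        z2 = w2 @ a1 + b2; y_hat = sigmoid(z2)
        # Backward pass, last layer first (the GLM part): dL/dz2 = y_hat - y.
        dz2 = y_hat - y
        dw2, db2 = dz2 * a1, dz2                 # outer product degenerates to a vector here
        da1 = dz2 * w2                           # continue down the trunk
        dz1 = da1 * (1.0 - np.tanh(z1) ** 2)     # element-wise: the diagonal Jacobian
        dW1, db1 = np.outer(dz1, x), dz1         # outer product with the layer's input
        # Gradient descent update on every parameter.
        return (W1 - alpha * dW1, b1 - alpha * db1,
                w2 - alpha * dw2, b2 - alpha * db2)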
Any immediate questions with respect to backpropagation? Yes, question? [inaudible] So the question is about the input layer, which is of dimension d. This is your d-dimensional input: if you're trying to do, say, a linear regression, these are the features of your x's. [inaudible] One example; that's why I write x superscript i. This is one example. [inaudible] Yeah, I'm coming to multiple examples next. So all of this we've seen with respect to one example; what do we do when there are multiple examples? Before I jump to that, I want to make sure there are no outstanding questions on backpropagation. It's fine if you haven't understood all the details right now. I would recommend you go back, work through the notes, and work out these parts yourselves. While you're working through the math step by step, the key point to keep in mind is to break the big problem down into smaller steps, where two steps that repeat over and over have very simple Jacobians: the Jacobian of a with respect to z is just the diagonal of g prime, and the Jacobian of z with respect to the previous layer's a is just the W matrix. [inaudible] Going from a into the next layer's z gives the W matrix; going from z to a, because you went through g, gives the diagonal of g prime. [inaudible] Yeah, so one of the alternating Jacobians is always just a W matrix, and the other is always a diagonal matrix, and you repeat this depending on how many layers you have. And once you reach the final layer, that part is just like logistic regression, or any GLM in general. So: GLM, diagonal, W, diagonal, W, diagonal, W. And then you have something specific to the branch, and the part specific to the branch is this special column vector that has only a_j at the ith position, for W_ij. Right? You take the row vector and the column vector, evaluate the product, and you get the derivative of L with respect to W_ij. Then you observe that this is just the ith element of one vector times the jth element of another vector, so the derivative with respect to the matrix is the outer product between the two. There are a lot of moving parts, but it's pretty much straightforward. Should there be a transpose there? Over here? [inaudible] Yeah, this is a matrix, yes. [inaudible] So dL by dz2 in our case was a row vector, so yes, you can add a transpose, but the idea here is the outer product between two vectors. Whether they are row or column, calculate the outer product: the ith element of one times the jth element of the other becomes the ij-th element of the matrix. Okay. So this was with respect to one example. The way we generally train these neural networks in practice is with gradient descent, but we don't always do stochastic gradient descent; what we do is something called mini-batch gradient descent. Mini-batch gradient descent is similar to stochastic gradient descent, except that instead of sampling one example at random, we sample a batch of examples at random. Right.
So, in practice, to train, we define J of the Ws and bs, where this means the set of all Ws across all layers and the set of all bs across all layers, and we define it as the sum from i equals 1 to capital B of the loss of (y^i, y-hat^i), where B is the size of the mini-batch. Right. So far we've considered x to be a vector; if we're working with a mini-batch, let's see what happens in the first layer. In the first layer, we had z^1 = W^1 x^i + b^1. W^1 is m by d, assuming the input has dimension d and the output of the first layer has dimension m; b^1 is therefore m by 1, and z^1 is also a vector of size m. So W times x gives us a vector of size m, we add a vector of size m, and the output is a vector of size m. That was all fine with respect to one example. Now, what happens when in place of x^i we take a mini-batch of x's? What happens if we compute W^1, which is still m by d, times a mini-batch of x^1, x^2, ..., x^B, which is now d by capital B, plus b^1, which is still m by 1? The matrix multiplication gives us something that is m by B, but now we're trying to add something of dimension m by 1 to an entity of dimension m by B, and strictly speaking that doesn't work: you can't add a vector and a matrix, or two matrices whose dimensions don't match exactly. Most of the software you'd use for training machine learning models, especially neural networks, like TensorFlow or NumPy in Python, deals with this scenario of adding an m by 1 to an m by B with something called broadcasting. When you're implementing your neural network in homework two, you might wonder how you're going to add an m-dimensional vector to this m by B matrix. As long as your dimensions match up in this way, with m on the left and B and 1 on the right, most numerical software, including NumPy, will perform broadcasting, which means it broadcasts the m by 1 vector into an m by B matrix by making multiple virtual copies of it as columns. Next question? Is it equivalent to just multiplying b by a row vector of 1s? So, m by 1 times 1 by B, all filled with 1s? Yes, exactly: mathematically it is equivalent to multiplying by a row vector of 1s and making multiple explicit copies of your b, but actually performing that is expensive; you would use a lot of memory and computation. Broadcasting is a feature these libraries implement to do it virtually for you. You'll probably come across broadcasting errors while doing the homework when dimensions don't line up, so it's something you need to be aware of. A minimal sketch of this behavior is below.
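A minimal NumPy sketch of the broadcasting just described; the sizes m, B, and d are arbitrary illustrative choices.

    import numpy as np

    m, B, d = 3, 5, 4
    rng = np.random.default_rng(4)
    W1 = rng.normal(size=(m, d))
    X  = rng.normal(size=(d, B))      # mini-batch: one example per column
    b1 = rng.normal(size=(m, 1))

    Z1 = W1 @ X + b1                  # (m, B) + (m, 1): b1 is broadcast across columns
    Z1_explicit = W1 @ X + b1 @ np.ones((1, B))   # the memory-hungry explicit equivalent
    print(np.allclose(Z1, Z1_explicit))           # True

Broadcasting gives the same result without ever materializing the B copies of b1, which is why the libraries prefer it.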
Next question? In the other methods we used stochastic gradient descent, and we said we could do that because the problems were convex; how do we justify the mini-batch or stochastic approach here? Yeah, a good question. So in the other algorithms, we used stochastic gradient descent, and because the problems were convex, we knew it would go to the global minimum or near it. Neural networks are not convex, so whether stochastic gradient descent makes sense for neural networks is a very valid question, and the answer is: it is valid. If you look at the theory of stochastic gradient methods, there are certain conditions; this is for your reference, and if you want to go deeper into this, look up stochastic approximation. One of the very first papers that introduced this idea was by a pair of authors, Robbins and Monro. When they introduced this whole idea of stochastic optimization, or stochastic gradient descent, it was in the context of stochastic approximation, and what the theory says is this. In all our methods we used a constant learning rate alpha, but in that paper they give a proof that convergence holds as long as your learning rate decreases over time at an appropriate rate. Supposing alpha_t is the learning rate used at time step t: as long as you decay your learning rate over time such that the sum over t of alpha_t is infinity, but the sum over t of alpha_t squared is less than infinity, which means the sum of the learning rates diverges while the sum of their squares converges, then stochastic gradient descent will converge to some local optimum. In the case of neural networks, that could be any local optimum; but gradient descent will also take you to some local optimum in a neural network, and we don't know which one anyway, so why the extra concern when you start using stochastic gradient descent? In terms of when exactly stochastic gradient descent will converge to a local minimum: as long as your learning rate follows these conditions, stochastic gradient descent will converge to a local minimum even if the problem is non-convex. In practice, we don't follow learning rates that meet this condition exactly, but it is actually common to anneal your learning rate in some way; decreasing the learning rate is a commonly done practice in neural networks, though not necessarily in that precise fashion. As an illustration, the classic schedule sketched below satisfies the conditions.
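As a reference illustration (not from the lecture), the schedule alpha_t = c / t satisfies the Robbins-Monro conditions: the harmonic series diverges while the sum of its squares converges. A minimal sketch, with c an arbitrary constant:

    import numpy as np

    c = 0.5
    t = np.arange(1, 1_000_001)
    alpha = c / t
    print(alpha.sum())          # partial sums grow without bound as the horizon increases
    print((alpha ** 2).sum())   # approaches c**2 * pi**2 / 6, a finite value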
So SGD will take you to a local optimum, but you have no guarantees, just the way you have no guarantees with gradient descent itself. [inaudible] Why don't we do stochastic instead of mini-batch? So the question is: these models are very expensive to compute, all this backpropagation, so why do we do mini-batch instead of just stochastic? The reason you want mini-batches is that most of this software is implemented on GPUs, where you get parallelism essentially for free; if you're not doing at least some amount of parallelism, you're leaving money on the table, so you want to make use of the parallelism somehow. And also, when you move from stochastic gradient descent to mini-batch gradient descent, the noise in your gradient direction comes down significantly: if you're averaging across multiple examples, the noisiness of each step you take is reduced significantly. Next question? So for alpha_t, say you change your learning rate over time: why should restrictions on the values of the learning rate alpha have anything to do with convergence? Doesn't that depend on the nature of our loss function? I'm not going to go into the details of this; I'm happy to go into more detail after the lecture. If you're interested, I would highly recommend reading that paper; it gives you all the details of the necessary conditions, and probably also sufficient conditions, for your training process to converge. [OVERLAPPING] [inaudible] Yeah: the idea is that for any set of learning rates you take, no matter what your loss function is, if they follow these conditions, then you will converge to a local optimum. Exactly. [inaudible] Well, not all the alphas need to be small; some of the initial alphas can be very big, but the sum of the squares of the alphas should converge to some finite value. All right, a few more comments about neural networks before we move on. One view of a neural network is to start with this picture; remember, we first considered the last layer as a linear model and everything else as something else. This view tells you that what a neural network is doing is transforming your x's into some phi of x. You can think of this whole network, everything before the last layer, as some phi of x, where phi is parameterized by the Ws and bs. Think of it as a feature map. We came up with some hard-coded features in problem set one, in the last question, like polynomials and sinusoidal features, et cetera; but essentially, what the neural network is doing is taking your input and transforming it to some other space, and then you apply your GLM, or any linear model we've seen before. So neural networks, in a way, are a way to learn representations: everything that happened from x up to a^(L-1) was learning a way to represent x so that the GLM could consume that representation. That is one view of neural networks: a learnable feature map [NOISE] plus a linear model. Okay.
In the case of linear models, we would only learn the parameters of the linear part; in the case of neural networks, we are jointly learning the parameters of the linear model and the parameters of the feature map. Yes, question? When you say that, do you mean everything is simultaneous? Everything simultaneously, exactly: when we do gradient descent, we're updating the parameters of the last layer and of the initial layers all at the same time. So in a way, you're trying to learn what the right parameters of the linear model need to be and, at the same time, learning good representations that are going to help the linear model. In fact, one of the leading conferences in deep learning is called ICLR, an academic conference that's largely dedicated to deep learning, and the name stands for International Conference on Learning Representations. So that's what deep learning is doing: it is learning representations so that you can apply linear models you're already familiar with. This gives some interesting additional perspectives. What I'm about to say beyond this point about neural networks is not something we're going to ask you in your exam, but it's good to know, especially if you want to get into machine learning research. Associated with every neural network is a kernel: a kernel which, given two examples x_i and x_j, is phi(x_i) transpose phi(x_j), where phi is the map that takes you from x to a^(L-1). So every neural network comes with an implicit kernel, where the kernel is represented by this explicit feature map. Okay. Which means this gives you a way to use neural networks with Gaussian processes: in Gaussian processes, you needed a kernel to tell you how similar or dissimilar two examples were, and there you can use neural networks. There are ways in which you can learn kernels, which means you can learn these neural networks for the GP, and that's a very elegant way of combining Gaussian processes and neural networks: a neural network becomes the kernel of your GP. There are also a few other interesting results with neural networks. There's another interesting link to Gaussian processes: a paper which shows that for a single-hidden-layer neural network, one hidden layer and one output layer, where the inputs are your x's, as the number of neurons in the hidden layer tends to infinity, the neural network becomes a Gaussian process itself, which is another interesting link between Gaussian processes and neural networks. [NOISE] If you're interested in those papers, make a post on the forum and I'm happy to share links to them. A minimal sketch of the feature map and its implicit kernel is below.
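To make the implicit-kernel view concrete, here is a minimal sketch that treats everything before the final GLM as the feature map phi and defines the kernel as an inner product of features; the single-hidden-layer shapes and the tanh nonlinearity are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    d, m = 4, 6
    W1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)

    def phi(x):
        # The learnable feature map: everything in the network before the final GLM.
        return np.tanh(W1 @ x + b1)

    def kernel(x_i, x_j):
        # k(x_i, x_j) = phi(x_i)^T phi(x_j), the network's implicit kernel.
        return phi(x_i) @ phi(x_j)

    print(kernel(rng.normal(size=d), rng.normal(size=d)))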
So the Universal Approximation Theorem says: suppose you have a function y = f(x), where x belongs to R^d and y is in R^k, and you're interested in some bounded region of x. For example, if your x's are images, then you're only interested in the region of x where the pixel values lie within some range — that's what limits the region of interest of the inputs. Given a bounded region of interest, the theorem says there exists a neural network with one hidden layer — the input layer is x in R^d, the output is y in R^k, and there's one fully connected hidden layer — such that for any continuous f, the network approximates f to an arbitrary degree of precision. And that can be surprising at first: for any continuous f, if you define a region of interest in the input space of x, no matter what f you have, there exists a neural network with a single hidden layer, with a finite number of hidden units — that number can be exponentially large, but it's still finite — and that neural network can approximate f to an arbitrary degree of precision. So if you come up with a degree of tolerance epsilon — say 10^-6 — then you can construct a neural network f-hat such that |f-hat(x) − f(x)| is less than epsilon for all values of x in the bounded region. Yes, question? If one model does best for everything, isn't that exactly a violation of no free lunch? I wouldn't call it a violation of no free lunch, because a Gaussian process could also do this — a non-parametric method can approximate anything. The result here is that, in a parametric setting, this can mimic f to an arbitrary degree of precision: you throw in more hidden units, and your degree of precision improves. But — yes, question? [inaudible] No, just consider the output over here to be the nonlinear prediction. So that's already the prediction? Yeah — you can construct a network like this where, if you think of your y as a scalar value, you would just have one unit over here. Why bother with more than one hidden layer and all that? I am coming to that. So the universal approximation theorem tells you that any continuous function f can be approximated to any arbitrary degree of precision with a one-hidden-layer neural network, as long as you're willing to have lots of hidden units. That's a theoretically, mathematically valid result, and it's probably a big reason why there's so much interest in neural networks: they have such good expressive power.
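Stated compactly, the one-hidden-layer version discussed above reads roughly as follows — an informal paraphrase, since the precise hypotheses on the nonlinearity vary across papers:

\textbf{Universal approximation (informal).} Let $f : K \to \mathbb{R}^k$ be continuous on a compact region $K \subset \mathbb{R}^d$, and let $\sigma$ be a suitable nonlinearity (e.g. sigmoid). For every $\varepsilon > 0$ there exist a finite width $m$ and parameters $W \in \mathbb{R}^{m \times d}$, $b \in \mathbb{R}^m$, $V \in \mathbb{R}^{k \times m}$ such that
\[
  \hat{f}(x) = V\,\sigma(Wx + b) \quad \text{satisfies} \quad \|\hat{f}(x) - f(x)\| < \varepsilon \ \ \text{for all } x \in K.
\]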
However, the catch that comes with this is that the theorem does not give you a recipe for how to recover that function — that configuration of the network — and it doesn't tell you that with gradient descent you can reach that configuration. It also doesn't tell you how many samples of (x, y) pairs you need to mimic f. There are lots of catches that come with it. But it's still a valid mathematical result: by adjusting your parameters in some way — it doesn't tell you how — a configuration exists that mimics any continuous function f to an arbitrary degree of precision. Yes, question? Does f need to be continuous, or are there further requirements? I don't remember the details — I don't think it needs to be Lipschitz; it probably needs to be continuous — but for most practical purposes, all the kinds of functions you may want to learn are covered by it. Yes, question? [inaudible] This is just a mathematical result, which tells you that there's a way to adjust the weights and biases of your network to mimic f(x) to an arbitrary degree of precision. You probably want to view it outside the context of your data, and test set, and whatnot. But it's still a valid result, and it basically spurred a lot of interest in neural networks, because, hey, they're universal function approximators — and as long as I have a good enough dataset that's representative of my function, maybe gradient descent will get us there. There's no proof that gradient descent will get you there, but maybe it does. Can we extract [inaudible]? There's no algorithm for how to extract such an f-hat. All the theorem tells you is that an f-hat exists. So for predictions [inaudible]? Again, I would not think of it in the context of prediction versus learning versus inference. All it's telling you is that some configuration exists; it doesn't tell you how to reach that configuration. So neural networks are, in theory — if you're able to afford pretty large hidden layers — a very big hypothesis class: you can approximate any continuous function you can think of on some bounded region of the input. And that forms a segue to our next set of topics, the bias-variance tradeoff. I don't think we have time to finish it today; we'll pick it up again on Friday, but maybe I'll lay out the setting so that we can pick it up from there. Yes, question? Before you move on — why use neural networks to produce nonlinear hypotheses? Good question. So the question is: we use neural networks to come up with nonlinear hypotheses, and a decade or so ago, SVMs with kernels could also produce nonlinear decision boundaries.
And why is the recent trend or shift in interest toward neural networks and away from SVMs? A few likely reasons. One is that SVMs require you to hand-craft a kernel: you need to use your intuition to come up with a good, mathematically valid kernel for it to work well. In neural networks, by contrast, the feature map is learned from data: you don't have to construct a mathematically valid kernel — the data and gradient descent will figure out a reasonable implicit kernel in this case. The other reason is that neural networks are very computationally intensive — extremely so, especially as you add deeper layers — and they're also very sample-inefficient. By sample-inefficient, what I mean is that you need a lot of data for the neural network to work well, whereas other algorithms are a lot more sample-efficient: they work at close to their maximum capacity with far fewer examples. Recently, the trend in computing is that a lot of data is now digitized — pictures, images, and so on — and there has also been a huge jump in computation with GPUs. So the two obstacles that were holding back neural networks — lots of computation and lots of data — are now met. Algorithmically, there haven't been a whole lot of fundamental changes to neural networks, except probably the introduction of the activation function called the ReLU, which we saw the other day: g(z) = max(0, z). ReLU addressed one problem with making networks very deep, something called the vanishing gradient — we probably won't be covering vanishing gradients in detail. The idea there is: if you have an activation function that saturates, like the one drawn here, and the input z to your activation lands in the flat region, then the activation value will be close to 1, but the derivative over there is pretty much 0. So the Jacobian has zeros everywhere else, and these diagonal elements also end up being 0. When you take the gradient of the next layer multiplied by a Jacobian of 0, everything behind it gets essentially zero gradients. This problem was largely solved with ReLUs. And if you look at it, it's kind of hacky — but in practice it works pretty well. So the trend of increasing computational power and increasing data, with a few algorithmic improvements like this, is probably why there's renewed interest in deep learning. So there is a kernel associated with every single neural network computation, and SVMs basically depend on kernels. So if you can just design a neural network, will anyone ever need to go back to SVMs now, or will people still use [inaudible]?
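Here's a quick numerical illustration of the vanishing-gradient contrast (my own sketch, not from the lecture): the sigmoid's derivative collapses toward 0 for large |z|, while the ReLU's derivative is exactly 1 whenever z > 0.

import numpy as np

def d_sigmoid(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)          # near 0 wherever the sigmoid saturates

def d_relu(z):
    return (z > 0).astype(float)  # 1 for z > 0, so gradients pass through unchanged

z = np.array([-10.0, -2.0, 0.5, 10.0])
print(d_sigmoid(z))  # ~[0.00005, 0.105, 0.235, 0.00005]
print(d_relu(z))     # [0., 0., 1., 1.]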
So, yeah, the question is: we showed that every neural network implicitly defines a kernel associated with its feature map, and SVMs use kernels — so is there still a reason to use SVMs? The answer is that actually very few people use SVMs now, except that there are still a few cases where you don't have a lot of data — that's one regime where neural networks don't work well — and there, simpler methods are still commonly used. Another interesting thing: you can train a neural network in this way and then just use it as a feature extractor, to map your examples to some higher-level feature space, and then feed that into other algorithms. That's something people do as well: train a network on your full training set, learn this feature map, then on the same examples chop off the network at this stage, extract the features the network learned, and train a random forest or some other algorithm on that representation of your inputs — people do that sometimes as well. Is there another question? Yes. You were talking about the ReLU — doesn't it have a bigger vanishing-gradient problem, because everything to the left of zero is flat? At least the sigmoid has some sensitivity on both sides; this one has all vanishing gradients below zero, right? Yes — so the question is: in ReLU, wouldn't you have vanishing-gradient problems here? The answer is yes: over there the gradients do vanish. However, the input that goes into the activation is the output of some linear transformation, which means it will be different for different values of x. So yes, over here the gradient is 0, and that is a problem with ReLU. People have come up with — again — hacky fixes for it, called the leaky ReLU: it looks the same above zero, but below zero, instead of being exactly flat, it has a small positive slope. So there are fixes, or ways around it, but in practice plain ReLU seems to work. Why would we use something like that when there are infinitely many nonlinearities? That seems so crazy. Well, it's easy to compute, so it's inexpensive to compute. All right. And going back to this question about SVMs: is there a universal approximation theorem for SVMs also? I'm not familiar with a universal approximation theorem for SVMs. So technically — theoretically — any possible mapping you can have from x to y, a neural network can learn it? As long as it's continuous and you're interested in some bounded region of the input space. Okay. And you said the theorem doesn't guarantee that you can actually learn that mapping — that feature map — but you want to learn those w's and b's which give the correct mapping.
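A sketch of the feature-extractor idea just described, using scikit-learn — the model sizes and toy data here are arbitrary assumptions for illustration: train a small network, recompute its first-layer ReLU activations as the learned feature map, then fit a random forest on those features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(int)    # hypothetical labels

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

# "Chop off" the network after the hidden layer: phi(x) = ReLU(W1 x + b1).
phi = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

forest = RandomForestClassifier(n_estimators=100).fit(phi, y)
print(forest.score(phi, y))    # the forest consumes the learned representation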
Can you elaborate more on what that means — why can't you do that? All I said was that the approximation theorem doesn't offer you a recipe for how to recover that network. It's an existence theorem: it says that such a network exists — that's all the theorem says. And why would it not be — oh, so you're saying the only problem with actually reaching that ideal optimal configuration is that something like minibatch gradient descent might get stuck in local minima? There are many challenges in even getting there. How many samples of (x, y) pairs do you need? Right there you have an obstacle if you want to train it with gradient descent. Okay? Anyway, the main idea that you want to take away from this is that neural networks are extremely expressive. The set of all hypotheses that a neural network can represent can be extremely large: linear regression was limited to just straight lines, but neural networks are, in a way, much, much bigger. And that leads us to the next topic, the bias-variance tradeoff: is this always a good idea? Is more expressivity always a desirable thing? We'll jump into bias-variance tradeoffs on Friday. And to wrap up today's lecture, I would stress the idea of backpropagation. Think about it in terms of this daisy chain of Jacobians, where each of the Jacobians is pretty easy to compute — it's either a diagonal matrix or just the W itself. The end of a neural network we've seen before: it's just a linear model. We've seen GLMs, and the gradient over there follows just like a linear model. Then you have a daisy chain of Jacobians, and the branches to each of the W's and b's are, again, very simple math. The notation may be a little confusing, because you have these vectors full of zeros with only some elements non-zero, and you view them as outer products, et cetera — but that's, again, straightforward linear algebra. Work through it once from the notes, and hopefully it should be clear. If not, post on Piazza. All right, we'll stop for today. Thanks.
So today we're going to start Lecture 19 of CS229. The topics for today: we're going to talk about the maximum entropy principle and how the exponential family distributions that we saw earlier in the course can be derived from the maximum entropy principle. We will show under what conditions maximizing entropy is the same as maximizing the likelihood (MLE), and this will lead us into some interesting topics such as calibration — we'll briefly talk about what calibration means and how it relates to maximizing entropy; these are very interesting topics. And then, depending on how much we cover and how much time we have left, we will talk about a few variants of the expectation-maximization algorithm and how these variants lead into some of the more recent advances in unsupervised learning, such as variational inference and the variational autoencoder. I'm not sure whether we'll be able to finish variational autoencoders today, or even start it, but if we don't finish it today, we'll pick it up in the next class. All right, before we dive into today's topics, a quick recap of what we covered in the previous class: PCA and ICA. PCA is principal components analysis. In PCA, we are given data x^(i), which lives in a d-dimensional space, and we want to find, for every x^(i), its projection z^(i) onto a low-dimensional subspace of dimension k. The way we go about doing that is to map each x^(i) to U-transpose x^(i), where U is a d-by-k matrix whose columns are the top k eigenvectors of X-transpose X. By doing this operation, we effectively get the projections onto the subspace. Then we talked about independent components analysis. In PCA, our goal was to find a low-dimensional subspace, whereas in ICA, the goals are very different: our goal is to find independent sources that explain our data. The assumption we made is that there are d different independent sources s_j, where j indexes the source number. Each s_j is sampled from some probability distribution p_s, which has to be non-Gaussian — you can use the Laplace distribution or the logistic distribution, any distribution with mean 0 that's non-Gaussian. We saw some intuition for what happens if we assume Gaussian sources: we get the problem of rotational invariance. So as long as the sources are non-Gaussian, we have some square matrix A, which we call the mixing matrix, which mixes the s's into observations: the s's are unobserved, and the x's are observed. Our goal is to start with x and recover s, the independent sources. We assume A is a square invertible matrix, and we want to invert A to get W — the matrix we want to estimate — so that given observations, we can multiply through W and recover the original independent sources. Right?
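A minimal NumPy sketch of the PCA projection just recapped — I've added data centering, which the recap glosses over, and the toy data is my own:

import numpy as np

def pca_project(X, k):
    Xc = X - X.mean(axis=0)                        # center the data
    evals, evecs = np.linalg.eigh(Xc.T @ Xc)       # eigendecomposition of X^T X
    U = evecs[:, np.argsort(evals)[::-1][:k]]      # top-k eigenvectors as columns
    return Xc @ U                                  # rows are the projections z^(i)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # correlated toy data
print(pca_project(X, 2).shape)                     # (200, 2)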
And the way we go about doing that is with maximum likelihood, where we directly estimate W. We saw that the probability of x can be represented by mapping x back to s using W, together with a corresponding Jacobian term — the log determinant. There is no closed-form solution, so by performing this maximization using gradient ascent, we can recover, upon convergence, the unmixing matrix W-hat. All right? You also saw this in the homework; hopefully you've finished that question. If not, no problem — we're just restating the results here. The homework question has a small introduction about KL divergence and entropy and so on. You don't need to know that for the purposes of this lecture, but if you've already seen it, it's helpful. So: we define the KL divergence between two probability distributions. This is something important. So far, we've seen things like the distance between two points in a vector space; here, the two things between which we are measuring are probability distributions — entire distributions. You can think of P being some distribution like this, and Q being another distribution like this, and we're asking what the difference is between these two distributions — not single points, but entire distributions. The KL divergence from P to Q is defined as the expectation with respect to P of log(P/Q). You should note that this is not symmetric: the KL divergence from P to Q is not the same as from Q to P. If they were the same, we could have called it a distance — a "KL distance" — but it is not symmetric. That is the definition of KL divergence. It is closely related to concepts such as entropy. We define the entropy of a probability distribution P as the expectation under P of log(1/P). The entropy measures how spread out the distribution is: a distribution that's uniform over some finite support is considered to have high entropy, whereas another distribution over the same support that puts all its mass on one point is considered to have low entropy. You can think of entropy as a synonym for uncertainty. High entropy means high uncertainty: you believe the next observation could come from any of many points — you're not sure. Low entropy means low uncertainty: if the distribution tells us that the point we're going to observe next, were we to sample from it, will be one particular point alone, there's almost zero uncertainty — that's low entropy. There's also a nice interpretation of cross-entropy in the homework description, where we replace the P inside the log by a different distribution Q, and the cross-entropy takes the form: expectation under P of log(1/Q).
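For reference, the three quantities just defined, written out for a discrete distribution (the same definitions as in the homework):

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \mathbb{E}_{y \sim P}\!\left[\log \tfrac{P(y)}{Q(y)}\right]
  = \sum_y P(y) \log \tfrac{P(y)}{Q(y)}, \qquad
H(P) = \mathbb{E}_{y \sim P}\!\left[\log \tfrac{1}{P(y)}\right], \qquad
H(P, Q) = \mathbb{E}_{y \sim P}\!\left[\log \tfrac{1}{Q(y)}\right].
\]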
And the difference between the cross-entropy and the entropy is exactly the KL divergence: the KL divergence between two distributions P and Q is just the cross-entropy minus the entropy. There are some nice interpretations of this in terms of coding theory in the homework, but that's not really necessary for today's lecture, as long as you remember these formulas. Also, earlier in the course we came across the exponential family. The exponential family is a way to characterize a family of probability distributions, where the probability distribution over y with natural parameter eta takes this form. Here, b(y) is called the base measure, and there is an exponent of a term with two parts: eta, the natural parameter, dot-product with T(y) — T is called the sufficient statistic of y — minus a(eta), where a is called the log partition function. We also saw, in homework 1, that the mean of this probability distribution can be represented as the derivative of the log partition function at eta. If we change eta to different values, we get different means, and the way you go from the parameter value eta to the mean of the corresponding distribution is to calculate a-prime — the first derivative of the log partition function — at the parameter value eta. And it so happens that this function a-prime will always be invertible, so we can go from the mean of a distribution back to the natural parameter eta this way. So: the first thing we're going to do today is derive the maximum likelihood estimate of eta given some observations y, and then we'll jump over to maximum entropy and start seeing the equivalence. Any questions on this so far? Yes, question. [inaudible] So the question is: in ICA, is the log determinant acting like a regularizer, or are we going to add regularization on top? That's a very interesting question. In ICA, it so happens that most of the time we're not really interested in generalization error. ICA is, in a way, different from the rest of the algorithms we've seen: elsewhere you're interested in generalization error, whereas in ICA we take some examples, fit a model, and just recover the sources of the given data itself. So it's not common to use regularization in ICA, and this log determinant is actually not a regularizer — it's a necessary component to make the distribution a valid probability distribution. Good question. Any other questions before we jump into the MLE of exponential families? Great. Okay: MLE of the exponential family. When we studied exponential families earlier in the course, and when you studied GLMs, we made the simplifying assumption that y was always a scalar. In the case of regression, y was just real-valued; in the case of logistic regression, y was binary, 0 or 1. We saw some generalizations with softmax, where y could take multiple values.
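In symbols, the exponential-family form and the mean identity just described are

\[
p(y; \eta) = b(y)\,\exp\!\big(\eta^{\top} T(y) - a(\eta)\big), \qquad
\mathbb{E}[T(y); \eta] = a'(\eta),
\]

and since $a'$ is invertible, $\mu = a'(\eta) \iff \eta = (a')^{-1}(\mu)$.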
But now we're going to take an even more general form, where y could be anything; and T(y) — which, most of the time when we studied GLMs, we just took to be the identity, T(y) = y — we will again relax and make even more general, so T(y) could be anything. So suppose we have an exponential family: p(y; eta) = b(y) times exp(eta · T(y) − a(eta)) — I'm just rewriting the same thing over there. This is the dot product: eta can be a vector, and T(y) can also be a vector of the same dimension, and this is just the inner product (if these two are scalars, then it's a simple product), minus a(eta). And we are given some data, S = {y^(1), ..., y^(n)}, and we want to find eta-hat_MLE = argmax over eta — here we assume the data are i.i.d. — of log of the product over i = 1 to n of p(y^(i); eta). It's the standard maximum likelihood procedure that we follow throughout the course; there's nothing different here. Okay? Now, let's simplify this further. This is argmax with respect to eta; the log of the product is the sum of the logs, so we take the sum over i = 1 to n, and the log goes in. The log of this thing over here gives us log b(y^(i)), plus — the log and the exponent cancel — eta · T(y^(i)) − a(eta). In order to do maximum likelihood, we take the derivative of this with respect to eta, set it equal to 0, and solve for eta — the same process. So we take the gradient with respect to eta: this will be the sum over i = 1 to n of — the derivative of log b(y^(i)) with respect to eta is? 0. So that's just 0, plus the derivative of eta · T(y^(i)) with respect to eta, which is just T(y^(i)), minus the derivative of a(eta) with respect to eta, which we call a′(eta). So we have a sum over i from 1 to n of this term, and we set it equal to 0. This gives us n times a′(eta) — this is n, the number of examples, and this is eta; they look kind of similar — equal to the sum over i = 1 to n of T(y^(i)). So a′(eta) = (1/n) Σ_{i=1}^{n} T(y^(i)), and this tells us eta-hat_MLE = (a′)^{-1}((1/n) Σ_{i=1}^{n} T(y^(i))). So what just happened here? The maximum likelihood estimate of the natural parameter of any probability distribution in the exponential family can be calculated as the sample mean of the observed sufficient statistics, taken through the inverse of a′. If you remember, in the exponential family notes we called g = a′ the canonical response function, and g-inverse = (a′)^{-1} the canonical link function. So: take the sample mean of the observed sufficient statistics, run it through the canonical link function, and that gives you the maximum likelihood estimate of eta — in the most generic form.
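As a concrete instance (my own example, not one worked in the lecture): for the Bernoulli, T(y) = y and a(eta) = log(1 + e^eta), so a′ is the sigmoid and (a′)^{-1} is the logit; the MLE of eta is then just the logit of the sample mean.

import numpy as np

def logit(mu):                                 # (a')^{-1}, the canonical link for Bernoulli
    return np.log(mu / (1.0 - mu))

y = np.array([1, 0, 1, 1, 0, 1, 1, 0])         # T(y) = y here
eta_mle = logit(y.mean())
print(eta_mle)                                 # log(0.625 / 0.375) ≈ 0.511

# Sanity check: pushing eta back through a' recovers the sample mean.
print(1.0 / (1.0 + np.exp(-eta_mle)))          # ≈ 0.625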
When we take specific members of the exponential family — the Poisson, the Bernoulli, or what have you — a′ will take some particular form and T(y) will take some particular form, but in the most generic setting, this is the maximum likelihood estimate of the natural parameter. Okay, yes, question. [inaudible] Good question. So in the last lecture we were doing ICA and PCA, which was unsupervised learning — so what's the goal here? That's a very good question. Today what we're doing is trying to understand this pretty generic concept called the maximum entropy principle, and how it relates to concepts such as calibration and KL divergence; we will then use KL divergence as a concept to understand variational inference and variational autoencoders. So I would call this neither supervised nor unsupervised, because these are just maximum likelihood estimates for general probability distributions — exponential family distributions. Good question. All right, so that was the maximum likelihood estimate of the natural parameter; let's save it, because we will refer to it shortly, in a few minutes. Now, let's switch gears and talk about maximum entropy. The maximum entropy principle tells us how to estimate a probability distribution from observed data. Suppose we have a probability distribution over the real values — this axis is x, and this is p(x) — and suppose we make some observations of x: we observe this, this, and this, as samples from the distribution. Our goal is now to estimate the density — what p(x) is. Given just three points, it's very hard to tell what the density is. Ideally, what we would like is an infinite number of observations — preferably an uncountably infinite number — so that we could precisely calculate the fraction at each point and construct some kind of density. But we're only given a finite number of points — it could be large or small, but still finite — and we need to estimate the value of p(x) at every point x. The maximum entropy principle tells us that the way to go about it is to first translate these data points into constraints. For example, the mean of this set of points could be mu-hat — the sample mean. Similarly, the variance could be (1/n) Σ_i (x_i − mu-hat)² — the sample variance. The maximum entropy principle tells us that our estimate of p(x) should be a probability distribution subject to these constraints — and these are hard constraints — and, within the family of probability distributions that satisfy these constraints, we choose the one that has the highest entropy. Right?
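In the notation introduced below, the maximum entropy estimation problem will take the form

\[
p^{\star} = \arg\max_{p} \; H(p)
\quad \text{s.t.} \quad
\sum_{i=1}^{N} T_j(y_i)\, p_i = c_j \;\; (j = 1, \dots, m), \qquad
p_i \ge 0, \;\; \sum_{i=1}^{N} p_i = 1,
\]

where the $c_j$ are typically sample moments computed from the data.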
The constraints define a set of probability distributions that are called feasible: any probability distribution that satisfies them is a feasible candidate for our estimate. The set of feasible distributions will almost certainly contain more than one element, because there are infinitely many probability distributions that satisfy, in this example, a particular mean and standard deviation. Among the set of all feasible distributions, choose the one that has the highest entropy — that's what the maximum entropy principle tells us. And the way we go about formulating it is this. Let P be the estimated probability distribution over Y — to clarify notation, we'll be using Y instead of X from here on. P-star = argmax over P of H(P), which is the entropy we defined earlier, such that we have some constraints. We will limit ourselves to linear constraints. By linear constraints, what I mean is constraints of the form: sum over i of T_j(y_i) times p_i equals some c_j. What does that mean? We assume that the constraints for our estimation problem are given in the form of functions T, indexed by j = 1 through m, where each T_j maps Y to R. It means that for every T_j there is a corresponding c_j — the constraint value — and we need to satisfy sum over i of T_j(y_i) p_i = c_j. Now, if one particular T is, for example, just T(y) = y, then this gives us a constraint on the mean, because it boils down to: sum over i of y_i times p(y_i) equals c, and the left-hand side is exactly the expectation of y. So for different choices of T — or rather, different choices of sufficient statistics — we can express constraints in this form. And we're also going to assume that the support of Y is finite. Finite support is a simplifying assumption — we don't really need it to show the results — but we assume it so that we can represent things as inner products and sums rather than integrals. And we let capital N be the size of the set script-Y. Yes, was there a question? [inaudible] How can there be any other kinds of constraints? So, generally, the constraints given are of the form where some kind of moment of the observed data is equal to some value.
That is, the moments of the distribution should match the moments of the observed samples; generally those are the kinds of constraints we're interested in. Why don't we directly use these constraints with the exponential family — take the form of the exponential family and use the constraints to constrain the parameters b and eta? Well, the exercise we're trying to do right now is to start from the maximum entropy principle and make no assumptions about the form of P. So far, P could be any kind of probability distribution; we're just assuming something about its support. What we're going to show is that the maximum entropy principle will recover the exponential family — that's our goal here. And intentionally, we're using the same symbols — the T's and y's — that are going to end up being the corresponding terms in the exponential family. So the T's over here are trying to satisfy some constraint; eventually, by the end of this exercise, we will see that the T's will be the sufficient statistics. But for now, just assume the T's are some kind of functions whose linear combination with P must satisfy a constraint value. Right? So N is the cardinality of the set script-Y, and we have m constraints, which means we have m such T functions, and now our task is to perform this optimization: maximize the entropy of the probability distribution such that some linear constraints on P are satisfied. Let me write this more clearly: sum over i of T_j(y_i) times p_i equals c_j, for j = 1 through m. [NOISE] So there are m such T functions, and all of them must satisfy this kind of constraint, where the expectation of T_j equals some constraint value c_j. Most of the time these constraints will arise from the observed data, where c_j = (1/n) Σ_{i=1}^{n} T_j(y^(i)), with small n the number of examples. Yes, was there a question? [inaudible] There's an i after the y bracket — there's an i over the y. Oh — yes, you're right, sorry, thank you. So, to clarify the notation: there are m T functions, each indexed by j, and T_j is the j-th T function; i is the index we use to scan the support of Y. There are capital N elements in script-Y, and i goes from 1 to N. Every possible value y_i gets a probability p_i — that is, p_i = p(y_i). We're going to switch between the functional notation and the vector notation — that's very common — and they mean the same thing. Okay? So we're given a set of linear constraints, and satisfying those constraints, we want to maximize the entropy. Yes, question. [inaudible] [OVERLAPPING] And also, for the support of Y — is it basically each individual point over here? So the question is: what is the support of Y?
Here we are assuming Y has finite support, which means Y can take on a finite number of values. Y itself can live in a high-dimensional space, but we assume it can take on only a finite number of values, and p(y_i) tells us the probability our distribution assigns to the point y_i. The observed values — y superscript (i), in parentheses — are the actual observations: y^(i) tells us what the i-th observation was. If you sample many of them, each of these y^(i)'s will be one of the finite number of elements in script-Y. Okay. [NOISE] You're saying that we're only considering discrete distributions? Exactly — this is just a way of saying that we're only interested in discrete distributions. Next question. For continuous distributions, would we require compact support? For continuous, the support could be the entire real-valued line — yes. All right. Any questions on this? Let me repeat it — there were a few typos, so just to make everything clear. We assume Y has finite support: N is the cardinality of script-Y, and our probability distributions are assigned over this space. Now, we're given some observations — small n of them, the y^(i)'s — and out of these observations we can construct constraints. The constraints can be, for example, that the distribution has some particular mean — the mean of the observed data. It could be that the sum of the squares of the observations is some value — that could be another constraint. It could be that the sum of the cubes of the observed values is some value — another constraint. So T(y) can be things like T(y) = y or T(y) = y²: different possible functions of y, and out of those functions we construct these constraints. A constraint basically tells us that the mean of a particular T function under the distribution P should be some particular value, and so on. Now, subject to these constraints, we want to solve this optimization problem, and the way we go about solving it is to construct what is called the Lagrangian. [BACKGROUND] If the T functions are still a little confusing, note that T generally takes on two common forms: the most common choices are T(y) = y and T(y) = y² — the kinds of constraints where we want to say the first moment is some particular value and the second moment is some particular value. But here we're deriving everything in the most general form, where T could be anything. So, using these constraints and the entropy objective, we construct the Lagrangian: L(p, eta, lambda) — here p is the variable for the probability distribution itself, and eta and lambda are Lagrange multipliers — equal to H(p), plus the inner product of eta with (Tp − c), plus lambda times (the inner product of 1 with p, minus 1). What is this?
Here, p is our probability distribution: you can think of p as the vector (p_1, p_2, ..., p_N), where N is the cardinality of the set script-Y. And T is a matrix: the row corresponding to T_1 is (T_1(y_1), T_1(y_2), ..., T_1(y_N)), and similarly down to the m-th row, (T_m(y_1), T_m(y_2), ..., T_m(y_N)). So T is small m — the number of constraints — by N, the cardinality of script-Y. And our constraints say that the product of this matrix with this vector should equal the vector (c_1, ..., c_m): the matrix-vector equation Tp = c captures the full set of constraints we want to operate under. This inner product is like an expectation: each row of T times the column p is the expectation under p of the corresponding T function — E_p[T_1(y)], E_p[T_2(y)], ..., E_p[T_m(y)] — and this vector should equal (c_1, ..., c_m). So these are two equivalent ways of writing it. Yes, question. So the T's and the c's are the constraints that we choose to impose? That's correct. And the c values are generally evaluated from data: given some data, we see what the empirical values of the c's are and make those the constraints. What is p? p is the probability distribution itself that we're trying to estimate. p must satisfy p_i ≥ 0 and Σ_{i=1}^{N} p_i = 1. So p is the probability vector we're trying to estimate, and we want to estimate it such that it maximizes the entropy and satisfies these constraints. So is p_2 the fraction of times we observe y_2 among all your y's, in the long-running average? Yes — the fraction of times we would observe y_2 across all y's; think of p_2 as just p(y_2). And is it possible to put intervals on the c's instead of hard-wiring them? [BACKGROUND] For now, we're going to talk only about point values of the c's. Right — so Tp − c is the set of constraints that should equal 0. In the Lagrangian, this term is the entropy we want to maximize; this term is for the constraints, which generally come from data; and this term is to make p a valid probability distribution. So we construct this Lagrangian and then pretty much just solve the optimization problem from it. All right, so to solve it, we start by taking the derivative of the Lagrangian with respect to the p's: the partial with respect to p_i of L equals the partial with respect to p_i of [H(p) + eta-transpose (Tp − c) + lambda (1-transpose p − 1)]. Now we differentiate with respect to p_i. H(p) is minus the sum over i of p_i log p_i, so the relevant piece is −p_i log p_i — we can ignore the non-i terms, because their derivative with respect to p_i is zero — plus the p_i pieces of the (Tp − c) term and the (1-transpose p − 1) term. All right?
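Putting the pieces together, the Lagrangian just constructed is

\[
\mathcal{L}(p, \eta, \lambda)
  = H(p) + \eta^{\top}(Tp - c) + \lambda\,(\mathbf{1}^{\top} p - 1),
\qquad H(p) = -\sum_{i=1}^{N} p_i \log p_i,
\]

with $p \in \mathbb{R}^{N}$, $T \in \mathbb{R}^{m \times N}$, $\eta \in \mathbb{R}^{m}$, and $\lambda \in \mathbb{R}$.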
And when we take the derivative, we apply the product rule to −p_i log p_i: first we differentiate the first factor times the second, then the second factor times the first. That gives us −log p_i minus 1 — the derivative of log p_i is 1/p_i, and (1/p_i) times p_i is just 1 — plus, from the constraint terms, the inner product eta · T(y_i), plus lambda. What's happening is that in the expression eta-transpose (Tp − c), p_i appears only in the i-th column, so differentiating leaves just eta · T(y_i); the c term drops out because it doesn't involve p_i. And from this — we take the derivative and set it equal to 0 — we get log p_i = lambda − 1 + eta · T(y_i), and hence p_i = e^{lambda − 1} times exp(eta · T(y_i)). Right away we see that p_i takes some kind of exponential form. The reason we get this exponential form is the log term in the entropy: when you set log p equal to some value, p will be the exponent of that value. That's why, when you're trying to maximize entropy, the presence of the log term naturally produces an exponent when you solve for p_i. Now we have this and we want to solve for lambda, and that's pretty easy: we have p_i in this form, and Σ_{i=1}^{N} p_i = 1, which means the sum over this entire thing — e^{lambda − 1} times exp(eta · T(y_i)) — should equal 1, [NOISE] and so e^{lambda − 1} = 1 / Σ_{y in script-Y} exp(eta · T(y)). So we got p in this form, and we can eliminate lambda by making use of the fact that a probability distribution should sum to 1, which gives e^{lambda − 1} that value. So we get the final form: let's call the denominator Z(eta). Then p(y) = exp(eta · T(y)) / Z(eta), where Z(eta) is just this normalizing constant — and so p(y; eta) = exp(eta · T(y) − a(eta)), where a = log Z. Z is called the partition function and a is called the log partition function; by taking the log, we can bring it up into the exponent. So here we see that just by following the maximum entropy principle when trying to recover P — where we did not assume any kind of form for P, only that it's a valid probability distribution — and maximizing the entropy of P subject to the constraints we observe from the data, we recover the exponential family: p(y) will always be of the exponential family form, basically. Yes.
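In summary, the stationarity condition just derived is

\[
\frac{\partial \mathcal{L}}{\partial p_i}
  = -\log p_i - 1 + \eta^{\top} T(y_i) + \lambda = 0
\;\Longrightarrow\;
p_i = e^{\lambda - 1}\, e^{\eta^{\top} T(y_i)},
\]

and enforcing $\sum_i p_i = 1$ eliminates $\lambda$, leaving

\[
p(y; \eta) = \frac{e^{\eta^{\top} T(y)}}{Z(\eta)} = e^{\eta^{\top} T(y) - a(\eta)},
\qquad a(\eta) = \log Z(\eta).
\]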
Within the maximization, why did you enforce that the probabilities sum to one — isn't that already captured by the space? So what we did here is basically enforcing this constraint. Do we have to do it explicitly again, or should it already be captured in the math? If we don't have this constraint, then the p's could take arbitrary values — so yes, it has to be enforced explicitly. Also, why do we use entropy as the objective? [inaudible] Yes. So the question is: why are we maximizing entropy? There's a reference in the notes that have been posted which goes into why maximizing entropy is a natural thing to do. This maximum entropy principle also has a connection with statistical physics, where the entropy has the same mathematical form. In fact, there's an old paper by Edwin Jaynes — the same author as the link given here — called "Information Theory and Statistical Mechanics," [NOISE] which draws this nice connection between information theory — Shannon's entropy, the entropy we have here — and the thermodynamic entropy in statistical physics; both follow the same maximum entropy principle. So it's kind of a natural thing to do. Yes, was there another question? Yes. Are you also going to check this against the parameterization? So the parameterization was always there; I just didn't write it here. But where is a(eta)? So p_i is basically equal to p(y_i), which is this times this, and this is equal to 1/Z(eta). So p(y) equals this divided by Z(eta). Z(eta) is the partition function, and in place of Z, if we instead take a = log Z, then a just goes up into the exponent. Next question. Why do we want to maximize the entropy? There are arguments that by maximizing the entropy we are being least committal in terms of our assumptions, which means the only constraints we are enforcing are the ones coming from the observed data. If you maximize any other objective, it can be shown that you are implicitly imposing further assumptions that may or may not be valid. In order to get the most unbiased estimate of p, the right thing to do is maximize the entropy — make everything else as uncertain as possible, subject only to the observed constraints that we have. Is there a link between this and maximum likelihood? Yeah, we're going to come to the link to maximum likelihood shortly. Next question. You said entropy can be thought of as uncertainty — can it directly be thought of as the variance?
So you want to capture as much variance as possible? Yes — so the question is: can you think of uncertainty as the variance? The short answer is no, and there are good reasons why variance is not always equivalent to uncertainty. There's a link in the notes to a chapter — Chapter 11 from the book Probability Theory — which has some pretty convincing reasons why variance is not the right thing to maximize. Okay, two more questions. In L, what are the last two terms? Those two involve the Lagrange multipliers — they are the terms in the Lagrangian that correspond to the constraints. One constraint is based on the data that we have, and the other constraint is that p should sum to 1. Why is that written as an inner product of 1 and p? The inner product of the all-ones vector with p is just the sum of the p_i's. [NOISE] And the second one? The second one is Tp − c: we want Tp = c, so Tp − c should be 0, and this is the Lagrange multiplier corresponding to that constraint. Next question. What is capital N versus small n — is large N the size of all the y's? Yes, large N corresponds to the support of y — the cardinality of script-Y, every possible value of y — and small n is the number of observed data points. Why do you take the inner product of Tp − c with eta? I get Tp − c, but shouldn't it be some separate multiplier times each component of Tp − c? Well, that's exactly what the inner product does — one multiplier per constraint. I won't go into more detail here; what we have is correct, and if you have more questions, post them on Piazza and I can help clarify there. But this is the correct formulation of the Lagrangian. All right. So we get that p(y) will always take this form, where the T(y)'s — which were basically the constraints — end up being the sufficient statistics of the exponential family, and eta — which was the Lagrange multiplier of our constraints — ends up being the natural parameter of the exponential family. And a(eta), the log partition function, is essentially what we get by applying the valid-probability-distribution constraint. So maximizing entropy gives us the exponential family as a natural consequence. We made no assumptions about the functional form of p when we started out maximizing the entropy, but as a consequence of maximizing entropy subject to some linear constraints, we get the exponential family. So — was there another question? Yes. [inaudible] So the question is: for any given dataset, do we get a different family? Good question. The family we end up getting depends on the choice of the sufficient statistics — the T's — that we choose. For a different dataset, we will get different values of the c's, but the choice of the constraints we enforce decides which member family of the exponential family we get. [inaudible] Are you going to get one distribution?
So, for a given dataset, how do you find eta? We're going to come to estimating eta now. The functional form of p will be in the exponential family if we start off by maximizing the entropy of p subject to some constraints. But what about the estimate of eta itself? To calculate eta: first we solved with respect to p_i; next we do it with respect to eta. The gradient with respect to eta of L is the gradient of H(p) + eta . (Tp) - eta . c + lambda(.). Now, H(p) is minus the sum over y of p(y) log p(y), and we saw that p(y) has the exponential-family form, so when we take its log we just get the term inside the braces: log p(y) = eta . T(y) - a(eta). Substituting, H(p) = minus the sum over y of p(y) (eta . T(y) - a(eta)), which is an expectation under p. And what is Tp? We saw that Tp is just the vector of expectations of the individual sufficient statistics, E_p[T(y)]. So L, as a function of eta, is minus (eta . E_p[T(y)] - a(eta)) plus eta . E_p[T(y)] minus eta . c plus lambda(.). The a(eta) term has no y in it, so its expectation is just a(eta); and the minus eta . E_p[T(y)] and plus eta . E_p[T(y)] terms cancel. We are left with the gradient with respect to eta of a(eta) - eta . c plus a constant, which is a'(eta) - c, and setting this equal to 0 gives a'(eta) = c. These constraints c are things we generally estimate from data, so eta-hat = a'-inverse(c), where c is the vector of empirical averages: c = (1/n) sum from i = 1 to n of T(y_i). And this estimate, you will see, is the same as the MLE estimate. So the last step: from here we got that a'(eta) should equal c. Is it clear up to this step? So eta should be a'-inverse of c. Now what is c? The c's are the constraints we started with, and usually these constraints come from data: they are the sample means of the corresponding sufficient statistics. Plug those in and we get eta-hat-maximum-entropy equal to the same expression we get using MLE. So the question is, what is T here? T here, you can think of it as the vector of the constraint functions T_j, exactly. (A worked one-dimensional example follows below.)
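To make the maximum-entropy-equals-MLE result concrete, here is a minimal sketch, my own toy example rather than anything from the board, for a Bernoulli variable: there T(y) = y, a(eta) = log(1 + e^eta), and hence a'(eta) is the sigmoid, so eta-hat = a'-inverse(c) is the logit of the sample mean, exactly the MLE natural parameter:

```python
import numpy as np

# Toy 0/1 outcomes (e.g., rain / no rain).
y = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

# Max-entropy route: the constraint value c is the sample mean of the
# sufficient statistic T(y) = y, and a'(eta) = sigmoid(eta) for the
# Bernoulli, so eta_hat = a'^{-1}(c) = logit(c).
c = y.mean()
eta_maxent = np.log(c / (1 - c))

# MLE route: the Bernoulli MLE is p_hat = sample mean, whose natural
# parameter is the log-odds of p_hat.
p_mle = y.mean()
eta_mle = np.log(p_mle / (1 - p_mle))

print(eta_maxent, eta_mle)  # identical: max entropy == MLE here
```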
So what we basically got is that maximizing the entropy, subject to constraints on sufficient statistics whose values are obtained from the data, gives the same solution as starting with an exponential family whose sufficient statistics are those constraint functions and performing maximum likelihood on it. This equivalence between maximum likelihood and maximum entropy turns out to be one of the more interesting results in this area. Any questions on this? Yes, question. [inaudible] Will a' always have an inverse? Yes; for the families we consider, a is strictly convex, so a' is invertible on the relevant domain. Yes, question? [inaudible] So a'(eta) is the mean of the probability distribution, and basically what we are saying is that we want the mean of the distribution to be the sample mean. The c's were the means of the data, so here we are saying: model mean equals sample mean. [inaudible] Yes, we got this from maximum entropy. We are still working from the Lagrangian of the maximum entropy objective, and by solving for eta we end up with the same estimate for eta as we would if we had started from the exponential family directly and chosen our T's to be the sufficient statistics we're interested in.

Yes, question, about the very last step here, the boxed one. When we say c_j equals (1/n) times the sum over the y_i's, aren't we constraining all the c's to be some kind of average? Yes. In this setting we are interested in linear constraints, linear in p, which means we are only interested in things like moments of the data, moments of the distribution. But weren't the c's weighted by the p's? The c's are not weighted by the p's; the constraint is that, for example, c_1 equals T_1(y_1) p_1 + T_1(y_2) p_2, summing over the support. That's the constraint we want to enforce, but the value of c_1 itself is estimated from data. And there has to be some choice of those values? Yes, this is how we would choose the c's: according to the sample means. Exactly.

All right, so we've seen that maximizing entropy is the same as maximizing likelihood. And in your homework you've also seen that maximizing likelihood, MLE, is the same as minimizing the KL divergence between the empirical distribution of the sample data and the model; this is in your homework, question 2 or 3, the last subpart. So that too equals max entropy. The connection between maximizing entropy and minimizing KL divergence is also called the maximum entropy duality theorem. For those of you who are more interested in the theoretical side and want to learn more, you might want to look up the maximum entropy duality theorem. All right. Now let's talk about a very related, or somewhat related, topic: calibration. Okay.
So calibration is this concept. When we studied logistic regression, we saw that the output of the model was obtained by taking the inner product of x and theta, where theta is the parameter vector, and running it through the logistic function, so that y-hat = 1 / (1 + exp(-theta^T x)). This is always a value between 0 and 1, which superficially looks like a probability. But what would make it a probability? To use it as a classifier, we chose some threshold and called everything above the threshold positive and everything below negative. But why would we want to think of this value as a probability? Let me pose a question to you: what do you think makes something a probability? Yes? "If the sum of all the y-hats is equal to 1?" That I would still call a superficial requirement. Yes? [inaudible] Exactly. The value of y-hat should somehow be representative of how frequently the label actually turns out to be 1, for example in the data that we have. This property, that the estimated probability values match the observed frequency of occurrences, is called calibration.

It is very common, and highly recommended, that if you build a logistic regression model, you also plot a calibration plot. A calibration plot looks like this: predicted probability on one axis, observed frequency on the other, both between 0 and 1. Take your validation set, and assume it's reasonably large, say a few thousand or ten thousand examples. Run every example in the validation set through your model. For each example you get a particular y-hat_i, and there is a corresponding y_i, the ground truth. Y-hat is a value between 0 and 1 and y is either 0 or 1, so your y-hats are points distributed over the interval from 0 to 1. Next, chop this interval into bins; commonly 10 bins, but it could be any number. Collect the examples in each bin according to their y-hat values. For each bin, take all the examples whose y-hat falls in that range, calculate the mean of their true labels, and plot that mean on the y-axis against the bin on the x-axis. Ideally, you get the diagonal: for all the examples that got assigned a probability of, say, 0.99, we want 99% of them to actually have true label 1. (A minimal sketch of this binning procedure follows below.)
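Here is a minimal sketch of the binning procedure just described, assuming you already have arrays `y_true` (0/1 ground truth) and `y_hat` (predicted probabilities) for a validation set; the function and variable names are my own:

```python
import numpy as np

def calibration_points(y_true, y_hat, n_bins=10):
    """Return (mean predicted prob, observed frequency) for each bin."""
    # Bin index of each prediction; clip so y_hat == 1.0 lands in the last bin.
    idx = np.minimum((y_hat * n_bins).astype(int), n_bins - 1)
    pred_means, obs_freqs = [], []
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            pred_means.append(y_hat[in_bin].mean())   # avg predicted prob
            obs_freqs.append(y_true[in_bin].mean())   # fraction of true 1s
    return np.array(pred_means), np.array(obs_freqs)

# For a well-calibrated model, plotting obs_freqs against pred_means
# gives points close to the diagonal y = x.
```

scikit-learn ships an equivalent helper, `sklearn.calibration.calibration_curve`, if you'd rather not roll your own.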
Similarly, for all examples that had a predicted probability close to 0.5, we want the frequency of the true labels to be close to 0.5. This kind of plot, where you plot predicted probabilities against observed frequencies, is called a calibration plot, and if your model's calibration plot is a straight line along the diagonal, you call your model well-calibrated, which means you can actually think of the y-hats coming out of your model as probability estimates. If the line is not straight, then the predicted values can't really be thought of as probability estimates anymore. The model may still be a good classifier: you may still be able to find a separating threshold, assign 0s and 1s, and get high accuracy. But it is not a good probability estimator. So calibration is a statement about the quality of y-hat as an actual probability.

For example, suppose you have a model that predicts the probability that it's going to rain tomorrow, you run it every day, and you have a few thousand days of predictions. Take the set of all days where the predicted next-day rain probability was, say, 80%, then look at the data and see on what fraction of those days it actually rained. If your model is well calibrated, that fraction should be approximately 80%. It shouldn't be 90, it shouldn't be 70; it should be 80 for the model to be well-calibrated.

Yes, question. Two questions: if it is a good classifier, why do we care whether it is a good probability estimator? And secondly, if it is not well calibrated, what can you do to fix it? Good questions. First: whenever you build a machine learning model that predicts something about the future, most of the time you don't really want a binary answer from the model; you actually want some kind of uncertainty estimate, meaning you care how certain the model is about its prediction. A 51% probability of rain and a 97% probability of rain should be treated differently when you make decisions, even though a plain classifier would call both of them y = 1. Second, what do we do if the model is not calibrated? There are techniques to recalibrate it. One is called Platt scaling, and another is isotonic regression. You can think of these as post-processing: you fit your model, take the y-hats it produces, and then fit a second, small model that takes y-hat as input and learns an adjustment factor, so that the adjusted y-hat is better calibrated. (A hedged sketch of both techniques is below.)
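As a sketch of the post-processing idea (my own toy code, not the lecture's): Platt scaling fits a one-dimensional logistic regression on the model's scores, while isotonic regression fits a monotone step function. Using scikit-learn, which provides both pieces:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy miscalibrated scores: true P(y=1) is score**2, so raw scores overshoot.
scores = rng.random(5000)                          # uncalibrated y-hats
y = (rng.random(5000) < scores ** 2).astype(int)   # 0/1 labels

# Platt scaling: a 1-D logistic regression on the score (a common variant
# uses the raw logit instead; either way the learned map is monotone).
platt = LogisticRegression().fit(scores.reshape(-1, 1), y)
scores_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: monotone, non-parametric step-function adjustment.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, y)
scores_iso = iso.predict(scores)

# Both maps preserve the ordering of the scores; only the values move.
```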
Yes, question. [inaudible] So the question is: can you incorporate the slope of the calibration curve into the loss function somehow? It so happens that calibration is a property of both your model and the test set against which you measure it. Suppose you train your model in one environment, say somewhere desert-like, and you want to use it in a very different setting, say a tropical island where the weather is very different, but you still think the model discriminates well between rain and no rain. Then you can take the same model, fit on the original data, and just recalibrate it, using these techniques, against data from the tropical island. The point is that you may not have access to the eventual test distribution, against which you want to be calibrated, at training time.

[inaudible] Yes. These techniques generally do not reorder the points; they perform an adjustment such that the ordering of the predictions stays the same but the model becomes calibrated. Yes, question. [inaudible] Platt scaling is linear (in the logit); isotonic regression is not linear. But both maintain the ordering. Yes, question. [inaudible] Does changing the threshold affect calibration? Good question; in fact the answer is the other way around. For calibration we are only plotting predicted probabilities against true observed frequencies; there is no threshold anywhere. And once you recalibrate your model, you will almost certainly want to choose a new threshold to get a classifier out of it. Yes, question. [inaudible] This is not the graph for choosing thresholds; this is for checking whether the predicted values can be thought of as probabilities. [inaudible] We'll get into how to choose thresholds probably tomorrow or the day after, when we talk about evaluation metrics; for now we're talking about calibration. Yes. [inaudible] How often is this done in practice, looking at a calibration plot? Extremely commonly. Any time you want to take a prediction from a model and take some action based on it. In fact, I would call this closer to practice than to research: especially if you are constructing an application pipeline where decisions are taken automatically based on the model's output, you want to make sure that your model is well calibrated. Yes. "I guess the reason we are doing this is to maximize accuracy [inaudible], but isn't accuracy just a comparison of the predictions?"
So calibration is completely unrelated to accuracy, and let me show you why. The question is whether calibration equals accuracy, or whether there is any relation between the two. If your model is very accurate, must it also be calibrated? If your model is perfectly calibrated, must it also be accurate? Any guesses? I heard some nos and some yeses. The answer is no for both, and here are examples.

Suppose you have a dataset with an equal number of positives and negatives, and your model assigns probability 0.51 to every positive example, just above the 0.5 threshold, and probability 0.01 to every negative example. This model is perfectly accurate: threshold at 0.5 and every example is classified correctly. But plot the calibration curve. For the lowest bins it will be close to the diagonal, but in the bin around 0.51 the observed frequency is 1, not 0.51, so the curve is far from a straight line. Perfectly accurate, yet not calibrated.

What about the other way? Again consider an example where half the examples are positive and half are negative, and ask: if a model is perfectly calibrated, is it accurate? Think of the model that assigns probability 0.5 to every example. This is perfectly calibrated: among all the predictions of 0.5, the true frequency is indeed close to 0.5. But such a model has no discriminative power at all; it assigns the same probability to everything, so its accuracy is no better than chance. So calibration and accuracy are distinct concepts; they are not necessarily related to each other. (A quick numeric check of the second example is below.)
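A quick numeric check of the second example (my own toy illustration): a constant 0.5 predictor on balanced labels is perfectly calibrated yet has only chance-level accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)           # balanced 0/1 labels
y_hat = np.full_like(y, 0.5, dtype=float)     # constant 0.5 prediction

# Calibration: in the single occupied bin (0.5), the observed frequency
# of positives matches the predicted probability.
print(y[y_hat == 0.5].mean())   # ~0.5: perfectly calibrated

# Accuracy: thresholding a constant score gives chance performance.
pred = (y_hat > 0.5).astype(int)
print((pred == y).mean())       # ~0.5: no discriminative power
```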
Yes? [inaudible] Right: the calibration plot alone will not tell you whether your model is useful, but it should be something you look at. A common thing to do is to look at the calibration plot in relation to the histogram of predicted probabilities. Take the set of all y-hats and look at the histogram of how your model assigns probabilities. If your model is well calibrated and the histogram has a peak near 0 and a peak near 1, then you're in good shape. We call such a model discriminating: it has sharpened its predictions, its predictions are confident, it produces either a very low or a very high probability, and if it is also calibrated, then you have a good model. If instead the histogram of predicted probabilities is bunched in the middle, your model is not very confident, and without confidence you don't have good discrimination power; but it may still be calibrated. So in practice you want to look at both simultaneously. [inaudible] We'll come to ROC curves probably tomorrow or on Friday.

All right. So now the question is: how is calibration related to the maximum entropy we talked about? As we mentioned earlier, the maximum entropy principle tells us that the probability distribution we estimate should be as uncertain as possible, subject to the constraints that we have, and that will naturally lead to probability estimates that are well-calibrated. Why is that? Let's see. Maximum entropy, we saw earlier, leads to maximum likelihood, and in maximum likelihood the objective of interest is the log-likelihood: we use the negative log-probability of the observed outcome as the loss, as a function of theta. Now, to understand the connection between calibration and MLE, or maximum entropy, we want to rethink the notion of a loss function slightly. Generally we think of a loss as a function of y and y-hat, where y is the observed ground truth and y-hat is our prediction. To get a better understanding of calibration, you want to instead think of a scoring rule that takes two inputs: a predicted probability distribution p-hat, and y, the actual observed outcome. P-hat is not a point estimate but a full predicted probability distribution. It so happens that in logistic regression the single scalar probability estimate determines the full distribution over the two outcomes; but in general, for example in a regression setting, think of p-hat as a full distribution, say a Gaussian with a mean and a variance, over all possible outcomes. So our model outputs p-hat, a full probability distribution over all possible outcomes, and we observe the actual outcome. If we are in a weather-forecasting setting, say trying to predict tomorrow's wind speed, p-hat would say: a 5% probability that the wind will be 10 miles per hour, a 7% probability that it will be 15 miles per hour, and so on. It gives you a full probability distribution over the entire outcome space.
In the case of a binary decision, whether it's going to rain tomorrow or not, where all you care about is rain versus no rain and not how much it rains, the outcome space has just two elements, and the prediction is, say, 30% that it will rain and therefore 70% that it will not, or 50% and 50%. It is still the full probability distribution over the space of outcomes.

Can we assume a Bayesian setting? We're not assuming a Bayesian setting. All we have is some predicted probability distribution and one observed outcome; the next day we again output a full probability distribution and there is again one observed outcome. "In the classical setting we always have a frequentist distribution?" The difference between Bayesian and frequentist is in how we treat the parameters. Here these are just predictions; this is not Bayesian versus frequentist, it's more general.

Now we have this notion of what is called a proper scoring rule. A scoring rule f takes a probability distribution and an outcome and maps them to a real value. Think of it as taking the forecast made by a forecaster, observing the actual outcome, looking at the two, and assigning a score of how well the forecaster did: it takes a prediction, a full probability distribution, and one observed outcome, and assigns a score. A scoring rule f is called proper if it satisfies the following inequality: the expectation of f(q, y), where y is sampled according to q, is less than or equal to the expectation of f(p, y), with y still sampled according to q, for all p and q. And f is a strictly proper scoring rule if equality implies p = q. What does this mean? We have two instances of the scoring rule being evaluated. F is a proper scoring rule if, when the actual outcomes follow a distribution q, the average score assigned by f to the forecast q itself is no worse than the average score assigned by f to any other forecast p. You want to think of q as the way the weather actually behaves, so the y's are sampled according to how the weather actually happens in real life, and the probability distribution passed to f is the prediction we are making. (The definition is restated in symbols below.)
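In symbols, with the score read as a loss (smaller is better), the definition on the board is:

```latex
\begin{aligned}
f &: \;\mathcal{P} \times \mathcal{Y} \to \mathbb{R}, \\
f \text{ proper} \;\iff\;& \mathbb{E}_{y \sim q}\big[f(q, y)\big] \;\le\; \mathbb{E}_{y \sim q}\big[f(p, y)\big] \quad \text{for all } p, q \in \mathcal{P}, \\
f \text{ strictly proper} \;\iff\;& \text{equality holds only when } p = q .
\end{aligned}
```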
The best possible prediction is to predict the true distribution itself: any other prediction, one that is not the actual weather, will get a worse score according to this scoring rule. If this holds for all p and q, then f is a proper scoring rule. Next question? "Can you explain why any other distribution has a greater expected value?" Here you should think of the score as a loss: higher is worse, so we want f to assign low values to better predictions. "Why can't we start with calibration directly?" I'm going to come to that. So this is a proper scoring rule. From this, it's easy to see that if we recover the best possible prediction, our model is naturally calibrated, because the best possible prediction is q itself; that is the lowest loss that can be attained. So using proper scoring rules as our loss functions will encourage our models to be better calibrated. Any p other than q attains a worse loss than predicting q itself, and when you predict q itself, your calibration curve is a straight line: if y is sampled from q such that it is 1 eighty percent of the time, then we want to predict 80%, because that's the best thing to do according to the scoring rule. Does that make sense? Yes, question?

"Now I don't understand why a well-calibrated model isn't accurate: it seems like you're recovering the distribution from which y is sampled, so a well-calibrated model should imply accuracy." It does not necessarily imply accuracy, because what we are matching is average scores in the long run. Suppose it rains 50% of days on average. Your predictions may be such that half the time you predict rain and half the time you predict no rain; in the long-run average you may match the frequencies, yet on any given day you may end up predicting the wrong thing. So this is not accuracy: a proper scoring rule encourages your model's long-run observed frequency to match the predicted frequency, the predicted probability.

And now the question is, how is this related to maximum entropy? But first: do we have a consensus that if we employ a proper scoring rule, then because of this property the predicted probabilities will tend to be calibrated? Or is there a question on that? Yes, question. [inaudible] "Then it would also have a better estimate with the predicted [inaudible]; why not use [inaudible], since it follows the same inequality?" I'm sorry, I didn't understand the question. "[inaudible] even in that case, wouldn't the inequality be the same? The loss would be less than the given loss." What do you mean by y-hat?
"You wrote the scoring loss on the left side. If you directly use a loss function where y-hat is theta-transpose x, don't you get the same inequality?" Well, it may or may not hold. In the case of logistic regression, y-hat completely determines the entire distribution over the two outcomes. But in regression that's not the case: a point estimate is not a distribution over all the possible y's. So in logistic regression the point estimate of the probability is the full distribution, but that does not hold in more general settings like regression.

So, assuming we have a consensus that employing a proper scoring rule encourages us to recover the true probability distribution, in which case the model will tend to be calibrated, the question is how this relates to maximum entropy. We saw that maximum entropy gives rise to MLE, and MLE is essentially this: the population-level loss is the expectation, over y drawn from q, of minus log p-hat(y). Think of this as the true loss of maximum likelihood: y is distributed according to some data-generating distribution q, the model predicts p-hat, and the loss is the negative log-likelihood of the observed outcome under p-hat. The y's are the 0s and 1s, sampled according to q, and p-hat is the predicted probability distribution; we evaluate y according to minus log p-hat(y). What remains to be shown is that this is a proper scoring rule.

The way we show it is pretty simple. We want to show the inequality: the expectation over y from q of minus log q(y) is less than or equal to the expectation over y from q of minus log p(y). Move the left-hand side over: we want to show the difference is greater than or equal to zero. Combining the logs, the difference is the expectation over y from q of log(q(y) / p(y)), and that is exactly the KL divergence between q and p, which is always non-negative. So: the maximum entropy principle is equivalent to maximum likelihood estimation; maximum likelihood uses the negative log-likelihood as its loss function; and that loss function, because KL divergence is non-negative, is a proper scoring rule. And because it is a proper scoring rule, the estimated probabilities will tend to be well calibrated. That's the multi-step connection between maximum entropy and calibration. (The key step is written out below.)
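Writing out the step just sketched: the log loss is proper because the gap between forecasting p and forecasting the truth q is exactly a KL divergence,

```latex
\begin{aligned}
\mathbb{E}_{y\sim q}\!\big[-\log p(y)\big] - \mathbb{E}_{y\sim q}\!\big[-\log q(y)\big]
&= \mathbb{E}_{y\sim q}\!\left[\log \frac{q(y)}{p(y)}\right] \\
&= D_{\mathrm{KL}}(q \,\|\, p) \;\ge\; 0,
\end{aligned}
```

with equality if and only if p = q, so the log loss is in fact strictly proper.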
So maximizing entropy therefore leads to your model being well-calibrated. Yes, question. [inaudible] This is the loss function you have been using for logistic regression; it's the same thing, no different. [inaudible] Yes, this is the cross-entropy loss, and that is the same as maximum likelihood estimation; you can work out the math, they are exactly the same. There are other kinds of entropy measures you can use. Here, for maximum entropy, we used Shannon's entropy, minus the sum of p log p. There are different kinds of entropies, not just Shannon's, and by using a different entropy you get a different kind of divergence in place of the KL divergence, and a different loss function. All of those will still lead to calibrated models, as long as the scoring rule you end up with is a proper scoring rule. That's the general takeaway here. We're out of time; we'll start with the EM variants and the variational autoencoder in the next class. Thanks.
Stanford CS229: Machine Learning, Summer 2019 (Anand Avati). Lecture 9: Bayesian Methods, Parametric and Non-Parametric.
Welcome back, everyone. This is Lecture 9, and today we're going to talk about Bayesian methods. We'll cover two different approaches: a parametric approach and a non-parametric approach. As an example of the parametric approach we're going to look at Bayesian linear regression, and as an example of a Bayesian non-parametric approach we're going to look at Gaussian processes.

Before we jump into today's topics, a quick recap of what we covered in Lecture 8, which was all about kernels. We saw that a function K of two arguments, x and z, typically two examples from your training set or your test set, is called a kernel if K(x, z) can be represented as phi(x)^T phi(z) for some feature map phi that maps your examples to a p-dimensional real space, where p can be infinite. Some properties of kernels: first, a kernel needs to be symmetric in its arguments, so x and z can swap and it evaluates to the same value. Second, for any finite collection of examples, and these could be any examples, not necessarily your training set, the kernel matrix constructed so that K_ij equals the kernel evaluated on the i-th and j-th examples is positive semi-definite. Mercer's theorem tells us that property 2 is a necessary and sufficient condition for K to be a kernel: if you have a function K that satisfies it, then K necessarily can be represented as phi(x)^T phi(z). So this was the definition of a kernel, these are its properties, and Mercer's theorem says we can go back and forth: the set of properties is equivalent to the definition.

An informal way of thinking about Mercer's theorem: a matrix, possibly infinite by infinite, is PSD if and only if every finite sub-matrix is PSD. And the way to see why a kernel represented as phi^T phi is PSD is that this is like an eigendecomposition, some kind of symmetric decomposition: you take the possibly infinite-dimensional matrix K and split it into some Phi, where Phi is a collection of columns and each column is a function phi_i(x). For example, if phi(x) = (x squared, x cubed, log x), then phi_1(x) is x squared, phi_2(x) is x cubed, and so on; each phi_i corresponds to one such function. That's the general idea to have about kernel matrices: the features of the feature map implicit in the kernel are like the eigenbasis of that PSD matrix. (A small numeric check of property 2 is sketched below.)
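As a quick sanity check of property 2 (my own illustration, not from the lecture), here is a Gaussian (RBF) kernel matrix on a few random points; its eigenvalues come out non-negative up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))  # 8 examples in R^3

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel, a valid kernel by Mercer's theorem."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

# Kernel matrix K_ij = K(x_i, x_j) over the finite collection.
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X])

eigvals = np.linalg.eigvalsh(K)  # symmetric matrix, so use eigvalsh
print(eigvals.min() >= -1e-10)   # True: PSD up to numerical precision
```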
The other thing we covered last class was SVMs, a kind of classification algorithm that tends to work well with kernel-based methods. The reason I say it tends to work well with kernel-based methods is that kernel methods can also be applied to linear regression and to logistic regression, but the number of parameters we store with a kernel method is generally equal to the number of examples we have, whereas with classical linear or logistic regression the number of parameters we store equals the number of features. So we move from storing parameters on the order of the number of features to kernel methods, where we store one coefficient per example. This means that, in general, you would expect kernel methods not to work very well when your data set grows a lot. The SVM was a special case where the coefficients on the examples tend to be sparse, meaning most of the coefficients are zero, and that makes the SVM very attractive as a kernel method: not only do you get scalability to an infinite number of features via the feature map, you also get scalability in the number of examples, because the coefficients are sparse. Any questions about that before we move on to today's topics? Okay, cool.

So let's jump in: Bayesian methods. The methods we've seen so far can loosely be called frequentist methods, because we assumed that the unknown parameter theta is some unknown constant: vector-valued, but an unknown constant. The way we went about estimating theta was to define a likelihood L(theta) as the log-probability of the data parameterized by theta, consider the negative of L(theta) to be our loss function, and estimate theta-hat as the argmin over theta of the loss, which is the same as the argmax over theta of L(theta). That is the maximum likelihood estimate of the parameters, and in doing this we implicitly assume theta is some unknown constant.

Today we're going to look at a different approach to the problem, where theta is a random variable that is unobserved. This small change, which may seem innocent, is the fundamental difference between Bayesian methods and frequentist methods: in frequentist methods there is some unknown constant theta, whereas in Bayesian methods there is a random variable theta which is unobserved. Now, the moment we think of theta as a random variable, there needs to be some probability distribution associated with it; just calling something a random variable is incomplete without saying what the associated distribution is. So the first thing we do in Bayesian methods is assign some prior probability distribution to theta: we believe theta comes from some prior distribution. Then we observe our data, which we believe comes from the distribution p(x | theta). This generally has the same mathematical form as the likelihood function, but it is philosophically different, in the sense that we are implicitly treating theta as having a prior distribution.
But the functional form of p(x | theta) and the functional form of p(x; theta), parameterized by theta, are exactly the same; the interpretation is different. And the main difference between the two approaches is this: in the classical method, we perform maximum likelihood on the log-likelihood to get a point estimate of theta, whereas in the Bayesian method, we apply Bayes' rule. We come up with p(x | theta), and the posterior is p(theta | x) = p(x | theta) p(theta) / p(x). We don't directly know what p(x) is, but p(x) is the integral over theta of p(x | theta) p(theta): take the numerator and integrate out theta. Is it obvious why that integral equals p(x)? If not: the integrand is just p(x, theta), since p(x, theta) = p(x | theta) p(theta), and marginalizing theta out of the joint leaves p(x). So this is the Bayesian approach. In the Bayesian approach there is no loss function: we don't take maximum likelihood estimates, we don't calculate gradients, we don't do gradient ascent or descent, and there are no closed-form MLE estimates. All we do is apply Bayes' rule and update our beliefs about theta given the data, which is why it's called the Bayesian method. The result, p(theta | x), is called the posterior distribution, and computing it is essentially the counterpart of performing a maximum likelihood estimate.

In machine learning, and especially in supervised learning, we are trying to learn a mapping between x's and y's, so the data also includes y's. In supervised learning, you start with theta having some prior distribution, and then we observe (x, y), where y comes from p(y | x, theta). Here we make no assumptions about the distribution of x: the x's are just given to us somehow. We only assume that y given x follows a model with parameter theta, whose prior we have specified. From this we calculate the posterior p(theta | x, y) = p(y | x, theta) p(theta | x) / p(y | x), and with the additional assumption that theta is independent of x, the factor p(theta | x) is just p(theta). So the posterior is p(theta | x, y) = p(y | x, theta) p(theta) / p(y | x). We calculate the posterior, and the posterior is on theta. (These formulas are consolidated below.)
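For reference, the supervised-learning version of the recipe in one place, under the same assumptions as above (the x's are given, and theta is independent of x):

```latex
\begin{aligned}
\theta &\sim p(\theta) && \text{(prior)} \\
y \mid x, \theta &\sim p(y \mid x, \theta) && \text{(likelihood)} \\
p(\theta \mid x, y) &= \frac{p(y \mid x, \theta)\, p(\theta)}{\displaystyle\int p(y \mid x, \theta')\, p(\theta')\, d\theta'} && \text{(posterior, using } \theta \perp x\text{)} .
\end{aligned}
```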
And from the posterior distribution, we construct what is called the posterior predictive distribution: p(y* | x*, x, y), where x* is a test example for which we observe only the input, and (x, y) is the training set, for which we observed both inputs and outputs. This is the integral over theta of p(y* | x*, theta) times p(theta | x, y). So this is the general recipe for using Bayesian methods in machine learning. If you are only interested in theta and not in making predictions, this is more classical Bayesian statistics: you come up with a posterior distribution on theta and you're done. But in machine learning we want to make predictions on unseen examples: we are given (x, y) and want to predict y* for x*.

Yes, question? "For the posterior, should it be [inaudible]?" Yes: strictly it should be p(theta | x), but because we make theta and x independent, that is just p(theta). Good question. We make the assumption that the x's on which we condition to observe the y's are independent of theta; it's a pretty common assumption.

Now, the way to think about this equation, which looks pretty complex, an integral over the product of two probabilities with theta integrated out: here y* and x* are the test quantities and (x, y) is the training data, and the equation has a very nice interpretation. With the posterior distribution, we have a probability estimate of how likely each given theta is to be the correct theta: p(theta | x, y) is a probability distribution over theta, and theta is essentially our model. Now look at the integrand and think of theta as some fixed vector: p(y* | x*, theta) is the prediction of y* for that given value of theta. What we are essentially doing is considering the set of all models, say model 1, model 2, model 3, corresponding to full parameter vectors theta^(1), theta^(2), theta^(3). Each of these makes a prediction of y*, and we compute a weighted average of all those predictions, where the weights are decided by the posterior distribution. So the posterior gives you a distribution over models, and once you know theta, you can make a prediction, for example in logistic regression.
If you know theta, in logistic regression you compute x^T theta, get a scalar, run it through the sigmoid function, and that gives you p(y), the probability of whether y is 0 or 1. What's happening here is that we consider every possible value of theta; there are infinitely many possible values your theta vector can take, and your posterior distribution gives you a probability distribution over all those values: the probability that each given value of theta is the right one. That's what the posterior distribution is tracking. And for prediction, we take the prediction from every one of those infinitely many possible values of theta and aggregate them by a weighted average, where the weights are decided by the posterior distribution.

Yes, question? "There is a true underlying distribution for this theta, and one of these three models is closest to that actual distribution; why do we take the contribution from the other models when one of them actually represents it most closely?" The answer is that we are not sure. The posterior distribution tells us it is likely coming from this model; it may also be coming from that model. My belief that it's coming from this model might be 21%, while my belief that it comes from that model is 20.1%. The posterior gives you a distribution over the parameters, which is our belief about what the true value of theta is. "So then it's highly susceptible to our choice of prior?" Exactly. The choice of prior determines what the posterior distribution is going to be, and in a way Bayesian methods are subjective, because the subjective element of the person doing the modeling enters through the prior. If the modeler believes theta comes from a Gaussian prior, you get one posterior and therefore make one kind of prediction; whereas if, all else being the same, you believe theta comes from, say, a Laplace distribution, your prediction will be different. This is probably the biggest criticism of Bayesian methods: there is a lot of subjectivism in play, and it comes in through the choice of prior. "In the prior, can any value of theta be assigned zero probability?" If your prior distribution over theta assigns zero probability to some value of theta, then necessarily the posterior is also zero there. In general, it's a bad idea to choose a prior that assigns zero probability to models you might need.

So you want to think of the predictive distribution as an expectation: the expected value of p(y* | x*, theta), where theta is sampled from the posterior p(theta | x, y). From the training set you construct the posterior, and then you take the prediction p(y* | x*, theta) with theta a random variable distributed according to the posterior; that expectation equals the integral above. (A Monte Carlo sketch of this averaging is below.)
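As a sketch of this model-averaging view (my own illustration; in practice the posterior here would come from an actual Bayesian fit, so the Gaussian below is a stand-in), the posterior predictive for a logistic model can be approximated by Monte Carlo: sample thetas from the posterior and average the per-theta predictions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Stand-in posterior over theta, pretending p(theta | x, y) is this Gaussian.
post_mean = np.array([1.0, -2.0])
post_cov = 0.1 * np.eye(2)

x_star = np.array([0.5, 1.5])  # test input

# Monte Carlo posterior predictive:
# p(y* = 1 | x*, data) ~= (1/M) sum_m sigmoid(theta_m . x*), theta_m ~ posterior.
thetas = rng.multivariate_normal(post_mean, post_cov, size=5000)
p_star = sigmoid(thetas @ x_star).mean()
print(p_star)
```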
Yes, question. "Why is this any more subjective than, say, a generative model?" So the question is how this differs from the generative models we saw before, and why it is more subjective. It is more subjective because we are making yet another assumption, about the way theta itself is distributed. For example, when we studied GDA, the mu and sigma did not have priors on them; they were just unknown constants that we estimated using maximum likelihood. If you took a Bayesian approach there, the mus and sigmas of your GDA would also have prior distributions on them, and you would calculate a posterior without doing maximum likelihood.

So this is one way to think of the posterior predictive distribution: you take the prediction from every possible model configuration you can come up with, then take a weighted average of those predictions, where the weights come from the posterior distribution. And this approach is essentially what one might call parametric. We call it parametric because p(y | x, theta) has some functional form for which theta is a parameter. For example, it could be the logistic form, 1 over 1 plus e to the minus theta^T x, and because of this functional form, the only degree of freedom we have is to change theta. That, in a way, limits the set of all possible models we can consider: the set of all possible predictive functions is limited by the functional form, and different values of theta give different specific predictive functions, but the set of all of them is still somewhat limited. One way to see that limitation: suppose we have a regression problem. What's the correct hypothesis form for it? Maybe it's linear, or maybe quadratic, or maybe cubic. Or maybe, as you observe more data, it's a much higher-order polynomial, because there is some well-predictable variation in it. By limiting ourselves to a certain family of parametric functions, we are essentially saying that no matter how much data is given to us, we cannot be flexible enough to fit all this variation, even though it is probably not noise but true signal. That is the fundamental limitation of parametric models: the functional form of the hypothesis function limits us from being flexible even in the presence of a lot of data that suggests there are other patterns. Which is why there is interest in non-parametric methods. The way to think of non-parametric methods is that we consider the set of all possible functions, and that seems pretty expansive, right?
That is why there is interest in non-parametric methods. Non-parametric methods, on the other hand, consider the set of all possible functions, and that is a vast class: it is hard to even imagine writing it down on a piece of paper. And then we let the data decide which function from that set we actually want. So in non-parametric methods we generally work with some function f and assume no functional form whatsoever; f could be any kind of function. Before we get into the details, let's look at Bayesian linear regression to get a better feel for how the Bayesian method works in the case of regression, and then we'll move on to non-parametric methods, that is, Gaussian processes. So, Bayesian linear regression. Again, we start with a training set x_i, y_i, for i equals 1 to n. This is the full training set, and it comes from some unknown distribution. We make the assumption that y_i equals theta transpose x_i plus some noise epsilon_i, where epsilon_i comes from a normal distribution with mean zero and variance sigma squared, and theta comes from a normal distribution with mean zero and covariance tau squared times the identity. This looks very similar to the standard linear regression setting, except we also put a prior on theta; that is the only extra line beyond standard linear regression. Good question: what is tau squared I? Each epsilon_i is a scalar, so sigma squared is a scalar. Theta, on the other hand, is in R^d, as is x. So the noise distribution is an ordinary one-dimensional normal, while the prior on theta is a multivariate Gaussian whose mean is the zero vector and whose covariance, tau squared I, is a diagonal covariance matrix with tau squared on the diagonal. Thanks for asking that. So we are given this, and in the standard, frequentist approach to linear regression we constructed a likelihood: l of theta equals log of p of y given x, theta, which, if you remember, works out to the sum over i of log of one over the square root of two pi sigma squared, times the exponential of minus (y_i minus theta transpose x_i) squared over two sigma squared. Everything except the squared term cancels out or becomes a constant factor, and we end up with the squared loss, so performing maximum likelihood was the same as minimizing least squares. That was the frequentist setting. What about the Bayesian setting? In the Bayesian setting we do not do maximum likelihood; in Bayesian linear regression we just construct p of theta given S, where S is the full training set, using Bayes' rule. And in the case of linear regression you can work it out: theta given S is distributed according to a normal distribution with mean one over sigma squared times A inverse times X transpose y, and covariance A inverse, where A equals one over sigma squared times X transpose X, plus one over tau squared times I. You can work it out.
The correct way to write this is that theta given S is distributed according to a multivariate Gaussian, with that expression as the mean vector and A inverse as the covariance matrix, where A has the form above. Yes, question? [inaudible] Even in the posterior, [inaudible]. So is the question whether p of theta given x comma y factorizes in the same way? In general, no, because independence may or may not be preserved once you condition on extra variables. Was there a question? Yes. [inaudible] That is an arrow over y, which says it is a vector. [inaudible] Yes, this is the y from the training set. This expression looks a little cryptic, so let's simplify it. If I bring the one over sigma squared inside A inverse, the mean becomes X transpose X plus sigma squared over tau squared times I, all inverted, times X transpose y. So we have a normal distribution whose mean has this form, and you might recognize that it looks very similar to the normal equations. In the normal equations we had X transpose X inverse times X transpose y. This is the same, except there is a small extra diagonal matrix added to X transpose X. I think somebody asked, when we were deriving the normal equations, how we could invert X transpose X; what if it is not invertible? One answer is right here. In the Bayesian method, by assuming a prior on theta with variance tau squared and noise with variance sigma squared, we get this extra diagonal matrix added to X transpose X. And you might remember that if you take a matrix and add a positive multiple of the identity, you shift every eigenvalue up by that amount. So even if some eigenvalues were zero, adding this diagonal matrix makes them all nonzero, and that makes the matrix invertible. This is also called regularization; we will probably see it again in the context of regularization. Adding this term is effectively equivalent to regularizing our model. And, maybe this is a giveaway, but this observation will probably solve a big chunk of one of the problems on your next homework. So theta given S in the Bayesian setting is a distribution centered at this value, which is very close to the maximum likelihood estimate, with some covariance around it. That was the posterior. Next we construct the posterior predictive distribution. The mental picture to have is that computing the posterior is like fitting a model, even though we are not doing gradient descent; it plays the role of estimating your maximum likelihood parameters. Computing the posterior predictive distribution is like producing the function that makes predictions on new data. In the frequentist method there was no distinction between the two, whereas in Bayesian methods there is.
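Here is a minimal NumPy sketch of that posterior computation, with made-up data and variances, which also checks the "normal equations plus a diagonal" form we just derived.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
sigma2, tau2 = 0.25, 1.0                      # noise and prior variances (arbitrary)

X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Posterior: theta | S ~ N( (1/sigma^2) A^{-1} X^T y,  A^{-1} ),
# where A = (1/sigma^2) X^T X + (1/tau^2) I.
A = X.T @ X / sigma2 + np.eye(d) / tau2
post_cov = np.linalg.inv(A)
post_mean = post_cov @ X.T @ y / sigma2

# Equivalent ridge-style form: (X^T X + (sigma^2/tau^2) I)^{-1} X^T y.
ridge_mean = np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(d), X.T @ y)
assert np.allclose(post_mean, ridge_mean)
print(post_mean)
```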
Yes? [inaudible] A justification for this? There is quite a lot of algebra behind it that I don't want to go into; you can take my word that if you work it out, you get this form. Work out what algebra? Apply Bayes' rule to compute the posterior. Yes, question? [inaudible] So the question is about the posterior predictive distribution: do we have a y star? This is y star. [inaudible] Right. In the posterior predictive setting, you are given the training set x and y, and on new examples you only know x star; you need to make a prediction, since you don't know the label. For that, we estimate this probability distribution over y star. [inaudible] or just the probability? Yes, this gives us a probability distribution over y star, and if you want a point estimate, it is quite common to take the expectation of this distribution as your point prediction for y. A lot of the time we want just a point estimate, and in those cases it is normal to take this distribution and then take its expectation as y hat, a point estimate from the predictive distribution. In many other cases you actually want to hold on to the full predictive distribution, to know how much uncertainty there is in the prediction. If the predictive distribution for y star has low variance, if it is concentrated, that generally means the model is pretty confident about the value; if it has high variance, the model is less confident. Which brings us to a good point: one of the fundamental uses of Bayesian approaches is estimating uncertainty. In maximum likelihood estimation we got a point estimate theta hat for the parameters, but no uncertainty estimate came with it. Generally, if you have a lot of data you should have more confidence in your estimate of theta, and if your data is small, less confidence; frequentist point estimates don't give you that confidence estimate by themselves. With Bayesian approaches we estimate p of theta given S, and what you will see is that if you have a lot of data, the posterior is sharply peaked on certain values of theta, and if you have less data, the posterior is more spread out. Similarly for predictions on y star: if your posterior is peaked and x star lies in the region your training data came from, your predictions will be confident; p of y star given x star will be concentrated. But if the point is far from where your training set was, the posterior predictive distribution will naturally have high variance.
That is one of the benefits of Bayesian approaches: you get uncertainty estimates for free, because you are estimating the full distribution over parameters and predictions. Yes, question? [inaudible] So where does the likelihood function appear in the Bayesian method? p of theta given S equals p of S given theta times p of theta, divided by p of S. The p of S given theta term is your likelihood function. In the frequentist approach you can still look at confidence intervals, right? Yes, in frequentist approaches you can get confidence intervals on theta, but the theory for uncertainty estimates is more fully developed on the Bayesian side. And the way you interpret a confidence interval is quite different from the variance here. A 95% confidence interval from a frequentist procedure tells you: if you were to repeat the experiment many times, constructing the interval each time, then 95% of those intervals would contain the true theta. It does not tell you that the true theta lies in your particular interval with 95% probability. Whereas with the posterior you get exactly that kind of statement: it tells you the probability that the true theta lies in a given interval. That is the difference between confidence intervals and this. Yes, question? You say you want to reduce the variance of theta; is that [inaudible]? What's the question again? You have the distribution over theta [inaudible] versus the variance of [inaudible]. So, am I right that we need to differentiate, find the minimum of A inverse, to get the minimum variance of theta? So the question is whether we should differentiate and optimize A to get low variance. Here there is no optimization going on. There is no loss function, no gradient descent, no optimization anywhere. All we do is condition: we condition on the data and use Bayes' rule to construct a distribution, and what we get is what we get. It is very different from the frequentist approach, where we minimize some loss or maximize some likelihood to get estimates of theta. Here we just apply Bayes' rule. But don't we wish the variance to be low? We do wish the variance to be low, and the way it becomes low is by having a good amount of data. If you have a lot of data, the variance will naturally be low. All right, moving on. The posterior predictive distribution works out to: y star, given x star and S, follows a normal distribution with mean one over sigma squared times x star transpose A inverse X transpose y, and variance x star transpose A inverse x star, plus sigma squared. Again, this looks a little cryptic, but we can break it down and make sense of it. What is happening is that we had a distribution for theta, which was some normal distribution.
If theta is distributed normally with some mean mu and covariance Sigma, where mu is the posterior mean and Sigma is the posterior covariance we derived, then x transpose theta is also normal, with mean mu transpose x and variance x transpose Sigma x. That is just a property of Gaussians. Now theta has this posterior distribution, and to make a prediction we take the dot product of x star with theta, because that is what linear regression does. That gives exactly the expression above: the mean is the posterior mean dotted with x star, and the covariance term is the quadratic form of A inverse with x star. Then we add the extra sigma squared because this is the posterior predictive distribution of y star: the y value you actually observe at the test example carries the same observation noise, so to account for the possible y star you might observe, you add that extra variance, and you may add it because of the independence assumption on the noise. So this might look complex, but it follows directly from the distribution of theta given S and the linear map theta transpose x star. And again, even this cryptic-looking mean is very similar to the estimate from the normal equations, with a regularization term added. Yes, question? [inaudible] Sure. Theta is what we have, and S is the training set. If theta has this distribution, then theta transpose x star has this distribution, so p of y star given x star and S is basically the posterior transformed in this way. [inaudible] Good question. Is this clear? So this is just an example of how you would apply a Bayesian method to linear regression, and it is called Bayesian linear regression. It has a few more terms and identities compared to the standard normal equations from the frequentist method, but with this extra calculation you get uncertainty estimates. Any questions before we move on to Gaussian processes? Yes, question? [inaudible] Sure. So this would be the mean, and y star equals y hat plus epsilon star. Right, good point: the extra epsilon star accounts for the extra plus sigma squared. Thank you. In general, Bayesian methods are heavy users of probability theory, because all you do is take probability distributions and condition them by applying Bayes' rule; that is about all there is to Bayesian methods. You assign everything a prior probability, then you observe something, and depending on what you observe, you condition those probabilities and update your beliefs about the unknowns. The unknowns could be your model parameters, or the unknown labels of new test examples, but you follow the same recipe: assign a prior probability to everything, observe something, and update your prior probabilities into posterior probabilities by conditioning, using Bayes' rule.
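As a quick check of the uncertainty behavior just described, here is a small NumPy sketch, with made-up data and variances, showing the predictive variance growing as the test point moves away from the training data (which here is centered near the origin).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 2
sigma2, tau2 = 0.1, 1.0
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -1.0]) + rng.normal(scale=np.sqrt(sigma2), size=n)

A_inv = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / tau2)
post_mean = A_inv @ X.T @ y / sigma2

def predictive(x_star):
    # y* | x*, S ~ N( x*^T post_mean,  x*^T A^{-1} x* + sigma^2 )
    mean = x_star @ post_mean
    var = x_star @ A_inv @ x_star + sigma2
    return mean, var

# The quadratic form x*^T A^{-1} x* grows with ||x*||, so predictions
# far from the training data come with larger predictive variance.
for scale in [0.5, 2.0, 10.0]:
    m, v = predictive(scale * np.array([1.0, 1.0]))
    print(f"x* scale {scale:5.1f}: mean {m:8.3f}, variance {v:8.3f}")
```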
Yes? We talked about linear predictors; how is it that mu transpose x becomes [inaudible]? So the question is how this becomes x star transpose times this. The expression is a scalar, and the transpose of a scalar is itself, so mu transpose x star is the same as x star transpose mu; you can just move it around. You say x transpose y is the same as y transpose x? Yes, for scalars that is always true. All right, Gaussian processes. Before we jump into Gaussian processes, it is probably a good time to remind ourselves of some basics of functional analysis that we saw earlier. There, we saw a relation between vectors and functions. The same relation that exists between vectors and functions exists between the multivariate Gaussian and Gaussian processes: multivariate Gaussians are defined on vectors, and Gaussian processes are defined on functions, and they are related in exactly the same way that vectors are related to functions. To start off with Gaussian processes, let's review some properties of the multivariate Gaussian distribution. And, to set your expectations for what is coming: the functions over which we are going to define Gaussian processes are our hypothesis functions. The idea to keep in mind is that, just as we went from vectors to functions, we can make the same leap from multivariate Gaussians to Gaussian processes as probability distributions over functions, and the functions we want distributions over are the hypothesis functions with which we make predictions. The first property of Gaussians is normalization: the integral over x of p of x; mu, Sigma equals one, where we integrate over the full vector space. That is just the definition of a probability density. Next, suppose we have a vector x that splits into two blocks, x_A and x_B, distributed according to a normal distribution whose mean stacks mu_A and mu_B, and whose covariance has blocks Sigma_AA, Sigma_AB, Sigma_BA, and Sigma_BB; call the whole things mu and Sigma. With this, we have the second property, marginalization. Given such a distribution, p of x_A equals the integral over x_B of p of x; mu, Sigma. You take the full probability distribution and integrate out some of the components: integrate out just x_B, and p of x_A turns out to be normal with mean mu_A and covariance Sigma_AA. More generally, if you have a vector x with components x_1 through x_d, distributed normally with mean mu_1 through mu_d and covariance entries Sigma_11 through Sigma_dd, and you want the marginal after integrating out, say, only x_2:
the nice property of Gaussians is that the distribution you end up with after marginalizing out x_2 is also normal, with the corresponding entry of mu removed and the corresponding row and column of Sigma removed. That is a very nice property of multivariate Gaussians: marginalizing is super easy; just delete the entries for the components you want to marginalize out and you get the right answer. The third property is conditioning. In the same setting, where x splits into two parts x_A and x_B (and here A and B are not necessarily scalars; they are sub-vectors that can have multiple components), p of x_A given x_B is a normal distribution with mean mu_A plus Sigma_AB times Sigma_BB inverse times (x_B minus mu_B), and covariance Sigma_AA minus Sigma_AB Sigma_BB inverse Sigma_BA. This is a scary-looking expression, but it is not as complex as it might first appear. To understand it a little better, let's look at the case where x_A and x_B are scalars. Suppose we have a normal vector (a, b) with mean (mu_a, mu_b) and a covariance matrix with sigma_a squared and sigma_b squared on the diagonal and rho sigma_a sigma_b off the diagonal. Any two-by-two covariance matrix can be written this way: take the diagonals, take their square roots, and write the covariance term as the correlation coefficient times the two standard deviations; that is just the definition of correlation. Let's see what conditioning looks like in this case. By the formula, a given b is normal with mean mu_a plus Sigma_AB, which is rho sigma_a sigma_b, times Sigma_BB inverse, which is one over sigma_b squared, times (b minus mu_b); and variance Sigma_AA, which is sigma_a squared, minus Sigma_AB, which is rho sigma_a sigma_b, times Sigma_BB inverse, one over sigma_b squared, times Sigma_BA, which is again rho sigma_a sigma_b. We can simplify further: cancel one sigma_b, and the mean becomes mu_a plus rho sigma_a times (b minus mu_b) over sigma_b, and the variance becomes sigma_a squared times (one minus rho squared). Now it is much simpler. What this says is: if we have a Gaussian distribution over two variables and we observe one of them, b, to have some specific value, take b, subtract its mean, and divide by its standard deviation; that is the z-value of b. Take the z-value of b, multiply it by the correlation coefficient, and transform it back into a's scale: multiply by sigma_a and add mu_a.
Take b down to its z-value, transform it by the correlation coefficient, and rescale it back into a's range. And what happens to the variance? By observing b, the variance of a is reduced by a factor of one minus rho squared, where rho measures how correlated a and b are. Yes, question? [inaudible] Rho is the correlation coefficient: the covariance equals rho times the standard deviation of a times the standard deviation of b. It is also called the Pearson correlation coefficient; it measures how well the two are correlated. If you have a two-dimensional Gaussian whose contours tilt upward, the correlation is positive, because the higher the value of one variable, the more likely the second one is also high. If the contours tilt downward, the correlation is negative; if they form a perfect sphere, the correlation is zero. Rho tells you how much information about a lies in b: by observing b, how much can you learn about a? That is basically what conditioning is doing, and the general formula is the multivariate version of this: compute something like the z-value of the b vector and rescale it into a's scale, and from the variance of a subtract away some variance, because you gain information by observing b. If you have done some advanced linear algebra, the term Sigma_AA minus Sigma_AB Sigma_BB inverse Sigma_BA is called the Schur complement; if you don't know what that is, it is not necessary here. Then we have one more property, the sum of two independent Gaussian variables: if x is distributed as normal with mean mu_1 and covariance Sigma_1, and y as normal with mean mu_2 and covariance Sigma_2, then x plus y is distributed as normal with mean mu_1 plus mu_2 and covariance Sigma_1 plus Sigma_2. This is the summation property. With these properties, believe it or not, we are more than halfway into Gaussian processes; we are just going to apply them to derive Gaussian processes. Of these properties, the scariest-looking is probably conditioning, and you can make it less scary by seeing the analogy to the two-variable case and trusting that the monstrous expression is just the multivariate generalization of it. Yes, question? [inaudible] How do you go about proving property three? I would say that is beyond the scope of this course; you can take it for granted. If you are interested, there is a book called Gaussian Processes for Machine Learning that I would point you to, and there are also some notes linked on the syllabus page with an appendix that contains a proof.
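Since the conditioning formula is the workhorse for everything that follows, here is a small NumPy sketch of it (the two-variable numbers are illustrative), computing the conditional mean and the Schur complement for a partitioned Gaussian.

```python
import numpy as np

def condition_gaussian(mu, Sigma, idx_a, idx_b, x_b):
    """Given x ~ N(mu, Sigma) with symmetric Sigma, partitioned into
    blocks a and b, return the mean and covariance of x_a | x_b."""
    mu_a, mu_b = mu[idx_a], mu[idx_b]
    S_aa = Sigma[np.ix_(idx_a, idx_a)]
    S_ab = Sigma[np.ix_(idx_a, idx_b)]
    S_bb = Sigma[np.ix_(idx_b, idx_b)]
    gain = S_ab @ np.linalg.inv(S_bb)       # scales the "surprise" in b
    cond_mean = mu_a + gain @ (x_b - mu_b)
    cond_cov = S_aa - gain @ S_ab.T         # the Schur complement (S_ba = S_ab^T)
    return cond_mean, cond_cov

# Two-variable case: rho = 0.8, so observing b should shrink the
# variance of a by the factor 1 - rho^2 = 0.36.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
mean, cov = condition_gaussian(mu, Sigma, [0], [1], np.array([2.0]))
print(mean, cov)    # mean 1.6, variance 0.36
```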
All right, so Gaussian processes. Let me draw the analogy between the multivariate Gaussian and the Gaussian process. In the multivariate Gaussian case, we had a vector (y_1, ..., y_d) distributed normally, with a mean vector (mu_1, ..., mu_d) and some covariance. In Gaussian processes, what we instead have is a function f distributed according to what we call a GP with a mean function m and a kernel k. So there is a direct analogy: in the multivariate Gaussian we had a vector with a corresponding mean vector and covariance matrix; in a Gaussian process we have functions distributed according to a mean function and a kernel, and there is a direct one-to-one correspondence between positive semidefinite matrices and kernel functions. We are now going to use properties one through four to derive Gaussian process regression in a pretty straightforward way. As you might remember, we index vectors with an index, and the corresponding operation for functions is to evaluate the function at some input. You can think of a function as an infinitely long vector holding all of its evaluations, f of x_1, f of x_2, and so on: every possible input has a position. That is not the rigorous definition, but it is a very good mental picture: the value the function takes at an input is the value of this infinitely long vector at that position. Where vectors have indices, functions have inputs. Now we are going to use the conditioning and marginalization properties of Gaussian vectors and limit ourselves to just those elements that appear in our training set and the test examples we want predictions on. That is, if we marginalize out all irrelevant examples, meaning all inputs neither in the training set nor in the test set, we are left with the vector f of x_1 through f of x_n, together with f of x star 1 through f of x star n star: the function values at the training inputs and at the test inputs. We are going to assume the Gaussian process has mean zero; most of the time we take the mean function to be the zero function, with the covariance given by the kernel function evaluated on these examples. That gives us the kernel matrix, just as we saw with Mercer's theorem: take a kernel function and evaluate it on a set of points to construct a kernel matrix, so the entries are k of x_1 and x_1, and so on, up through the test points. Mercer's theorem tells us this matrix is positive semidefinite, so this is a valid multivariate Gaussian distribution. Yes, question? [inaudible] No, it need not. [inaudible] So, should each of these entries be non-zero? No, they need not be non-zero. [inaudible] And they need not all be positive; some elements can be negative. [inaudible] Yes, every principal submatrix has to be positive semidefinite. What is the meaning of a submatrix? That is a good question. Suppose you have a matrix with n rows and n columns, and choose a few index values, say 2 and 7.
Extract rows 2 and 7 intersected with columns 2 and 7; those four entries form a principal submatrix. A submatrix of size one can therefore only be a diagonal entry: row 2 with column 7 does not give you a principal submatrix, but row 2 with column 2, that single element, does. So everything along the diagonal has to be non-negative? Yes: if you evaluate k on the same element in both slots, the result is always non-negative. So a submatrix here does not mean any arbitrary block? Right; a principal submatrix is the matrix you get by choosing the same set of indices for the rows and the columns. Good question, thank you. Now we will write this more compactly: let f be the function values at the training inputs and f star the function values at the test inputs, so that the stacked vector (f, f star) is normal with mean zero and covariance blocks K(X, X), K(X, X star), K(X star, X), and K(X star, X star), writing the star as a subscript. This is the same thing written above, where we look only at the examples of immediate concern, the training and test examples, having marginalized out the infinitely many other examples that are not of immediate concern. Yes, question? [inaudible] So the question is whether kernels act as a kind of similarity metric in general, or only in the case of Gaussian processes. The answer is that kernels always act as a kind of similarity metric, and you can use any kernel to build the covariance matrix of a Gaussian process; there is no limitation on which kernels you can use, as long as it is a valid kernel. [inaudible] Exactly: the reason is that a kernel can be interpreted as a dot product in some higher-dimensional feature space, and dot products are a good similarity metric, so to speak. Was there another question? Right. So we started with an infinite-dimensional function space and marginalized out every example that is not of immediate concern, reducing ourselves to a multivariate Gaussian distribution over the training and test examples. And with that, we are almost there. Once we are in vector land, coming from process land, all the properties we just reviewed can now be applied directly. These properties, conditioning especially, require you to invert a matrix, and now we are dealing with finite dimensions. So we have the f vector as above, and in general our y, as we saw, is f of x plus noise. It so happens, conveniently, that the noise also follows a Gaussian distribution, and we are going to use the summation property to add the two. We get (y, y star) equals (f, f star) plus (epsilon, epsilon star), and this is distributed as a normal distribution whose mean is still zero, since the mean of f was zero and the means of the errors were zero.
The covariance matrix is now K(X, X) plus sigma squared I, K(X, X star), K(X star, X), and K(X star, X star) plus sigma squared I. Any questions on this? So f followed its distribution by marginalizing out all examples not of immediate concern, and by our noise hypothesis, the observed y's are the f values plus Gaussian noise. Here we just use the summation property and add up the covariances: the entire epsilon vector follows normal with mean zero and covariance sigma squared I, so it adds sigma squared only along the diagonal, and this is the matrix we will now work with. Given this matrix, we are just one step away from finishing Gaussian processes. The probability of y star given y, x star, and x (x and x star are implicit in the indices we have chosen), in other words y star given y, X, X star, is distributed according to a normal distribution with some mean, call it mu star, and some covariance, Sigma star, where mu star and Sigma star are calculated by the conditioning rule. Apply the conditioning rule to this mean and this covariance, and we get mu star equals K(X star, X) times (K(X, X) plus sigma squared I) inverse times y, and Sigma star equals K(X star, X star) plus sigma squared I, minus K(X star, X) times (K(X, X) plus sigma squared I) inverse times K(X, X star). This is a complex-looking expression, but it is exactly the conditioning formula, where we plug K(X star, X star) plus sigma squared I in place of Sigma_AA, K(X, X) plus sigma squared I in place of Sigma_BB, K(X star, X) in place of Sigma_AB, and so on. We just apply the conditioning rule, and we get our posterior predictive distribution; this is the posterior predictive distribution of Gaussian processes. Yes, question? [inaudible] about the notation of the covariance [inaudible]? Yes, so K(X, X) here is just shorthand for the kernel matrix you get by evaluating the kernel on that set of examples. There is a one-to-one relation between covariance functions and kernel functions: any kernel function can be used as a covariance function. That is probably the most important takeaway from this class, that kernel functions and covariance functions are essentially the same. So this is the posterior predictive distribution for Gaussian processes, and while the expression looks complex, everything in it is straightforward linear algebra: we have a set of x's and a set of x stars; just compute this and you get the posterior predictive distribution for the test examples. Notice that y star appears nowhere in it; that is important, because y star is what we are predicting. We only have the y's, the x's, and the x stars. Given this, in order to construct a good GP model, the most important decision step, or rather the only decision step, is to come up with the right kernel function, the right covariance function. Once you choose the kernel, everything else is pretty much set in stone; everything else just flows along, as in the sketch below.
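Here is a minimal NumPy sketch of Gaussian process regression using exactly these formulas, assuming an RBF kernel and a toy one-dimensional dataset. All names and numbers are illustrative, and a practical implementation would use a Cholesky solve instead of an explicit matrix inverse.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * lengthscale ** 2))

rng = np.random.default_rng(0)
sigma2 = 0.01
X = rng.uniform(-3, 3, size=(20, 1))            # training inputs
y = np.sin(X[:, 0]) + rng.normal(scale=np.sqrt(sigma2), size=20)
X_star = np.linspace(-3, 3, 100)[:, None]       # test inputs

K = rbf_kernel(X, X) + sigma2 * np.eye(len(X))          # K(X, X) + sigma^2 I
K_star = rbf_kernel(X_star, X)                          # K(X*, X)
K_ss = rbf_kernel(X_star, X_star) + sigma2 * np.eye(len(X_star))

# Posterior predictive: just the Gaussian conditioning rule.
K_inv = np.linalg.inv(K)
mu_star = K_star @ K_inv @ y
Sigma_star = K_ss - K_star @ K_inv @ K_star.T
print(mu_star[:5], np.diag(Sigma_star)[:5])
```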
There is no loss function, no gradient descent, no fitting of models. All we do is take the x's and y's in our training set and remember them, choose a kernel function, and wait for test examples. Once we get our first test example, we plug it into the expression and obtain our predictive distribution. And, as we covered with SVMs, because this is a kernel-based method, a consequence is that we need to remember the training set all the way into test time; that shows up here because you need the x's and y's from your training set in order to make the prediction. Yes, question? [inaudible] what we get is very similar to a generative model. Can you expand on that a little? [inaudible] Yeah. The fact that the posterior ended up being Gaussian is a property of Gaussians: we saw that when you start with a Gaussian distribution and condition on a few variables, what you end up with is also Gaussian. That is a property fairly unique to the Gaussian: start with a multivariate Gaussian and marginalize out a few components, you still have a Gaussian in a smaller-dimensional space; start with a Gaussian vector and condition on a few components, you still get a Gaussian over what remains. That is why the posterior here is Gaussian: we conditioned on the y's we observed in the training set. Why is this called non-parametric? The reason is that we did not commit to any functional form for our prediction function. Isn't mu star and Sigma star a functional form? You can think of them as a functional form, but what you will realize is that there was no fundamental limit on what those values could have been. Next I am going to show a visualization that will probably give you a better hint of why this is called non-parametric; maybe that will answer the question. Any other questions on this? Yes, question? Can we do the same thing for other exponential families? Great question. A Gaussian process is a type of stochastic process, and if you are familiar with stochastic processes, you can think of them as a collection of random variables with some index set; a Gaussian process is a collection of random variables whose index set is the domain of the function. In theory you could go through the same exercise for distributions other than the Gaussian, but other distributions may not have the right structure. First of all, they may not have multivariate versions: for example, there is no multivariate Poisson. You can have a collection of Poisson variables, but there is no multivariate version of the distribution.
You do still have, for example, a Poisson process, but it does not have the nice property that conditioning on something leaves you with the same kind of object. That is true essentially only for Gaussian processes: condition on something, and what remains is still Gaussian; marginalize something out, and what remains is still Gaussian. [inaudible] So I guess the question is: what if your data is distributed according to some other exponential family? The most common approach you will see changes the role of f. Instead of y being f plus epsilon directly, you pass f through some kind of activation function, like the g function we used in GLMs: you let the natural parameter be distributed according to a Gaussian process, and the observations y then go through that nonlinear transformation. You can do something like this, but once you do, you lose the conditioning ability. In Gaussian processes, because the noise is additive and also Gaussian, the y's come out jointly Gaussian again. If the transformation is nonlinear, not a simple sum of two Gaussians, the y's will no longer be normally distributed, and you cannot do the conditioning in closed form. So you can do it in principle, but you do not get these simple expressions. Next question? So the question is whether these properties hold for other exponential families. These properties are very particular to the Gaussian, which is what makes Gaussian processes so nice to work with. You may also have come across, especially if you have some background in finance, Brownian motion and Wiener processes. The way to connect them with what we saw today: a Wiener process, or Brownian motion, is a Gaussian process with kernel k of s, t equal to min of s, t. For that particular choice of kernel, you get the Wiener process. If you don't know what a Wiener process is, you don't have to worry about it; but if you know Brownian motion from finance, it relates to today's material through that particular choice of kernel function.
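As a quick aside, here is a sketch of that connection: one approximate Brownian motion path drawn as a sample from a zero-mean Gaussian with the min kernel on a grid of times. The grid size and the small jitter added for numerical stability are arbitrary choices.

```python
import numpy as np

# Sample an approximate Brownian-motion path by drawing once from a
# GP with kernel k(s, t) = min(s, t), evaluated on a fine time grid.
rng = np.random.default_rng(0)
t = np.linspace(1e-3, 1.0, 500)               # start above 0 (k(0, 0) = 0 is degenerate)
K = np.minimum(t[:, None], t[None, :])        # kernel matrix: K[i, j] = min(t_i, t_j)
L = np.linalg.cholesky(K + 1e-9 * np.eye(len(t)))   # jitter keeps Cholesky stable
path = L @ rng.normal(size=len(t))            # one draw from N(0, K)
print(path[:5])
```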
All right, let's do some visualization. Where's my browser? There you go. This is a webpage I preloaded, from a website called distill.pub; I would highly encourage you to visit it. There are great articles there explaining ideas in machine learning, and I am a big fan of it; you probably should be too. Anyway, they have some really cool visualizations of Gaussian processes, and I am going to scroll all the way down to the very last one. Here we go. This is an abstract function space: the x-axis is our space of inputs, which need not be just a scalar real value; think of it as some abstract space. Now, the moment we choose a kernel, and there are three choices here, an RBF kernel (also called the Gaussian kernel), a periodic kernel, and a linear kernel, that choice defines a prior on our function space. The curves we see here are sample functions drawn from that Gaussian process prior. Once we have the prior, we can observe our training data and start conditioning on it. If you click over here, that means we are conditioning on this particular input having this particular y value, and as we keep conditioning on observations, the posterior distribution of the Gaussian process takes this form. What that means is: for any test example with an x value here, if we marginalize out everything except that single vertical slice, the slice is our posterior predictive distribution for the new x star. For test examples in this region the posterior predictive distribution has very small variance, meaning we are very confident of what those values should be; whereas over here, in regions where we have not seen much training data, the predictions will be broad, with very high variance. We are less confident in regions with few nearby training examples, as opposed to regions where the posterior is tight and the predictions are confident. Next question? So, back to the question: does this look like overfitting? This is a fundamental trade-off between parametric and non-parametric models. Non-parametric models are extremely flexible: if your data follows some pattern, your GP posterior will follow that pattern. There is a common criticism that non-parametric models can overfit pretty easily, but in cases where you have a reasonable amount of data, they tend to work pretty well. This might look like overfitting, and, as they say, it lies in the eye of the beholder. Overfitting is a loose concept in general; it is hard to give a formal definition of what it means. You could call this overfitting. For example, if you use a different kernel, it looks like this: unconditioned, this is the prior, and the gray lines are samples from that GP prior. As we start observing training data, we are basically just conditioning, and the pink band represents roughly the 95% range of the posterior distribution (without conditioning, it is the 95% range of the prior). Or you could compose the two kernels and get something like this. So yes, you can construct kernels by taking existing kernels and composing them; you will see some of that in Homework 2, where we will probably do some kernel composition. Any function that obeys the defining properties of a kernel is a kernel, as in the sketch below.
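Here is a small sketch of that composition idea, assuming an RBF kernel and the standard periodic kernel (the specific lengthscales and period are arbitrary). Products and sums of valid kernels are themselves valid kernels, and we can draw prior samples from the composed GP just as the visualization does.

```python
import numpy as np

def rbf(s, t, ell=1.0):
    return np.exp(-(s - t) ** 2 / (2 * ell ** 2))

def periodic(s, t, p=1.0, ell=1.0):
    return np.exp(-2 * np.sin(np.pi * np.abs(s - t) / p) ** 2 / ell ** 2)

# A product of valid kernels is a valid kernel; this one encodes
# "periodic structure modulated by a slowly varying envelope".
def composed(s, t):
    return rbf(s, t, ell=2.0) * periodic(s, t, p=0.5)

x = np.linspace(0, 4, 200)
K = composed(x[:, None], x[None, :])                   # prior covariance on the grid
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))      # jitter for numerical stability
samples = L @ np.random.default_rng(0).normal(size=(len(x), 3))  # 3 prior draws
print(samples.shape)
```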
So, to repeat, you can take existing kernels and compose them according to certain rules and still get valid kernels. I think this is a pretty intuitive and nice visualization of a Gaussian process: you start with a prior, and the choice of kernel, before observing any data, gives you different priors. Choose different kernels, get different GP priors; and depending on the kernel you have chosen, observing data gives you different kinds of GP posteriors. At test time, to repeat, your test example lies at some point on the horizontal axis, and your predictive distribution is the vertical slice at that point, with everything else marginalized out. That slice is a Gaussian distribution whose mean follows the thick red line, and the border lines mark roughly the 95% range of that Gaussian. Yes, question? So the question, I guess, is when to use a Gaussian process. The answer is going to be pretty boring, as for almost all such questions in machine learning: see what works well on a cross-validation set. That is the most pragmatic answer, and it should be the answer you always go with: for any change you make, check what kind of performance you get on a cross-validation set. Yes, another question? The heat map on the right is showing the kernel matrix, the covariance matrix you get; in the posterior predictive distribution we ended up with a mean and a covariance, and I believe this is the posterior covariance. Yes, next question? So the question is whether the choice of kernel should also be something that works well on cross-validation, and the answer is yes. There are also techniques with which you can learn a kernel; we did not cover that today, but the book I referred to, Gaussian Processes for Machine Learning, gives a good overview of how you can learn kernels from data rather than just handcrafting them. Yes, question? Can a composition of kernels be interpreted as a composition of two functions with two inputs? In general, no, I would not think of it that way; you just get a different kernel. All right, that's about it for today.
Stanford CS229: Machine Learning, Summer 2019 (Anand Avati)
Lecture 20: Variational Autoencoders
Welcome back, everyone. This is lecture 20 of CS229, and the main topic for today is variational autoencoders. The variational autoencoder is probably one of the simplest deep generative models. Deep generative models are a very hot topic in machine learning right now, where we try to build generative models of our data using neural networks, and the variational autoencoder was one of the early models that made good progress in this field. It is also probably the model one should study first, because it has the key components that are used in fancier models. To study variational autoencoders, we will first look at simple autoencoders, which have a long history. Then we will switch gears back to expectation maximization and look at a few of its variants, because that gives good motivation for variational autoencoders. We will first study MCMC EM, where MCMC stands for Markov chain Monte Carlo; we will also have a quick look at variational inference, which is a kind of counterpart to Monte Carlo techniques; and then we will look at yet another variant of EM called variational EM, before switching gears to the variational autoencoder itself. That is the plan for today. In terms of the overall course, if we manage to cover this today, this will probably be the last math-intensive class. We will have one more class on Friday covering more general topics, like evaluation metrics and general tips for executing machine learning projects, which will not be as math-heavy. And next week we will review all the topics we have done in the course; the review will also be suggestive of the kinds of topics that are important for your final exam, so we will stress those topics so that you can focus on them. Yes, question? Is the final cumulative, or does it cover only the later assignments? The final is cumulative; since we did not have a midterm, it covers everything. A quick recap of what we covered in the last class. Last class we mostly dealt with the maximum entropy principle. The entropy of a probability distribution is defined as the expectation of the negative log of the probability value itself, and the maximum entropy principle says we should maximize the entropy of the distribution we are trying to estimate, subject to some constraints. Most of the time the constraints are that the expectation of some function over the space on which the distribution is defined should equal the corresponding empirical expectation, and these generally come from data; that is where data enters the maximum entropy principle, as the values the constraints must match. Subject to these constraints, we get an entire family, an entire class, of probability distributions that satisfy them.
For example, if T1 of x equals x and T2 of x equals x squared, the constraints we are expressing are that the first moment of the distribution should match the first moment of the data, the second moment of the distribution should match the second moment of the data, and so on. Subject to such constraints we generally have infinitely many probability distributions, because satisfying just two or three constraints is pretty easy; the maximum entropy principle then says that among the class of all candidate distributions satisfying the constraints, the one we choose is the one that maximizes entropy. Yes, question? Can you clarify what you mean by "moments"? The expectation of x is called the first moment, the expectation of x squared is the second moment, the expectation of x cubed is the third moment, and so on. So the objective we want to maximize is the entropy. I think somebody also asked why we maximize entropy rather than, say, variance. Suppose our probability distribution is defined on the range a to b, a finite support. If we want a distribution with high uncertainty throughout, the PDF should be flat, uniformly assigning density one over (b minus a) everywhere. If instead we maximized variance, the distribution we would get puts half the mass at a and half at b. That maximizes variance, but it does not mean we are maximizing uncertainty; maximizing variance is not what we want if the goal is to remain unbiased about our estimate of what the distribution is. So the maximum entropy principle tells us: satisfy the constraints, and among all distributions that do, choose the one with the highest entropy. And we saw that this is equivalent to a dual problem, where the dual problem is maximum likelihood: if we start from the other direction and perform maximum likelihood under the assumption that the distribution belongs to the exponential family whose sufficient statistics are exactly the constraint functions, then the two problems are equivalent. In other words, maximum entropy naturally gives rise to the exponential family of probability distributions. Then we saw the somewhat related topic of calibration, which is super important if you are building real-world predictive or forecasting models. Calibration is the property that predicted probabilities match observed frequencies. Say you predict some outcome with probability 80%, for example that it will rain tomorrow with probability 80%. If you collect the set of all predictions where the prediction was 80%, then in approximately 0.8 of those cases it should actually have rained; not more, not less. When the fraction of observed outcomes matches the predicted probability, the model is said to be well calibrated against that distribution. We also saw that calibration and accuracy are kind of orthogonal.
Related to calibration, there is the concept of a proper scoring rule. A scoring rule is a loss function that takes a forecast distribution and the actual outcome that was observed, and gives the forecaster a score based on that outcome -- the smaller the score, the better the forecast. A proper scoring rule is one that satisfies the following property. Take any two probability distributions P and Q, where Q is the true, real-world distribution; x's are sampled from that real world, and those samples are used to score the forecaster's predicted distribution P. Then the expected score is lowest exactly when the prediction is the true distribution. For any other predicted distribution, if the true events are sampled from Q, the expected score will always be higher than the expected score of forecasting the true distribution itself. That is called a proper scoring rule.

If we build models that optimize a proper scoring rule -- that is, if our loss function penalizes our prediction P using some proper scoring rule f, subject to a few other conditions -- it's easy to see that proper scoring rules encourage the model to predict calibrated probabilities, because the expected loss is minimized when the predicted probabilities equal the real-world frequencies of the data.

And we saw the connection between this proper scoring rule and maximum entropy: if we follow the maximum entropy principle, then as a consequence the loss function we get is negative log P of x, and that comes directly from the entropy objective. We are trying to maximize the entropy; equivalently, we can minimize negative log P of x. The negative log likelihood is just the loss function of the maximum likelihood objective. And f of (P, x) equals negative log P of x is a proper scoring rule, because if we plug minus log P of x into the propriety condition and bring terms to the other side, it becomes the statement that a KL divergence is always greater than or equal to 0 -- which is true, and which we've also seen in the homework. So essentially, the big picture is that the maximum entropy principle encourages us to make calibrated predictions. That's the big story from last class. Any questions on this before we move on to today's topics? OK, cool.
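Before switching gears, here is a small numerical illustration of the log score being proper -- a sketch, not anything from the lecture. It draws a "true" distribution Q and many random forecasts P over five outcomes, and checks that the expected log loss, the expectation over x from Q of minus log P of x, is never smaller than the score obtained by forecasting Q itself (Gibbs' inequality).

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_log_score(p_forecast, q_true):
    # E_{x ~ Q}[-log P(x)]: the average log loss the forecaster incurs
    # when nature draws outcomes from Q.
    return -np.sum(q_true * np.log(p_forecast))

q = rng.dirichlet(np.ones(5))          # the "true" distribution Q
for _ in range(1000):
    p = rng.dirichlet(np.ones(5))      # some other forecast P
    assert expected_log_score(p, q) >= expected_log_score(q, q) - 1e-12
print("forecasting Q itself always scored best: the log score is proper")
```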
So today we'll switch gears and talk about variational autoencoders, and the first step in this journey is to look at what autoencoders are in general. To study autoencoders, we again go back to neural networks. So far, the only kind of neural networks you've studied are those used in the supervised setting: we started with some input layer, then we had all these fully connected hidden layers, and we ended up with a single scalar, y hat. We compared it against the ground-truth label y, and out of those two we constructed a loss, which is a scalar, and then we minimized the loss. That gave us a scalar-valued function of all the parameters, and we optimized it by performing gradient descent on the loss function. To calculate the gradients with respect to all the parameters, we used the multivariate calculus chain rule, which is essentially the same as backpropagation. That's what we did in the supervised learning setting.

Instead, now, we are only given x's. Our training set is just a set of x's, x1 through xn -- there is no y. The goal with autoencoders is to introduce something called a bottleneck and learn to reconstruct the original data through it. What I mean by that is this. We start with the input layer being x, which is d-dimensional. Then we have a few fully connected layers that bring it down to some k-dimensional hidden layer -- call it z -- where typically k is much smaller than d. From there, we increase the dimensions of the hidden layers until we are back to a d-dimensional layer, which we'll call x hat. Our loss is to minimize the norm of x hat minus x: we want the output of the network to be the input itself.

This may sound trivial, because all the network has to do is copy the input to the output. But the challenge for the network is that the data has to pass through the bottleneck layer z, which has dimension k, much smaller than d. This encourages the model to learn some kind of low-dimensional representation of our high-dimensional input data, and then, starting from that low-dimensional representation, to map it back to the high-dimensional data itself. If the model is able to minimize this loss to a satisfactory level, then essentially it has learned to compress the data into some kind of latent or hidden state.

Let's name the parameters. Call the parameters of the first half of the network phi -- all the weights and biases of the layers up to the hidden layer we are interested in. Call the weights and biases of the layers from the hidden layer z all the way to the reconstruction x hat theta. Let's also name the two halves: the part of the network that takes x as input and outputs z is the encoder -- it encodes the data into the hidden representation z -- and the second half, which takes z as input and outputs x hat, is the decoder. The encoder is parameterized by phi and the decoder by theta.

So the loss is a sum over i from 1 to n, where small n is the number of examples: first encode xi with the encoder, parameterized by phi -- call the result zi -- and then feed that zi as input into the decoder, parameterized by theta.
The version that comes out of the decoder -- the original xi encoded through the encoder, and that encoding decoded back through the decoder -- is x hat i, and we want to minimize the norm between xi and x hat i, where the loss is a function of theta and phi. This objective gives you what is called an autoencoder. We call it an autoencoder because we encode something and decode it back to the same thing -- that's where the word "auto" comes in.

The way we go about training it is through backpropagation. The loss is the squared norm of a vector, so it's a scalar. Think of it this way: you start from the input, encode it to the hidden state, decode that back into x hat, and you are provided a second copy of x itself -- both are d-dimensional. Using those two, construct the loss to be x minus x hat, squared. That's a scalar loss in R, and it gives us a loss function against which we can perform backpropagation and train all the model parameters phi and theta -- the weights and biases in the first and second halves of the network. The backpropagation works exactly as we saw in the neural network lectures. Yes, question?

What is the benefit of doing this? We can do something similar with PCA, where you find the dimensions and project the data onto them. This would be a lossy encoding.

Good question -- how is this different from PCA? In PCA we do something very similar: start with d-dimensional data and project it down to a k-dimensional hidden representation, where the objective was to minimize the reconstruction distance for every projected point. The main difference between PCA and an autoencoder is that in PCA the transformation from x to z is strictly linear, whereas here you can have multiple hidden layers with multiple nonlinearities. In fact, if you drop the nonlinearities, you can do something very similar to PCA with this setup.

OK, so what is the benefit of using it? Is that the same as PCA?

It's not the same as PCA, because things here can be nonlinear. But why would someone actually want to do this with data in real life? The reason is that you want to learn some kind of useful hidden representation that has some latent meaning. We'll see later today how, with variational autoencoders, this z representation ends up having such a meaning.

All right, so this is the autoencoder. In simple terms, think of it as dimensionality reduction, or a way of learning a compact representation of your data.
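As a concrete sketch of the above -- the dimensions, layer sizes, and the random stand-in batch are made-up choices, not from the lecture -- a minimal autoencoder in PyTorch might look like this:

```python
import torch
import torch.nn as nn

d, k = 784, 32   # input dimension d and bottleneck dimension k (assumed values)

# Encoder (parameters phi): R^d -> R^k.  Decoder (parameters theta): R^k -> R^d.
encoder = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, k))
decoder = nn.Sequential(nn.Linear(k, 256), nn.ReLU(), nn.Linear(256, d))

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(64, d)                 # stand-in batch; real data would go here
for step in range(100):
    z = encoder(x)                     # compress through the bottleneck
    x_hat = decoder(z)                 # reconstruct
    loss = ((x - x_hat) ** 2).sum(dim=1).mean()  # ||x - x_hat||^2, averaged over the batch
    opt.zero_grad()
    loss.backward()                    # backpropagation through both halves
    opt.step()
```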
There are variations of autoencoders called denoising autoencoders. In a denoising autoencoder, you learn how to denoise your data by feeding in a noisy version of the input while asking the network, in the loss, to recover the original x. The network learns to denoise the training data and, hopefully, generalizes to denoising unseen data as well. So there are many variants of the autoencoder. Any questions? This is all we're going to say about autoencoders for now, and then we'll switch to a different topic. Yes?

For this model, what about the predicted label?

There are no labels here. The task is not to predict a label; the task is to pass the data through the bottleneck and reconstruct the original data itself.

If I consider z as the last [INAUDIBLE] of the first part, what will z output?

So the question is: if we consider z to be the output of the first half of the network, what should z output? And the answer is: whatever the network finds to be the optimal z that helps it recover x hat. We are not providing any supervision for what z should be. All we are saying is that whatever z comes out of the first half should be the z fed into the second half that enables a good reconstruction of x. Yes, another question?

Two questions. Are phi and theta related in any way?

No, phi and theta are just two different sets of parameters. They're not necessarily alike.

Should they be inverses in some way?

No, we don't presume phi and theta to be inverses.

And in the end, is the goal to somehow detach half the network and use it for compression?

Yes -- the eventual goal is to use only the first half on unseen data to get its hidden, compact representations. You chop the network at the bottleneck and use the first half as an encoder, where you're getting a compressed representation.

So then you re-train this with different numbers of layers and see what is optimal?

Right -- what's the dimension of z, how many layers do we have: those are all hyperparameters you want to tune. You train on your training data, see how well the reconstruction works on test data, and do your bias-variance analysis there. Yes, question?

Why don't we view the decoder as factor analysis [INAUDIBLE]?

The relation to factor analysis -- let's take that offline. It's tangential to our topic right now, but I'm happy to answer that after the lecture.

All right, so this is the autoencoder, and we'll come back to it in a few minutes. In the meantime, a few more related topics for our buildup. That was topic 1, autoencoders. Topic 2: MCMC EM. A quick recall of EM. In EM, we performed an iterative procedure alternating two steps, the E step and the M step, over and over until convergence. The E step: for all i, where i indexes your data elements, set Qi of z equal to P of z given xi under the current parameter theta. To make this clearer, let's also give time indices: at the t-th iteration, we use theta t.
And in the M step, we set theta t plus 1 equal to the arg max over theta of the ELBO. Writing the ELBO out in full, over all examples: the sum over i from 1 to n of the sum over zi of Qi t of zi times log of P of xi, zi parameterized by theta, divided by Qi t of zi.

On this we can make a few comments. First of all, we assumed that we could perform the posterior calculation -- that we could somehow compute P of z given x. That's an implicit assumption in EM, and it may or may not hold in practice. It's a big assumption that we quietly swept under the rug.

The challenge comes when you consider complex models. For example, suppose our model is: z comes from a normal distribution with mean 0 and identity covariance I of size k by k -- a continuous latent variable z -- and x given z is the output of a neural network with parameters theta that takes z as input. So z is a sample from a Gaussian, and we feed that z into a neural network -- imagine the second half of the autoencoder -- and out comes the data that we observe. In this kind of setting, the model is so complex that calculating P of z given x is pretty much impossible: there's no way to take an arbitrary neural network and compute the posterior of z given x in closed form. And it doesn't have to be a neural network -- it could be any complex model of xi parameterized by theta; it could be hierarchical, any kind of complex model. Now the question is: how do we perform EM in this setting to estimate our parameters theta, when we can no longer write down P of z given x in closed form?

Here we can make a few observations. First, in the M step of EM, we are holding Q fixed -- we saw this previously. The variable we are optimizing, theta, shows up in only one place; when we perform the arg max, this is the only theta we are adjusting, and everything else is a constant. So, as we mentioned earlier, the M step can always be written as the arg max over theta of the sum over i from 1 to n of the sum over zi of Qi t of zi times log P of xi, zi parameterized by theta. The two forms are equivalent, because log of a over b can be written as log a minus log b, and the log b term -- the denominator -- is a constant with respect to theta, so we can just drop the denominator inside the log.

Looking at it in this form gives another insight. Even though in the E step we say we want to calculate Q, the posterior distribution of z, we don't really need the density values themselves. The only way Q gets used in the M step is to construct an expectation: the M step is the arg max over theta of the sum over i from 1 to n of the expectation, with zi drawn from Qi t, of log P of xi, zi parameterized by theta.
So even though Qi appears in this expression, it is only used to take an expectation of this function. We don't really need to calculate the density by itself; we are only interested in Q for the purpose of performing this expectation. And using this insight -- that the only purpose of Q is the expectation -- we can approximate the M step: inside the arg max over theta of the sum over i from 1 to n, we replace the expectation with a Monte Carlo estimate of the expectation. Instead of integrating this function against the density of Q, we take many, many samples of z from Q and average the function across those samples.

Concretely, suppose we take some number of samples for each example -- capital T would be a confusing choice of letter, so let's call it capital M. The Monte Carlo estimate is 1 over M times the sum over small m from 1 to capital M of log P of xi, z m i parameterized by theta, where each z m i is sampled from Qi t. So we rewrite the expectation as a Monte Carlo estimate, where the z's are sampled from the Q's.

The Q here is still the posterior. But there are many techniques for sampling from the posterior of a complex probability distribution even when we don't know how to evaluate it -- techniques such as Gibbs sampling or Metropolis-Hastings. MCMC is a vast field, with many methods for drawing samples from a posterior even when we cannot exactly compute the density at a given point. And the law of large numbers tells us that as M goes to infinity, the Monte Carlo estimate converges to the true expectation.

So this is a variant of EM where, even though we don't know how to calculate the posterior distribution, we approximate it using Monte Carlo or sampling techniques. It's one way to work around complex or hard-to-compute posteriors. Any questions on this? Yes, question?

Can you explain the last step again?

In the last step, we are replacing the expectation with the Monte Carlo estimate of the expectation.

Where did the Q go?

Q is the distribution from which the zi's are sampled. Over here, zi was sampled from Q, and this was an analytical expression for the expectation. Instead, we replace it with the Monte Carlo estimate of that expectation, where the zi's are sampled from Q: we draw capital M such samples from Qi and construct the average of the function over those samples.

So expressing the expectation in both those ways is equivalent as M goes to infinity?

Yes -- as you take M to infinity, the two evaluate to the same value. That's the law of large numbers.
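Here is a small end-to-end sketch of MCMC EM on a made-up toy model (none of this is from the lecture): z ~ N(0, 1) and x given z ~ N(theta * z, 1). This particular model's posterior happens to be tractable -- z given x is Gaussian -- so we can draw exact posterior samples; in a genuinely intractable model you would swap that sampling line for Gibbs or Metropolis-Hastings. The M step for this model is a least-squares problem with a closed-form update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: z ~ N(0, 1), x | z ~ N(theta * z, 1).  Its posterior is
# z | x ~ N(theta*x / (1 + theta^2), 1 / (1 + theta^2)), which we use
# as a stand-in for an MCMC sampler.
theta_true = 2.0
n, M = 500, 50
x = theta_true * rng.standard_normal(n) + rng.standard_normal(n)

theta = 0.5                            # initial guess
for t in range(50):
    # "E step": M posterior samples z_{i,m} ~ p(z | x_i; theta_t) per example
    mean = theta * x / (1 + theta ** 2)
    std = np.sqrt(1 / (1 + theta ** 2))
    z = mean[:, None] + std * rng.standard_normal((n, M))
    # "M step": maximize (1/M) sum_m log p(x_i, z_{i,m}; theta) over theta.
    # Only the -(1/2)(x - theta*z)^2 term depends on theta, so the arg max
    # is an ordinary least-squares coefficient:
    theta = np.sum(z * x[:, None]) / np.sum(z ** 2)
print(theta)   # approaches theta_true, up to Monte Carlo noise
```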
Now, in the original EM we had a convergence proof: with every E and M step, the likelihood only increased. We saw that proof by constructing a lower bound, then a lower bound on that, and so on, and at every step we were guaranteed that the theta t's increased the likelihood. Here, that guarantee no longer holds, because this is an approximation of the lower bound and not the exact lower bound. Yes, question?

With Gibbs sampling, do we sample the conditional, or can we sample the joint?

With Gibbs sampling, you can get samples from the posterior.

And then the [INAUDIBLE]?

We get samples of the z's, and using those sampled z's, we can take the average of the log of the joint. Good question.

So this is one approach to addressing intractable posteriors: sampling. And there is a kind of counterpart approach for approximating complex posteriors, called variational inference. When you want to calculate posterior distributions, you basically have three choices. Choice number 1 is the exact posterior: use math, algebra, Bayes' rule, and calculate it exactly. If that is not possible, you are left with two other choices. Choice number 2 is to approximate it through Gibbs sampling or some other Monte Carlo approach, where you take samples and use them to take the expectation of whatever you need with respect to the posterior. Choice number 3 is variational inference.

Variational inference is another technique to work around these intractable posteriors. To understand it, we go back to the expression we derived in EM: log P of x is greater than or equal to the ELBO of x with some Q. We derived this using Jensen's inequality, and we left it in that form, saying the ELBO is a lower bound on the log probability of the evidence. But how much lower is it? That is: log P of x equals the ELBO of x plus what?

In fact, the answer is pretty straightforward. Move the ELBO to the other side -- the gap is log P of x minus the ELBO -- plug in the expression for the ELBO, and work through the algebra; it's very simple. What you get is the KL divergence between Q and P of z given x. That should not be surprising, because when Q equals P of z given x, we saw that the ELBO is tight and exactly equals log P of x. A priori it could have been the KL divergence from P to Q or from Q to P -- those are the two natural candidates -- but if you work out the algebra, it turns out to be the KL divergence from Q to P of z given x.

Our goal is to approximate P of z given x, and here we make the observation that the KL divergence between Q and P of z given x equals exactly this gap. That's the motivation behind variational inference. For variational inference, notice that log P of x has no Q term in it, so it is effectively a constant with respect to Q, while the ELBO term and the KL term each contain Q. And we want our Q to be exactly P of z given x -- that's the eventual, ideal goal: Q equal to P of z given x.
But instead, with variational inference, in order to make Q as close to P of z given x as possible, we maximize the ELBO with respect to Q. Make the observation that the KL term is greater than or equal to 0 -- for no value of Q does it become negative -- and if you choose the best possible Q, that term becomes exactly 0. So instead of somehow calculating P of z given x, we take the variational approach: approximate P of z given x by the arg max, over q in some family Q, of the ELBO of x and q. If we push the ELBO up as far as possible with respect to Q, we are bounded above by log P of x anyway -- that provides a ceiling -- and because the KL term is non-negative, maximizing the ELBO completely with respect to Q naturally gives us the Q that minimizes the KL divergence between Q and P of z given x. Does that make sense? So performing the arg max of the ELBO over Q gives us a Q that is as close as possible to P of z given x. This kind of approach -- maximizing the lower bound with respect to Q and using the resulting distribution as an approximation to P of z given x -- is called variational inference.

This is a complementary approach to the sampling-based ones. With sampling, you could take more and more samples, and in the limit of infinitely many samples you would recover the exact expectation. With variational inference, most of the time we will not recover the exact posterior; we end up with some approximation of P of z given x whose quality depends on how flexible the family Q is. Over there, the tool was sampling; here, the tool is optimization. With Monte Carlo techniques we do sampling; with variational inference we do optimization. There, in the limit, we recover the exact solution; here, we always get an approximate solution, depending on how flexible the family Q is. On the other hand, if you're familiar with Monte Carlo techniques, you never know how good your estimate is at any given moment: whether you've taken 100 samples or 10 million, you don't know how close you are to the true expectation -- you're only told that eventually you will get there. This variational technique, by contrast, converges: when you complete the optimization problem, you know when to stop. You know the solution is approximate, but you know when to stop, whereas with sampling you don't. Those are the trade-offs between Monte Carlo techniques and variational inference techniques.
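The identity behind all of this -- log P of x equals the ELBO plus the KL divergence from Q to the posterior -- can be checked exactly on a tiny discrete model. A sketch (the 4-value z, 3-value x, and the Dirichlet draws are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny discrete model so every quantity is exactly computable:
# z takes 4 values, x takes 3 values.
p_z = rng.dirichlet(np.ones(4))                   # prior p(z)
p_x_given_z = rng.dirichlet(np.ones(3), size=4)   # likelihood table p(x|z)

x = 1                                  # some observed outcome
joint = p_z * p_x_given_z[:, x]        # p(x, z) for the observed x
log_p_x = np.log(joint.sum())          # log evidence
posterior = joint / joint.sum()        # exact p(z | x)

q = rng.dirichlet(np.ones(4))          # an arbitrary Q over z
elbo = np.sum(q * np.log(joint / q))   # sum_z Q(z) log [p(x, z) / Q(z)]
kl = np.sum(q * np.log(q / posterior)) # D_KL(Q || p(z|x))

print(log_p_x, elbo + kl)              # identical up to floating-point error
```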
Yes, question?

Is this doing a different thing? The Gibbs sampling was for the M step, and this one is for the E step, because you are getting Q's, right?

Yes -- the Q that you end up with here, you're going to plug in over there.

So it's the first part and not the second?

Right, you can think of this as the first part: you recover Q using variational inference and use that Q to construct your M-step objective. With sampling, you constructed the M-step objective -- or rather, a proxy for it -- through samples; here, the proxy comes from the variational inference step. And this technique of constructing the M-step objective using a proxy Q instead of the exact posterior Q gives rise to something called variational EM. In both MCMC EM and variational EM, the goal is to work around the construction of the exact posterior: in MCMC EM we work around it using Monte Carlo or Gibbs sampling; in variational EM we work around it using variational inference. Yes, question?

You said the Monte Carlo [INAUDIBLE]. Can you not keep adding samples one by one and also keep [INAUDIBLE]?

I think the question is suggesting a way to check whether MCMC has converged. I won't go deeper into that -- we can discuss it offline -- but in general it's a hard problem. The technical term is the burn-in phase: you start with some initialization, you don't know how good it is, so you first discard some samples. Hopefully, after the burn-in phase -- and determining whether the burn-in phase is over is itself hard -- you are getting samples from the true posterior, and then it becomes a question of how many you want. But in general, deciding whether the burn-in phase is over is a hard problem.

All right, so the two approaches: in standard EM, you calculate the exact posterior using math and analytical expressions. If that is hard, you are left with two options. One is MCMC EM, where you approximate the expectation in the M step with a Monte Carlo expectation. The other is variational EM, where you construct an approximation of Q from some family, capital Q, by optimization, and use that recovered Q to construct the M-step proxy. Yes, question?

Why is it called MCMC?

Markov Chain Monte Carlo -- the techniques you use to obtain the samples are called MCMC techniques.

All right, so now a few more details about variational inference, and then we can start talking about the variational autoencoder. In variational inference, most of the time, the question is: how do we choose the family Q over which we optimize? Remember, the Q distribution we are trying to recover is a distribution over z, and z is a vector in Rk -- a k-dimensional vector. So our probability distribution must be a distribution over vectors of dimension k. And there aren't many convenient high-dimensional probability distributions -- far fewer than distributions over scalar values. A common assumption made in variational inference is that Q of z, for z in Rk, can be factored into components: Q1 of z1 times Q2 of z2, all the way to Qk of zk.
So we assume that the components of the z vector factor into independent scalar probability distributions. This common assumption has a name: the mean-field assumption, and variational inference that uses this kind of factorization is called mean-field variational inference. The deep roots of why we make this assumption are beyond our scope, but the first and most obvious reason is that it makes computation easier: you make strong independence assumptions, and you get computational ease in return. There are also good reasons it can be appropriate in certain cases. Mean-field variational inference actually comes from statistical physics, where variational inference was, I believe, invented, and where the mean-field assumption holds well. It may or may not hold for your data, but it's still commonly done because it makes computation easy. In fact, the term "mean field" itself comes from field theory in statistical physics, where one makes exactly these independence approximations. In any case, all the techniques that make this assumption -- that the family Q we maximize over factors into individual components -- are called mean-field techniques. And this mean-field assumption is something we're going to use in variational autoencoders.

Now, if you remember, in EM we constructed the ELBO, and the ELBO had a Q and a parameter theta. In the E step we would find the best possible Q, which is the posterior; in the M step we would update theta to recover the next estimate of theta. While performing each step, we held the other variable fixed: while calculating the next Q, we held theta fixed, and while calculating the next theta, we held Q fixed. That technique is called coordinate ascent. In coordinate ascent (or coordinate descent), the objective has multiple variables. You start with some initialization, hold all of them fixed except one -- or except a subset -- and optimize the free ones. Once you obtain the updated estimates, hold them fixed and optimize the others, and so on.

One way to picture it: put Q on one axis and theta on the other, and suppose this is the contour plot of the ELBO. We start at some point. Hold Q fixed and optimize with respect to theta -- we move to a new point. Then hold theta fixed and optimize with respect to Q -- another move. Then again with respect to theta, then again with respect to Q, and so on, updating only one of the axes at a time. We are trying to climb the hill of the ELBO's contour plot by moving in only one direction at a time -- either north/south or east/west -- continuing along one direction until we've reached the optimum along it, then switching directions, and so on, until convergence. So that's coordinate ascent.
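As a toy illustration of coordinate ascent -- the quadratic objective here is invented for the example -- each coordinate update below is an exact one-dimensional maximization, just as the E step and M step each exactly optimize one "axis" of the ELBO:

```python
# Coordinate ascent on a concave quadratic f(u, v), alternating exact
# one-dimensional maximizations, as in the contour-plot picture above.
# (u plays the role of Q, v the role of theta; the function is made up.)
def f(u, v):
    return -(u ** 2) - (v ** 2) - u * v + u + 2 * v

u, v = 0.0, 0.0
for t in range(20):
    u = (1 - v) / 2.0   # argmax over u with v held fixed (set df/du = 0)
    v = (2 - u) / 2.0   # argmax over v with u held fixed (set df/dv = 0)
print(u, v, f(u, v))     # converges to the global maximizer (0, 1)
```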
This coordinate ascent worked well with classical EM, where the optimization along the Q axis was not gradient ascent -- we were just calculating the posterior. Starting from the current point and calculating the new Q, the posterior corresponding to the current value of theta, is the E step; so the E step moves along one axis, and the M step moves along the other.

Now, with this picture of coordinate ascent, ask the question: can we do gradient-based optimization instead -- gradient ascent on the ELBO, or equivalently gradient descent on its negative? What does that even mean? Answering that basically gives rise to the variational autoencoder. In the variational autoencoder, we are going to maximize the ELBO using gradient steps.

Here's how we go about it. We first assume the model is the following: z comes from a normal distribution with mean 0 and identity covariance of size k by k, and x given z comes from a normal distribution whose mean is given by a function g of z parameterized by theta, with some fixed variance. So z is the latent variable, sampled from a normal distribution -- that's the prior -- and x given z, think of it as the likelihood.

Now, in order to perform the E step, we would need the posterior: if we had access to it, it would be P of z given x. But with this kind of model, where g is a neural network with parameters theta, it's very hard to calculate P of z given x -- you're probably never going to obtain P of z given x in closed form. And when we don't have an exact solution, we have two other options: sampling or variational inference. The technique used in the variational autoencoder is, not surprisingly, variational inference -- which is why we call it the variational autoencoder.

So we want to approximate P of z given x using variational inference, and for that we need to choose a family Q. The family is: Qi equals a normal distribution with mean q of xi parameterized by phi, and covariance the diagonal matrix built from v of xi parameterized by psi. What does this mean? We spoke about choosing a family of distributions for variational inference, and note that we would get a different Q distribution per example -- in EM, the E step is performed separately for each example. Here, instead, we recognize the fact that Qi depends on x, because Qi is supposed to approximate P of z given xi, and we approximate the Q distribution across all examples using a neural network. The neural network will take x as input and output the mean and variance of a normal distribution for the corresponding example i.
This is sometimes called amortized inference. The difference between EM and the variational autoencoder here is that in EM we calculated the parameters of the Q distribution for each example separately and independently: loop over the examples, and for each one separately calculate P of z given xi, set it aside, pick the next example, and repeat. With amortized inference, we are not going to calculate a separate posterior distribution for each example. Instead, we assume all the Q's are normal distributions, and the mean and variance of each Qi are functions of the x's -- and those functions are essentially neural networks. Feed x in as input, and the network outputs two vectors per input: one vector is the mean, the other is the variance.

So the Q network looks like this: take x as input, pass it through a few layers, and as the final layer you get mu, the mean. Call this network q, parameterized by phi; phi represents all the weights and biases of this network. The input to the network is an example xi, and the output is some vector mu i. xi is in Rd; mu i is in Rk. This mu i will be used as the mean of a normal distribution, and that normal distribution represents Qi. Yes, question?

Why do we need a bottleneck for the distribution?

There's no bottleneck here. We are just mapping from x to mu -- these could be any dimensions. We just need to go from x to z.

This is not an autoencoder?

This is not an autoencoder yet. It's just some network that takes you from x to z's. There is no bottleneck here. Yes, question?

Why don't we just assume [INAUDIBLE] k less than d here?

Yes, k is generally less than d. That's the general assumption, because latent representations are usually compact. So the z's will be in Rk, and the x's in Rd.

When using that mu for [INAUDIBLE] Q -- mu is the mean of the distribution Q in the new family, right?

Yes.

So why does it need to be in the lower dimension? Can it be in the same dimension, then?

Q is a distribution over z, and z is k-dimensional, so the mean has to be k-dimensional. The mean should be in Rk because z is in Rk, and the covariance should be in R k by k. Yes, question?

OK, so the formula you wrote there -- is that referring to the variational autoencoder if you just [INAUDIBLE]?

What we discussed at the very beginning was just autoencoders, not variational autoencoders.

OK -- then why are z and x given z justified to be like that?
This is the model we are starting with; we are not proving anything. We are assuming this is the model and seeing what the consequences are.

So z is not the compressed version -- not the hidden, latent version of x?

I would say, hold off on making connections to autoencoders for the moment; we'll put it all together and see the connection soon. For now, z is some hidden variable. You're right that it will eventually be the bottleneck layer, but for now, just as in factor analysis -- where z was some k-dimensional vector and x given z was something else -- assume we have a model like this. So we go from x, which is in Rd, down to Rk, and that gives the mean. Yes, question?

I don't understand how you got to the covariance matrix.

I'm coming to the covariance right now. That gave us the mean. For the covariance, we do something very similar, but instead of q we call the network v: again we start with x in Rd, pass it through some network, and out comes v, again in Rk. This v we then need to make positive, because standard deviations and variances are always positive. There are many techniques for taking an arbitrary number and making it positive: one approach is to square it, another is to exponentiate each element, and both are commonly done. Let's call the last layer u instead of v; generally the output of the variance network is taken to be e to the u, element-wise, to get positive values. That vector of length k is then converted into a diagonal matrix, with v1, v2, up to vk on the diagonal, which is k by k, and that matrix is used as the covariance matrix of Qi.

Why, though? It seems like you have two different models there.

We do have two different networks: one takes us from x to one parameter, the other from x to the other parameter. In practice, what's commonly done is to have a single network where everything up to the last layer is shared, and at the last layer you have separate branches -- one branch for the mean, one for the variance -- because we need two parameters for a normal distribution, a mu and a covariance, and you can get both as outputs of the last layer. That's totally fine too: there's not much difference between having two distinct networks and having one network whose last layer splits into the different parameters. Both work fine.

I can see how the q network describes the mean of the family Q, but I don't understand why the variance comes from a separate model.

So the question, essentially, is why we have two networks. We define Qi to be a normal distribution, and to define a normal distribution you need two parameters: a mean and a variance. For each example you need its own mean and variance; for another example, another mean and variance. And because we are in a high-dimensional space, the mean is a vector and the covariance is a matrix. Take a new example, and you need a new mean vector and a new covariance matrix -- the mean-covariance pair is per example.
Instead of separately estimating that pair for each example, we perform amortized inference: recognizing that the mean and covariance depend on x, we try to capture the relation using shared networks -- feed in x, and the output is the mean vector; feed in x, and the output is a covariance matrix. Rather than estimating all the means and variances separately, we learn the parameters phi and psi. Yes, question?

As you mentioned, in the case of q we parameterize by phi, and in the case of v we parameterize by psi. How can we use one network to get both?

Right -- with this notation we have two different networks: these parameters are phi, and those parameters are psi. I also mentioned that you can share all the layers and keep only the last layer separate; if you do the shared layers, then this notation doesn't literally apply, because the two will have shared parameters. In practice, what's commonly done is that you don't have two completely separate networks.

You completely made up q and v, and we say we get the mean and variance from them. But how are you going to verify those two networks without [INAUDIBLE]?

We'll come to that. We will have these two networks, which, when you start, are untrained and randomly initialized. You feed in an x; what comes out of one network, you use as the mean of the Q distribution for the E step, and what comes out of the other network, you convert into a diagonal matrix and use as the covariance matrix for the E step of that example. So for each example, you feed it once through each network, take the two parameters, plug them in, and construct the Q distribution for the E step of that example.

Now, because we are assuming a Gaussian distribution and a diagonal covariance matrix, we are making the mean-field assumption. Gaussian distributions have the property that if two components of a joint Gaussian vector are uncorrelated -- have zero covariance -- then they are necessarily independent. (The Gaussian is probably the only familiar distribution with this property; maybe there are others, but I think it's only the Gaussian.) So by constructing a diagonal matrix from a vector and using a normal distribution, we are effectively doing mean-field variational inference, assuming the components of z are independent: the outputs of q are the means of the components of z, and the outputs of v are their variances.
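Putting the amortized inference network into code -- a sketch with made-up dimensions; the log-variance head here is one standard way to keep variances positive, the same idea as the e-to-the-u trick above:

```python
import torch
import torch.nn as nn

d, k = 784, 16  # data and latent dimensions (assumed values)

class Encoder(nn.Module):
    """Amortized inference network: one shared trunk, two heads that
    output the mean and the log of the diagonal variance of Q_i."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, k)       # mean in R^k
        self.log_var_head = nn.Linear(256, k)  # log-variance, so exp() is positive

    def forward(self, x):
        h = self.trunk(x)
        return self.mu_head(h), self.log_var_head(h)

enc = Encoder()
x = torch.randn(8, d)                # stand-in batch of 8 examples
mu, log_var = enc(x)                 # per-example Gaussian parameters
print(mu.shape, log_var.exp().shape) # torch.Size([8, 16]) each: one Q_i per example
```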
Yes, question?

That squaring for the diagonal -- which elements are you squaring?

I'd say don't worry too much about the squaring: what comes out of the network is generally the standard deviation, and the variance is just the square of the standard deviation. That's why the notation mirrors the way we write mean and sigma squared.

But if it is diagonal?

Because it's a diagonal matrix, you can think of it as element-wise, yes. Yes, question?

Can we just train the model so that the output is the variance instead of the standard deviation?

You could. You can treat the output as the variance, or you can treat it as the standard deviation. It doesn't make much difference; both approaches work fine. Yes, question?

Just to confirm: phi and psi are both shared across all examples i, right?

Yes, good point -- the phis and psis are shared across examples. You feed different examples through the same network; the network stays fixed across examples, and its output gives the mean and variance of each example's Q distribution.

So now we basically have all the pieces to write out our objective, and there is one final trick required to finish it up. Remember, we first observed that EM is coordinate ascent, and now we want to do gradient-based optimization instead. The Q's here come from amortized inference: a single network with shared weights outputs the Q parameters for every example, rather than having different parameters per example, which is what standard EM calculates in its E step.

So now the ELBO is a function of phi, psi, and theta: the sum over i from 1 to n of the expectation, with zi drawn from Qi, of log of P of xi, zi parameterized by theta, divided by Qi of zi -- where Qi is a normal distribution with mean q of xi parameterized by phi, and covariance the diagonal matrix built from v of xi parameterized by psi.

For reference, the ELBO in standard EM had just Q and theta: the sum over i from 1 to n of the expectation, with zi from Qi, of log of P of xi, zi parameterized by theta, over Qi of zi -- where each Q was calculated separately for each example, in parallel. When we move to amortized inference, we don't carry Q separately anymore; instead of separate Q's per example, we have shared phi and psi across examples. The rest of the expression stays the same, except that Qi is no longer something calculated independently for each example: we feed each example into the networks, and the networks' outputs become the parameters of the corresponding Q distribution.

So now we are pretty much ready. In EM we would optimize this ELBO with coordinate ascent; now we're going to optimize this objective with gradient steps, and the parameters we want to optimize are phi, psi, and theta. We want to maximize the ELBO with respect to these parameters, which means we want our update rules to look like this.
We want theta to become theta plus some learning rate -- call it eta -- times the gradient with respect to theta of the ELBO of phi, psi, theta. And phi becomes phi plus eta times the gradient with respect to phi of the same thing; similarly, psi becomes psi plus eta times the gradient with respect to psi of the ELBO. We do this until convergence. In EM, we alternated E steps and M steps until convergence; in the variational autoencoder, we perform these gradient updates until convergence. Picture the same two axes as before: in EM they were Q and theta; now it's theta on one side and phi and psi together on the other. You start at some random location and take gradient steps. And most commonly we don't take full-batch gradient steps -- we do stochastic gradient descent.

Now the challenge is: how do we calculate these gradients? Once we can do that, we are effectively done. Start with the gradient with respect to theta. The gradient with respect to theta of the ELBO of phi, psi, theta equals the gradient with respect to theta of the sum over i from 1 to n of the expectation, with zi drawn from Qi, of log of P of xi, zi parameterized by theta over Qi of zi. How do we take the gradient of a term inside an expectation? We can move the gradient inside the expectation, because the distribution with respect to which we take the expectation does not depend on theta. So this equals the sum over i from 1 to n of the expectation, with zi from Qi, of the gradient with respect to theta of log P of xi, zi parameterized by theta -- and the denominator just cancels, exactly as in EM, where the denominator could be dropped when optimizing with respect to theta.

Next, factor the joint: P of x, z is P of x given z times P of z. So this equals the sum over i from 1 to n of the expectation, with zi from Qi, of the gradient with respect to theta of log P of xi given zi parameterized by theta, plus the gradient with respect to theta of log P of zi. The second term just goes to 0, because the prior over z does not depend on theta. And P of x given z, remember, is our decoder: this is the log likelihood computed through g, the decoder, and we handle it with backpropagation.

Now comes the challenge. That part was pretty straightforward: we could move the gradient inside the expectation, and everything inside was simple. But consider the other parameters. The gradient with respect to phi of the ELBO of phi, psi, theta equals the gradient with respect to phi of the sum over i from 1 to n of the expectation, with zi drawn from Qi, of log of P of x, z parameterized by theta over Qi of zi. But the Q, as we've seen, is parameterized by phi and psi. If we want the gradient of this objective with respect to phi, the distribution with respect to which we take the expectation itself depends on phi -- and in that case, we can no longer just swap the gradient and the expectation. This is where the key innovation of the variational autoencoder comes into the picture: the reparameterization trick.
The reparameterization trick is something we've seen already in the past: if z comes from a normal distribution with mean mu and standard deviation sigma, we can rewrite z as epsilon times sigma plus mu, where epsilon comes from a standard normal distribution. We're going to make use of this special property of Gaussians, also called the location-scale property, which lets us decouple a Gaussian random variable from its parameters: the randomness is contained entirely in a separate variable epsilon with mean 0 and standard deviation 1, and we take the parameters, scale the standard normal, and move its location to mu.

And now we're going to rewrite the objective. Yes, question?

So are you saying the distribution Qi is parameterized by phi?

The distribution Qi is parameterized by phi precisely because Qi's mean depends on phi.

So we rewrite the gradient with respect to phi as: the sum over i from 1 to n of the expectation of epsilon drawn from the standard normal, N(0, 1) -- instead of taking the expectation with respect to z, we take it with respect to epsilon, which has no parameters -- of log of P of xi and, in place of zi, epsilon times the standard deviation plus the mean. With simpler notation: log of P of xi, epsilon i times sigma i plus mu i, parameterized by theta, divided by Qi of epsilon i times sigma i plus mu i -- where mu i equals q of xi parameterized by phi, and sigma i comes from the diagonal matrix of v of xi parameterized by psi. Once we rewrite the expectation with respect to z as an expectation with respect to epsilon -- taking epsilon, scaling it by the covariance, and adding the mean vector that comes from q and v, which is exactly zi in both places -- the expectation no longer involves the parameters. That allows us to swap the gradient and the expectation, and once we swap them, the gradients can be taken in a straightforward way. Yes, question?

So [INAUDIBLE] our loss function -- shouldn't it be predicted mu minus x, squared, [INAUDIBLE]?

The question is asking, I guess, where the loss for the mean comes in -- the ELBO takes the place of that loss function. Let's piece it all together; that should hopefully answer it. So this gives us a way to take the gradient of the ELBO with respect to phi. Taking the gradient with respect to theta was pretty straightforward, because the expectation did not depend on theta in any way; for the other parameters, where the expectation depends on phi itself, we make use of the reparameterization trick, which rewrites the expectation over z as an expectation over a parameter-free epsilon. Yes, question?

Do you sample epsilon, or--

Yes, I'm coming to that. So now we've replaced the expectation with respect to z with an expectation with respect to epsilon.
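In code, the trick is essentially one line. Here's a tiny sketch (the loss is a stand-in for the term inside the expectation) showing that gradients flow back to mu and the log-variance through the sampled z:

```python
import torch

mu = torch.tensor([0.5, -1.0], requires_grad=True)
log_var = torch.tensor([0.0, 0.5], requires_grad=True)

# Reparameterization: z = mu + sigma * eps, eps ~ N(0, I).  The randomness
# lives entirely in eps, so gradients flow through mu and sigma.
eps = torch.randn(1000, 2)
z = mu + torch.exp(0.5 * log_var) * eps

loss = (z ** 2).mean()        # stand-in for the term inside the expectation
loss.backward()
print(mu.grad, log_var.grad)  # well-defined gradients despite the sampling
```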
Now, piecing this together: even though we were able to swap the gradient and the expectation, we are still left with an expectation. The gradients still sit inside an expectation, which in general can be a problem. What is done in practice is to approximate this expectation with Monte Carlo: take a sample z_i from Q_i and use it as the input to the decoder neural network (this is p(x | z), so it is the decoder), then perform backpropagation with respect to θ, training the output of the network to be x_i. The notes use g for the decoder, so let's stick with g. The loss function here is the maximum-likelihood objective for a multivariate Gaussian. Think of a neural network whose input is z and whose output is x̂ = g(z): x̂ becomes the mean of the distribution, x is the observation, and we assume a constant variance σ²I. You plug these into the log-probability of x_i under N(g(z_i), σ²I), and that log-likelihood, or rather the negative log-likelihood, becomes the loss. So the input is the z_i sampled from Q_i; you feed it in, get a mean, assume a constant variance, and plug everything into the multivariate Gaussian likelihood, whose density carries the usual 1/(2πσ²)^{d/2}-style normalizer out front.

That's how things work on the decoder side, where the z_i's are sampled from Q_i. But what is Q_i here? Q_i is the output of our encoder network. The gradient with respect to θ was for the decoder part of the network, and the picture of the two working together is this: z is latent, x is observed; the encoder takes us from x to z, and the decoder takes us from z back to x. In the variational autoencoder these are two different neural networks. The encoder has parameters φ and ψ and is used as a replacement for the E step; the decoder has parameters θ and plays a role like the M step. But we jointly optimize both rather than one at a time, and the way we optimize them is by maximizing the ELBO: the ELBO is a function of φ, ψ, and θ, and we calculate its gradient with respect to each and take gradient steps.
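As a concrete sketch of the decoder-side objective just described, assuming (as in lecture) a Gaussian likelihood N(g(z), σ²I) with constant variance; the function name and the σ² default are illustrative, not from the notes:

```python
import numpy as np

def gaussian_log_likelihood(x, x_hat, sigma2=1.0):
    """log N(x; mean=x_hat, cov=sigma2 * I) for one example.

    x_hat = g(z_i) is the decoder's output mean; sigma2 is the assumed
    constant variance. The second term is the log of the multivariate
    Gaussian normalizer (2 * pi * sigma2)^(d/2).
    """
    d = x.shape[0]
    return (-0.5 * np.sum((x - x_hat) ** 2) / sigma2
            - 0.5 * d * np.log(2 * np.pi * sigma2))
```

Note that maximizing this in θ with σ² held fixed is, up to constants, the same as minimizing the squared error between x and x̂, which is the connection made at the end of this section.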
To calculate those gradients, then: the gradient with respect to θ was easy, because we could just swap the expectation and the gradient operator. For the decoder, we can just swap. For the encoder, where the expectation depends on φ and ψ, we made use of the reparameterization trick. And note that even after the reparameterization trick, for both gradients we are still left with an expectation: we were able to swap, but we did not eliminate the expectation. What is done in practice is that those expectations of gradients are approximated with Monte Carlo estimates.

The way to think of it: we have the x's; the encoder takes you down to z; and from z, something takes you back to x. The encoder takes x_i as input and outputs two components, a mean μ and a covariance Σ as a diagonal matrix. Those two sets of parameters together define Q_i, our approximation to the posterior. From Q_i we sample a z_i (the gradient has that expectation in it, and the sample stands in for it), and we feed the sampled z_i as input to the decoder network to get the recovered, reconstructed x's. The encoder has parameters φ and ψ; the decoder has parameters θ. Together, the two networks let us construct the ELBO: wherever the ELBO contains p(x | z) p(z), that's the decoder network, and everywhere there is a Q_i, that's the encoder network. And the way we train the variational autoencoder is to maximize the ELBO, by calculating the gradients and taking gradient steps.

Yes, question? "Can you use any kind of autoencoder to do expectation maximization, if you calculate the gradients properly?" I would say: don't think of this as doing expectation maximization. This is an alternative to expectation maximization. EM is defined as the coordinate ascent approach, whereas here we are just fitting a latent variable model, and the relation to EM is this coordinate-ascent-versus-gradient-ascent interpretation. "Is coordinate ascent susceptible to saddle points or local optima?" Yes, it is, and the VAE is also susceptible to local optima, absolutely; these are not convex problems. All right, so that's pretty much the variational autoencoder.
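Piecing the pieces together, here is a minimal sketch of a single-sample Monte Carlo estimate of the per-example ELBO, assuming a standard normal prior p(z) = N(0, I) and treating `encode` and `decode` as hypothetical stand-ins for the two networks. No backpropagation is shown; an autodiff framework would differentiate this quantity with respect to θ, φ, and ψ.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mu, sigma2):
    # Log density of N(mu, diag(sigma2)) at x (sigma2 holds per-coordinate variances).
    return -0.5 * np.sum((x - mu) ** 2 / sigma2 + np.log(2 * np.pi * sigma2))

def elbo_one_sample(x, encode, decode, sigma2_dec=1.0):
    """Single Monte Carlo estimate of log p(x, z; theta) - log Q(z).

    `encode(x) -> (mu, sigma)` stands in for the encoder with parameters
    (phi, psi); `decode(z) -> x_hat` stands in for the decoder with theta.
    """
    mu, sigma = encode(x)
    eps = rng.standard_normal(mu.shape)
    z = mu + sigma * eps                                           # reparameterization
    x_hat = decode(z)
    log_px_z = log_normal(x, x_hat, sigma2_dec * np.ones_like(x))  # decoder likelihood
    log_pz = log_normal(z, np.zeros_like(z), np.ones_like(z))      # standard normal prior
    log_q = log_normal(z, mu, sigma ** 2)                          # encoder posterior Q_i
    return log_px_z + log_pz - log_q

# Toy usage with linear stand-ins (purely illustrative):
d, k = 4, 2
A, B = rng.standard_normal((k, d)), rng.standard_normal((d, k))
encode = lambda x: (A @ x, np.ones(k) * 0.5)   # (mu, sigma)
decode = lambda z: B @ z
print(elbo_one_sample(rng.standard_normal(d), encode, decode))
```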
One last note: because we assume a constant variance σ²I in the decoder, if you work out the math the reconstruction loss works out to be just the squared error, the squared norm ‖x̂_i − x_i‖². The encoder does not have a direct loss of its own: there is no direct supervision for the μ's and σ's. Instead, μ and σ push the z values toward a particular location and concentration, because we sample the z's from that distribution. You can think of the variational autoencoder as exactly equal to a simple autoencoder in the case where the covariances are 0 and we use the mean itself as z. In the variational autoencoder we output two sets of parameters, a mean, which tells you approximately where z will be, and also a covariance, so we are effectively adding some noise to z before taking it through the decoder. It's motivated through this ELBO-maximization theory, but in practice the way you actually implement it is as a simple autoencoder, except that at the z layer you add some noise in the form of the ε's that we sample. So that's variational autoencoders, and that pretty much wraps up our study of unsupervised learning. In the Friday lecture we'll be switching gears and looking at evaluation metrics and other general tips on how to carry out machine learning projects, and on Monday we'll start the review of the full course, focusing on the parts that are important for the final exam. All right, if there are any questions regarding this, feel free to walk up, and I'll be happy to answer them.
This is lecture 23 of CS229. Today we're going to continue the finals review that we started last class and finish it up, and that's going to be it; we might finish a little early today.

Continuing the final review: in the last class we started with supervised learning. We went through linear regression and all its different interpretations, such as minimizing the squared loss, the probabilistic interpretation, the projection interpretation, and the normal equations. Then we moved on to logistic regression, a model you can use for classification: logistic regression outputs, for each example, the probability that the class equals 1, and you can apply a threshold to convert that into a classifier. Then we spoke about Newton's method, another optimization algorithm that works well for convex or concave problems. The key summary for Newton's method is that it can be very efficient in terms of converging quickly, and it is plug and play: you don't need to specify whether you want to maximize or minimize a function. You just throw a function at it, and it optimizes by finding the nearest stationary point. If your function is convex, it automatically minimizes it for you; if it's concave, it automatically maximizes it for you; if it is neither, it just takes you to the nearest stationary point. That was Newton's method.

Moving on, there was this other algorithm we saw called the perceptron algorithm. The perceptron is a streaming algorithm; by streaming we mean an algorithm where you encounter one example at a time, a bit like the stochastic gradient descent setting. The perceptron has a very simple update rule: θ := θ + α(y − g(θᵀx))x, where g is just the indicator function that returns 1 if θᵀx ≥ 0 and 0 if θᵀx < 0. The idea is this. Say you have some positive examples and some negative examples, with the origin here and a θ vector currently pointing in some direction; generally you want θ to point in the direction of your positive class. If your answer is correct, that is, the prediction g(θᵀx) matches y, then the whole term y − g(θᵀx) evaluates to 0, and you don't change your parameters at all. However, if you make a mistake, say the correct answer was 1 and you predicted 0, then the update ends up adding a small scalar times x to your parameter vector.
Concretely: if θ points here and the misclassified example x sits over there, the decision boundary (the hyperplane perpendicular to the current θ) puts x on the wrong side, so x is being predicted as negative. So we take this misclassified x vector, multiply it by a small scalar α, and add that component, αx, to θ. With the updated parameter vector, the separating hyperplane perpendicular to it moves, and now x is correctly classified. The general idea, the larger take-home message, is that when you want a vector θ to be oriented closer to some desired vector, the obvious thing to do is add a small scalar times that vector to θ: (θ + αx)ᵀx will always be greater than θᵀx for α > 0. Adding a vector to θ makes θ more similar to that vector; that's the take-home message of the perceptron.

You also implemented the perceptron algorithm, I think in Homework 2, where we took this algorithm, analyzed it, and plotted the separating boundaries. And there is theory, which we did not cover in this course, showing that if a separating hyperplane exists, then no matter what order you present the examples to the learning algorithm, it will eventually find some separating hyperplane. As long as you follow this update rule, if a separating hyperplane exists the algorithm ends up recovering one, and once you have a separating hyperplane, all updates beyond that point are 0, so it has converged to one of the possible separating hyperplanes.
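As a small sketch of the update rule just recapped (α and the data values are illustrative):

```python
import numpy as np

def perceptron_update(theta, x, y, alpha=0.1):
    """One streaming perceptron step: theta := theta + alpha * (y - g(theta^T x)) * x.

    y is 0 or 1. If the prediction is correct the update is zero; on a
    mistake, theta moves toward x (missed positive) or away from x
    (missed negative).
    """
    g = 1.0 if theta @ x >= 0 else 0.0   # threshold indicator
    return theta + alpha * (y - g) * x

# Example: a misclassified positive example pulls theta toward x.
theta = np.array([1.0, -0.5])
x = np.array([-0.2, 1.0])
theta = perceptron_update(theta, x, y=1)
```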
After the perceptron, we moved on to the exponential family. The exponential family is a family of probability distributions with the general form p(y; η) = b(y) exp(ηᵀT(y) − a(η)), where y is the variable, T(y) is called the sufficient statistic (for most of the problems we consider, especially with our generalized linear models, T(y) is just y), η is called the natural parameter, b(y) is the base measure, and a(η) is the log partition function. The exponential family covers many different kinds of variables: discrete random variables, continuous random variables, positive-only random variables, integer-valued random variables. It is pretty flexible in terms of the supports the random variable can have.

As a consequence, when we define a generalized linear model, where we take an exponential family and set η = θᵀx, with θ the learnable parameter that we learn through gradient ascent or descent and x the inputs corresponding to y, the GLM gives us a more general form of regression, classification, Poisson regression, and so on: depending on the data type or support of y, we get all these different models. For example, when y is real-valued you get regression; when y is binary you get classification.

Based on this way of extending the exponential family to a GLM, where the natural parameter is set equal to θᵀx, we also showed some special properties. First, E[T(y)], or in the usual case T(y) = y just E[y], equals a′(η), the derivative of the log partition function evaluated at η. Similarly, Var(y), or Var(T(y)), equals the second derivative, a″(η). You showed this in Homework 1, problem 4. These two results extend to generalized linear models as well: E[y | x; θ] = a′(θᵀx) and Var(y | x; θ) = a″(θᵀx), a simple extension from exponential family to GLM. And using these properties, we worked out the Hessian of the log-likelihood. Summing over the examples, the Hessian of the negative log-likelihood is Σᵢ a″(θᵀxᵢ) xᵢxᵢᵀ; since a″(θᵀx) is a variance it is non-negative, and xxᵀ is positive semi-definite, so this Hessian is always positive semi-definite. Therefore the negative log-likelihood of a generalized linear model is always convex (equivalently, the log-likelihood is concave).

Yes, question? "Going from GLM back to the exponential family: what is a GLM, essentially, and how is it different from the linear model?" A GLM is what you get when you take an exponential family and reparameterize the natural parameter with the covariates of that example. In the exponential family you only have y's; there are no x's. If you want a predictive model where, given x, you predict y, the exponential family alone does not work, because you have only y. The way you introduce your inputs is by reparameterizing the natural parameter as θᵀx, where the x's are the inputs corresponding to y and θ is the learnable parameter. "What additional benefit does this reparameterization give you over, for example, feature maps?" Good question. Feature maps are orthogonal to this: you can still use feature maps, where instead of θᵀx you use θᵀφ(x), so you can introduce feature maps into generalized linear models. What the GLM gives you is a way to connect the y's with the x's; without it, the exponential family has only y's. "Just to make sure I understand: η = θᵀx means..." It means we assume the natural parameter equals θᵀx, and then y is sampled from the exponential family with that η. That's where the noise comes in: y is a noisy observation, the noise follows the exponential family, and the parameter of that family is θᵀx. Once you evaluate θᵀx you get an η, which determines the member of the family, and y is assumed to be a sample from it. So that's exponential families.
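A quick numerical sanity check of E[y] = a′(η) and Var(y) = a″(η), using the Poisson family: in exponential-family form its log partition function is a(η) = e^η, so both derivatives equal λ = e^η, and the sample mean and variance should both land near λ.

```python
import numpy as np

# Poisson in exponential-family form: eta = log(lambda), a(eta) = exp(eta),
# so a'(eta) = a''(eta) = exp(eta) = lambda: the mean and the variance.
eta = 0.7
lam = np.exp(eta)

rng = np.random.default_rng(0)
samples = rng.poisson(lam, size=1_000_000)
print(lam, samples.mean(), samples.var())   # all three should be close
```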
And then we saw, later in the course, this connection between exponential families and maximum likelihood, via maximum entropy. If you are given a dataset y⁽ⁱ⁾, i = 1 through n, performing MLE with some exponential family is equivalent to performing maximum entropy: maximize H(p) subject to the constraint that the expectation of some statistic functions, E_p[T(y)], equals the sample averages (1/n) Σᵢ T(y⁽ⁱ⁾). Subject to that constraint, if we maximize the entropy, the solution lies in the exponential family; the parameters of that exponential family are the same parameters you get by performing maximum likelihood on it, and the statistics you used for the constraints are its sufficient statistics. It's a duality result: maximum likelihood and maximum entropy are dual problems of each other. Yes, question? Well, maximizing the entropy subject to these constraints is the same as maximizing the likelihood of this dataset using an exponential family whose sufficient statistics equal those constraint functions.

Then we moved on to generative models. The algorithms we had studied until then modeled p(y | x). X was considered given; we did not assign any probability distribution to the x's, only to y. Those were our discriminative models: in discriminative models we just model p(y | x), whereas in generative models we actually try to model x as well. Our interest now is p(y, x), the joint, generally written as p(x | y) p(y), where p(y) is called the prior (the class prior, when y is discrete) and p(x | y) you can think of as the likelihood. We model both, and depending on the data type of x, x could be real-valued, as in GDA, or discrete-valued, as in naive Bayes. Those are the two generative algorithms we saw. Both have discrete y's, which means in a way we are performing classification in both; but in GDA x is real-valued, while in naive Bayes x is discrete.

For GDA, the model is this: p(y), the prior, is a Bernoulli with parameter φ, and p(x | y) is a normal distribution with mean μ_y (each class has its own mean) and a common covariance matrix Σ. The picture to have in mind: some x's that come from y = 0, with some covariance structure Σ; the ellipse you would draw around them essentially depicts that covariance Σ.
Then there is the other class, y = 1, which by assumption is also distributed as a Gaussian with the same covariance structure as y = 0. This is the picture to have in mind for GDA. Until we studied generative algorithms we were not even visualizing x; we focused only on modeling y. But now we assign probabilities to x directly, on the x₁-through-x_d axes. And because of this equal covariance, the posterior p(y | x) takes the form 1/(1 + exp(−something)). It's easy to see why: if both classes have this equal covariance structure, and suppose both have an equal number of examples, then the set of all points that get equal likelihood from the class-0 Gaussian and the class-1 Gaussian is some straight line, and that corresponds to a form expressible as logistic regression, which also gives linear separating boundaries. If the equal-covariance assumption is not met, so that instead each class has a mean and also a covariance of its own, then one covariance ellipse might look one way and the other another way, and the set of points with equal likelihood under both classes would no longer be a straight line; it would be a quadratic. So that's GDA: the posterior of GDA takes on the logistic regression form (you saw that in PS1, Q1), and we also briefly discussed when you'd want to use logistic regression directly and when you'd want to use GDA.

Yes, question? "Why did we make that assumption of equal covariance in the first place? We know how to do the analysis with different covariances; was it just to show that you get the logistic as a posterior? I'm guessing you don't get a logistic posterior otherwise." Well, in the unequal-covariance case, you can show that you do still get a logistic form for the posterior, but it's a logistic with quadratic features of x, that is, a logistic where you include quadratic features of your x's. "And why did you assume equal covariances initially?" The answer to why we assume something is almost always: because if you assume it, you get this. Most of the time an assumption buys you some kind of mathematical convenience, and then you measure up against real data to check whether the assumption was reasonable.
In general, that's going to be the answer for why we assume anything, in this course and beyond: making some assumptions gives you mathematical convenience, ease of implementation, convexity, or whatever, and the assumption may or may not hold true. You make the assumption, build the model, fit your data, and see whether the assumption was valid. Next question. "The significance of the discriminative/generative distinction isn't that apparent to me. When would we want to use a discriminative model and when a generative one?" Yes, I'm coming to that right away. So we asked: under what circumstances do you use logistic regression, and when do you use GDA? The answer: if you don't have a lot of data, and you are fairly sure the modeling assumptions hold, then generative models will be beneficial over logistic regression. However, if you have lots of data and you're not sure whether the modeling assumptions hold, you're better off using logistic regression. Logistic regression makes no assumption about how the x's are distributed; it only models p(y | x), with x given, so it is in general more robust to the kinds of x's you may actually encounter. But in settings where the assumptions hold, GDA is more sample-efficient: you need a lot fewer examples than with logistic regression.

That answer holds for generative models generally. When you build a machine learning model, you can give it information either through assumptions or through data. As long as your assumptions are true, feeding in those assumptions is beneficial. With generative models like GDA, you feed in many more assumptions, much more prior knowledge, and as long as those assumptions are true, you need a lot less data to make up the information. In discriminative models, you feed in far fewer assumptions (you say nothing about the x's at all), which means you make up for that lack of information with more data, and the model fits the data better in the sense that you impose fewer constraints on how the data is supposed to look. That's the common way to think about when to use generative versus discriminative models. So that was GDA.
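Before naive Bayes, here is a sketch of what fitting the GDA model just described looks like via maximum likelihood (the function name is illustrative):

```python
import numpy as np

def fit_gda(X, y):
    """Maximum-likelihood estimates for GDA with a shared covariance.

    X: (n, d) real-valued inputs; y: (n,) labels in {0, 1}.
    Returns class prior phi, class means mu0/mu1, shared covariance Sigma.
    """
    phi = y.mean()                   # Bernoulli class prior
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # Center each example by its own class mean, then pool the covariance.
    centered = X - np.where(y[:, None] == 1, mu1, mu0)
    Sigma = centered.T @ centered / X.shape[0]
    return phi, mu0, mu1, Sigma
```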
Then we moved on to naive Bayes, an algorithm commonly used for text classification. In text classification the inputs, the x's, are discrete: words that make up a sentence or a message. In naive Bayes we make the conditional independence assumption: x_i is independent of x_j given y. If you know the class (say in spam classification, y is 0 or 1, spam versus not spam), then the probability of observing one word is independent of the probability of observing another word. This does not mean x_i is independent of x_j: independence and conditional independence are orthogonal properties; being conditionally independent says nothing about being independent, and vice versa. This assumption may or may not hold in practice, but it so happens that even when it is not met, these models tend to work reasonably well with textual data, especially for classification. Yes, question? "Sorry, the assumption doesn't actually hold, right?" Right, it generally doesn't hold exactly.

We saw two different event models: the Bernoulli event model and the multinomial event model. In both, we assume p(y) = φ_y, the class prior: the fraction of all messages you encounter that are spam in general, before even looking at the content. In the Bernoulli event model, p(x_j | y) = φ_{j|y} is a Bernoulli distribution, where x_j indicates whether the j-th word in the vocabulary occurs in the message. That means we have one parameter per word in the vocabulary per class, and the parameter signifies the probability that the j-th vocabulary word occurs in a given message. In the multinomial event model, on the other hand, p(x_j | y) = φ_{j|y} is a multinomial (categorical) distribution parameter, and x_j refers to the j-th word position in the message. So in the Bernoulli event model we care about the fraction of messages in which a word occurs versus does not occur; we don't care how many times it occurs in that message. It may occur 10,000 times in a message, but we count it once. In the multinomial event model, we are instead building a histogram of the words that occur across all the messages of a given class.
Here, we count the number of times each word appears across all the spam messages, and similarly across all the non-spam messages, and normalize the counts to get a multinomial distribution over words. That's the multinomial event model. In the Bernoulli event model, we treat each word in the vocabulary as a separate problem and estimate the Bernoulli parameter: the fraction of spam (or non-spam) messages in which that word occurs. Yes, question? "The way I think about it, can you confirm: multinomial is the same as Bernoulli, but you concatenate all the non-spam messages into one big message, concatenate all the spam messages into another, treat them as two big messages, and then just do Bernoulli?" That's not the case. In Bernoulli we count the number of messages in which a word appears, so if you collapse everything into one message, the answer is always 0 or 1: does the word appear in that one large message or not? Whereas in multinomial, what you want is to count the number of times the word repeats across all messages. So the two need not be the same.

We also saw a concept called Laplace smoothing, a technique for handling rare words, words that occur very infrequently. The idea, for either event model, is to pre-count each event as having happened once, and then look at the data and start incrementing the counters from there. Instead of starting from a zero prior, where zero counts would mean assigning probability zero to a word ever occurring in a class, start as if you had already seen one count of each event. In the Bernoulli case: for each word, assume you've seen it once in a spam email and once in a non-spam email, then increment the counters by scanning the emails and checking whether the word occurred. Similarly, in the multinomial event model: assume every word occurs once in the spam pool of messages and once in the non-spam pool, construct the word histograms from each pool on top of those ones, and then normalize. That's Laplace smoothing, and that wraps up generative models. Those were the two models we saw: GDA and naive Bayes; GDA for continuous x's, naive Bayes for discrete x's.
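A minimal sketch of fitting the multinomial event model with Laplace smoothing, assuming the messages have already been turned into word-count vectors (all names are illustrative):

```python
import numpy as np

def fit_multinomial_nb(X_counts, y):
    """Multinomial event model with Laplace smoothing.

    X_counts: (n, V) word-count matrix (row i = word histogram of message i);
    y: (n,) labels in {0, 1}.
    """
    phi_y = y.mean()                              # class prior
    # Pretend each vocabulary word was seen once in each class (Laplace
    # smoothing), then add the observed counts and normalize.
    counts1 = X_counts[y == 1].sum(axis=0) + 1
    counts0 = X_counts[y == 0].sum(axis=0) + 1
    phi_j_given_1 = counts1 / counts1.sum()       # histogram over words, spam
    phi_j_given_0 = counts0 / counts0.sum()       # histogram over words, non-spam
    return phi_y, phi_j_given_0, phi_j_given_1
```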
After generative models, we moved on to kernel methods. The motivation for kernel methods is to have an efficient way to introduce feature maps. We saw feature maps in Homework 1, the last question, where we implemented polynomial features for linear regression, and we saw that by using feature maps, even though we're fitting a linear model, the hypothesis we get can be quite nonlinear, quite curvy. Linear regression is linear in its parameters, or linear in its features; it's not always linear with respect to the original data. So by introducing features we can get fairly complex nonlinear models, and kernel methods are a way to use these nonlinear features efficiently.

To define a kernel, we start with a feature map φ(x): R^d → R^p, where x ∈ R^d is the data given to you and p is the dimension of the feature space; importantly, p can be infinite. We then define a kernel based on this feature map: a function of two inputs, K(x, x′), is the kernel corresponding to a feature map if it evaluates to φ(x)ᵀφ(x′). A kernel takes as input two x's, before you map them to the high-dimensional feature space, and evaluates to the same value you would get by mapping them into that space and taking an inner product. Mathematically the two are equivalent, but computationally, the kernel generally performs a more efficient computation that gives the same answer as constructing the explicit feature map and taking an inner product.

Then there are a few properties of kernels. A function K is a kernel if it is symmetric and if, for any collection of examples x⁽¹⁾ through x⁽ᵐ⁾ (any possible examples you can come up with, not necessarily a training set), the m × m kernel matrix, whose (i, j) entry is the scalar K(x⁽ⁱ⁾, x⁽ʲ⁾) evaluated on every pair from the set, is symmetric and positive semi-definite. And we saw Mercer's theorem, which says that for a function K to be a kernel, it is necessary and sufficient that for every finite set of points the corresponding kernel matrix be symmetric and positive semi-definite.
So the kernel matrix being symmetric and positive semi-definite is both a property of kernels, which makes it a necessary condition, and, by Mercer's theorem, a sufficient condition: for any K, if this property holds for every finite set of points, then K must be a kernel function. One loose intuition for Mercer's theorem: any principal sub-matrix of a positive semi-definite matrix is itself positive semi-definite. You can think of a kernel as an infinite-dimensional PSD "matrix," and a kernel matrix as some choice of specific input values, evaluating the kernel at those points and extracting the result into a finite matrix. Mercer's theorem essentially tells you that this infinite-dimensional object is PSD if and only if every such finite sub-matrix is PSD, and vice versa. Yes, question? "Where do x⁽¹⁾ through x⁽ᵐ⁾ come from?" They can be arbitrary, any examples at all; they need not be a training set. It's just a property of the function. "Can you clarify that sub-matrix intuition?" Think of the function K as an infinite-dimensional matrix, where one input indexes the rows, the other the columns, and the entries are the values the function evaluates to. Now extract the evaluations at certain points, the rows and columns corresponding to x⁽¹⁾, x⁽²⁾, through x⁽ᵐ⁾: you're pulling a sub-matrix out of this infinite-dimensional matrix. Mercer's theorem is another way of saying the infinite-dimensional matrix is positive semi-definite if and only if any such sub-matrix is positive semi-definite. So that's Mercer's theorem, and that's kernels.
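A quick sketch verifying these properties numerically for the Gaussian (RBF) kernel; per the discussion above, any m points will do, not just training data:

```python
import numpy as np

def rbf_kernel(x, x_prime, gamma=1.0):
    # Gaussian (RBF) kernel, corresponding to an infinite-dimensional feature map.
    return np.exp(-gamma * np.sum((x - x_prime) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))              # any m points whatsoever

# m x m kernel matrix: evaluate K on every pair.
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])

print(np.allclose(K, K.T))                    # symmetric
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # eigenvalues >= 0, i.e. PSD
```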
And the kernel trick is this: if we can rewrite our algorithm so that the x's appear only through inner products, then we can replace those inner products with kernel functions, and that lets us redefine the parameters. Take the design matrix of features Φ(X), with n examples and p features. In the naive approach, we would define a parameter vector θ ∈ R^p and use gradient descent to keep updating it at each time step (θ at time 0, θ at time 1, θ at time 2, and so on) by minimizing the loss. With the kernel method, we flip it around and define one coefficient per example: call it β, with one element per example, which in a way sets the weight of that example, and gradient descent keeps producing updated β vectors instead. This is what allows infinite-dimensional feature maps. The naive method starts failing as p goes to infinity, because you would need to maintain a vector of that length, whereas with kernel methods you maintain a vector whose length equals the number of examples. We use the kernel to evaluate the infinite-dimensional inner products inexpensively, while maintaining only finite-dimensional coefficient vectors. That's the kernel trick. In the notes, we describe how to apply the kernel trick to linear regression, and in one of the homework problems you kernelized the perceptron algorithm. Not only did you kernelize it, you made it work in a streaming setting, where the β vector is extended as you keep encountering new examples.
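A rough sketch of one streaming step of a kernelized perceptron in the spirit of that homework problem; this is a reconstruction from the description above, not the homework's reference code. The prediction uses Σᵢ βᵢ K(xᵢ, x_new) in place of θᵀφ(x_new).

```python
import numpy as np

def kernel_perceptron_update(beta, X_seen, x_new, y_new, K, alpha=0.1):
    """One streaming step: keep one coefficient beta_i per example seen so far.

    Implicitly theta = sum_i beta_i * phi(x_i), so appending alpha*(y - g)
    for the new example is the perceptron update in feature space.
    """
    score = sum(b * K(xi, x_new) for b, xi in zip(beta, X_seen))
    g = 1.0 if score >= 0 else 0.0
    beta.append(alpha * (y_new - g))   # 0 if the prediction was correct
    X_seen.append(x_new)
    return beta, X_seen

# Usage: pass any kernel, e.g. the rbf_kernel sketched earlier.
```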
Then we briefly covered support vector machines. A support vector machine is a kernel-based classification algorithm; the kernelized perceptron is also one. The SVM, however, focuses on something called the geometric margin of the separating hyperplane. Algorithms such as logistic regression effectively maximize the functional margin, which is different from the geometric margin, and you saw the distinction in PSet 2, question 1, on the training stability of logistic regression: once logistic regression finds a hyperplane that separates the data, it gains functional margin for free just by scaling all the values up, so it keeps extending the margin toward infinity, scaling forever. With the support vector machine you don't have such problems, because you maximize the geometric margin, which is a genuinely geometric quantity.

The SVM also has this other nice property: the coefficient vector you end up with at the end of training is sparse. By sparse we mean most of the coefficients are zero, except for a few examples, and those few examples with non-zero coefficients are called the support vectors. The picture: say you have one class of examples over here and another over there. The SVM finds the separating hyperplane that maximizes the geometric margin between the two classes, and to decide the exact position of that hyperplane, all that matters are the nearest examples from both classes; those subsets of examples end up being the support vectors. The locations of the other examples don't matter, so they get coefficients of zero, while the nearest examples get non-zero coefficients. So support vector machines give you scalability in the number of features, via kernels, up to infinite dimensions, and also scalability in the number of examples: unlike other kernel methods, where you generally need to hold on to the entire training set at test time, the SVM's sparse coefficients mean you hold on to just the few support vectors at test time. That was support vector machines.

Then we moved on to another kernel algorithm called Gaussian processes. A Gaussian process is a kernel method for regression; the SVM and the kernelized perceptron were kernel methods for classification. The way we define Gaussian processes is to generalize Gaussian distributions to infinite dimensions. Where a Gaussian vector of some finite dimension has a mean vector and a covariance matrix, a Gaussian process is a distribution over functions: think of a function as an infinitely long vector, a continuous version of an array extending to infinity, with a mean function (again infinite-dimensional) and, instead of a covariance matrix, a covariance function k. Certain properties of the multivariate Gaussian make marginalization easy: to marginalize out one component, you just drop that component from the mean and drop the corresponding row and column from the covariance. So one way to think about applying a Gaussian process to a dataset, technically incorrect but very useful for understanding, is that you have this infinite-dimensional Gaussian vector and you marginalize out everything outside your training and test sets: you choose just the finite subset of points made up of the examples in your training set and the examples in your test set.
That condenses the infinitely long mean function into a finite-dimensional mean vector, and the covariance function into a covariance matrix. And that covariance matrix is obtained exactly the way you obtain a kernel matrix in kernel methods: by evaluating the kernel function. We saw from Mercer's theorem that a kernel matrix must always be positive semi-definite, and covariance matrices need to be positive semi-definite, so that matches up. Once we obtain this finite Gaussian, we just use the conditioning rules of Gaussian distributions: condition on the training set and obtain the posterior over the test examples we want predictions for. That's Gaussian processes. The mathematical notation can get a little messy, but essentially what's happening is just this: marginalize out everything that's not necessary and use the conditioning rules. The rules themselves are heavy on notation, but you plug the correct pieces into the conditioning rule and you get your predictive distribution on unseen examples. Any questions on that? Yes, question? "What determines which kernel we use?" Good question. The choice of kernel is a hyperparameter that you tune: you try different kernels and see which gives you good predictive power on a validation set. You generally play around with a few kernels and choose the one that works best. The question of which kernel to use is much like asking which feature map to use with linear regression; it's very similar.
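Putting the marginalize-then-condition recipe into code, here is a compact sketch of zero-mean GP regression; the noise jitter added to the training block is a standard numerical and modeling convenience, not something derived above.

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, kernel, noise=1e-2):
    """Posterior mean and covariance at test points for a zero-mean GP.

    Build the kernel (covariance) blocks over train and test points, then
    apply the Gaussian conditioning rule on the training observations.
    """
    K = lambda A, B: np.array([[kernel(a, b) for b in B] for a in A])
    K_tr = K(X_train, X_train) + noise * np.eye(len(X_train))   # train block
    K_te = K(X_test, X_test)                                    # test block
    K_cross = K(X_test, X_train)                                # cross block
    alpha = np.linalg.solve(K_tr, y_train)
    mean = K_cross @ alpha                                       # posterior mean
    cov = K_te - K_cross @ np.linalg.solve(K_tr, K_cross.T)      # posterior cov
    return mean, cov
```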
So you compute g of W2 a1 plus b2, where a1 is the output of the first layer, and so on. Eventually, we bring it down to one scalar, which we call y-hat. And corresponding to this x, there is a true label y. So you take the output of the network and the actual label and combine the two into some kind of loss, and this loss is a scalar. The outputs and labels could possibly be vectors: if you're doing multi-class classification, the network output would be the output of a softmax, so a full vector, and the true label would be a one-hot vector encoding the correct answer. But the loss is always scalar-valued; that's the common convention. We start with a scalar-valued loss, and now we want to calculate the gradient of this loss with respect to every parameter in every layer. To calculate that, we use an algorithm called backpropagation, and backpropagation is essentially just the multivariate chain rule of calculus. The way you want to think of backpropagation is to always start with one scalar parameter, say W_ij of some layer l, and calculate the partial of the loss with respect to that W_ij. This must always evaluate to a scalar, because you're calculating the gradient of a scalar with respect to a scalar. The intermediate steps will involve going from scalar to vector, from vector to vector, and finally from vector to scalar, and each of those intermediate vector-to-vector pieces is called a Jacobian. You'll generally see that the chain of Jacobians you encounter looks like a row vector, then matrix, matrix, matrix, and finally a column vector; once you multiply them out, it evaluates to a scalar. That pattern is common. For layers where you apply a non-linearity, the Jacobian is a diagonal matrix, and for computational reasons, instead of materializing a full diagonal matrix, you might just do an element-wise multiplication. So you perform this for every parameter in the network, and then you can identify common patterns and define the update rule for an entire matrix in a more compact form. All of that is basically just algebraic manipulation, but the goal is to calculate the gradient of the loss, which is a scalar, with respect to every scalar parameter. And once you have these gradients, it's basically just gradient descent: perform gradient descent on all these parameters simultaneously, take a small step to minimize the loss, and repeat until you converge. So that was neural networks and backpropagation. And after neural networks, we moved on to some learning theory, basically bias-variance analysis.
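To make that concrete, here's a minimal sketch of one forward and backward pass through a tiny two-layer fully connected network with a sigmoid non-linearity and a squared loss (the dimensions, initialization, and learning rate are arbitrary choices of mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                            # input, d = 3
y = 1.0                                           # true label
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)     # layer 1: R^3 -> R^4
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)     # layer 2: R^4 -> scalar

# Forward pass: a1 = g(W1 x + b1), y_hat = W2 a1 + b2, loss = (y_hat - y)^2.
z1 = W1 @ x + b1
a1 = sigmoid(z1)
y_hat = (W2 @ a1 + b2)[0]
loss = (y_hat - y) ** 2

# Backward pass (chain rule). The sigmoid's Jacobian is diagonal, so we
# implement it as an element-wise multiply instead of a full matrix.
dloss_dyhat = 2 * (y_hat - y)                     # scalar
dW2 = dloss_dyhat * a1[None, :]                   # gradient w.r.t. W2
db2 = np.array([dloss_dyhat])
da1 = dloss_dyhat * W2[0]                         # back through layer 2
dz1 = da1 * a1 * (1 - a1)                         # diagonal Jacobian, elementwise
dW1 = np.outer(dz1, x)
db1 = dz1

# One gradient-descent step on every parameter simultaneously.
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
print(loss)
```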
So bias-variance analysis is probably the most important concept in the entire course, and it's this analysis that distinguishes machine learning from, say, optimization. In general optimization problems, you're given some function or objective and you want to maximize or minimize it, and we use optimization techniques like gradient methods, Newton's method, or even just closed-form expressions. But what distinguishes machine learning from pure optimization is the bias-variance trade-off. What it actually means is that our end goal is not really minimizing the training objective itself. Yes, we are minimizing the training objective, but our real goal is to perform well on unseen data; we do the minimization of the training loss as a proxy, with the hope that we're going to do well on test data. So bias-variance analysis is a way to decompose the error we encounter at test time into sub-components: bias, variance, and irreducible error. This kind of decomposition holds roughly for all losses, but in the specific case of the squared error loss, we get a very clean decomposition: the mean squared error on a test example is the sum of the irreducible error, plus the bias squared, plus the variance. The fundamental assumption is that our data is noisy, and this noise affects our test error in two ways. First, the test example we encounter is itself noisy, and the noise in the test example contributes to the irreducible error. This tells us that no matter what model you choose, even the best possible model that can be imagined, it is going to incur some error at test time, because your test data itself is noisy. That's the irreducible error: no matter what model you have, you pay this penalty. Then there are bias and variance: you can think of the noise in the training data as contributing to the variance of the model, so training data noise contributes to variance. And bias is more or less telling you how inflexible your model is. Your data may be telling some story, a noisy story, about the pattern between your x's and y's, but the model you have may be limited to, say, linear models. Even if your data has a clear quadratic relationship between the x's and y's, if you happen to choose linear models, then your model is biased, because you're limited to straight-line solutions. So bias is mostly due to inflexibility, or limited capacity, of your model class, and variance is due to noise in your training data.
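Here's a small simulation sketch of that decomposition (the true function, noise level, and sample sizes are all invented for illustration): refit an intentionally inflexible linear model on many fresh noisy training sets, and measure how its predictions at one test point break down into bias squared, variance, and irreducible error.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                 # true (quadratic) relationship
x_test, sigma = 1.5, 0.3             # one test input; label-noise std dev

# Refit a degree-1 (linear) model on many fresh noisy training sets and
# record its prediction at x_test each time.
preds = []
for _ in range(2000):
    x_tr = rng.uniform(-2, 2, size=20)
    y_tr = f(x_tr) + rng.normal(0, sigma, size=20)
    coefs = np.polyfit(x_tr, y_tr, deg=1)
    preds.append(np.polyval(coefs, x_test))
preds = np.array(preds)

bias_sq = (preds.mean() - f(x_test)) ** 2   # inflexibility of the model class
variance = preds.var()                      # sensitivity to training noise
irreducible = sigma ** 2                    # noise in the test label itself
print(bias_sq, variance, irreducible, bias_sq + variance + irreducible)
```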
And so our goal is not to minimize the training loss, but to minimize the test loss, the test error. This test error has one component, the irreducible error, that you can do nothing about, so we focus on just bias and variance. And the action space we have includes somewhat contradictory actions. One action is to increase regularization, and increasing regularization reduces variance. But there's also the opposite action, reducing regularization, which fights bias. So in order to decide which action to take, you want a good sense of the contribution of bias versus the contribution of variance to your test error. A loose proxy for that is: think of bias as the training error, and variance as the gap between the test (or validation) error and the training error. Bias technically is not the training error, but for the purpose of choosing an action, it works as a sufficient proxy. If your training error is very high, you want to take steps that fight bias. So it's a reasonable proxy for the purposes of bias-variance analysis to think of bias as the training error, and variance as the gap between the test or validation error and the training error. We also discussed in the previous class which actions help fight bias versus which help fight variance. Before you take any action, you always, always want a breakdown of the test error and the training error, make a judgment call on whether you're facing a high-bias problem or a high-variance problem, and then take an action that reduces your overall test error. All right, so that's the bias-variance problem. And this methodology, this bias-variance way of thinking, holds for classification, holds for regression, holds for supervised and for unsupervised learning. It's a topic that permeates all the models and algorithms we've considered, and it's probably most useful when you're applying machine learning to a new problem in practice. The algorithm you're working with in practice might be one we've not studied in this class, for example random forests, but you can apply bias-variance analysis to that algorithm as well. It works for all algorithms. All right, so that was bias-variance analysis. Then we studied regularization, and I gave a Bayesian interpretation for regularization. Regularization is a way of adding a penalty for large values of your parameters, where the intuition is that large parameter values can result in very complex models, so to limit the complexity of our models, we limit the magnitudes of our parameters. So if our regular loss is, say, the sum from i equals 1 to n of (y_i minus h_Theta(x_i)) squared, we want to augment it with some kind of penalty for the parameters being too large, as in the sketch below.
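For instance, here's a minimal sketch of the 2-norm version of that augmented objective, ridge regression, together with its closed-form solution (the data and the value of lambda are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                 # n = 50 examples, d = 5 features
theta_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ theta_true + rng.normal(0, 0.1, 50)

lam = 1.0  # regularization strength (a hyperparameter you'd tune)

# Minimize sum_i (y_i - theta^T x_i)^2 + lam * ||theta||_2^2.
# Closed form: theta = (X^T X + lam I)^{-1} X^T y.
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(np.linalg.norm(theta_ridge), np.linalg.norm(theta_ols))  # ridge shrinks
```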
And there are several choices here. You may sometimes want to penalize the 1-norm of the parameters. We saw that adding the 2-norm penalty on the parameters has a Bayesian interpretation: it's performing MAP estimation with a Gaussian prior. And the 1-norm is equivalent to performing MAP estimation with a Laplace prior. In your homework, you also saw what the exact value of the lambda term should be, depending on the parameters of the Gaussian and Laplace distributions. So that was regularization. After that, we moved on to reinforcement learning. Reinforcement learning is slightly different from the other algorithms: in the other algorithms, we make the IID assumption, that each example on which we make a prediction is completely independent of the others. In reinforcement learning, instead, we are in a sequential decision-making situation, where the prediction we make in one situation results in some action being taken, that action decides the next state we end up in, and therefore decides the next situation in which our prediction is made. So if you think of each timestep in reinforcement learning as an example, the examples are not IID anymore; they're all correlated. To formalize reinforcement learning, we define something called an MDP, a Markov decision process. That's a tuple of states, actions, transition probabilities P_sa (in fact, a set of transition probability distributions), a discount factor, and a reward function. We can be in one of many states, defined by the set S. We can take one of many actions, defined by the set A. Depending on which state we're in and what action we take, we end up in a new state, and that transition, the dynamics of moving to the next state, is captured by the set of transition probability distributions P_sa. When we arrive at a state, we obtain a reward defined by the reward function. And there is a discount factor Gamma, which basically says that rewards obtained sooner are better than rewards obtained later; Gamma is generally a value between 0 and 1. So this is the formalism in which we attack the reinforcement learning problem. Based on this MDP, we define two related concepts: value and policy. A policy is a mapping from states to actions; it's like a rule book: if you're in a particular state, what action should you take? That rule book is called the policy. And the value V corresponding to a policy Pi at a given state tells us: if we start in state S and continue the trajectory by taking actions according to the policy Pi (meaning, in state S, take the action that Pi of S evaluates to; that action, according to the dynamics, takes you to a random new state; in that new state, again refer to the rule book and take the action the policy prescribes), and so on.
And if we repeat this process over and over, where each time you can end up in a different state according to the dynamics, what is the average accumulated sum of discounted rewards? That is what's captured by the function V^Pi. Yes, question. Did you just- [inaudible]? Just the concepts of policy and value. A policy is just a rule book, and the value is the long-term reward you accumulate by starting at S and following the rule book Pi. And these two concepts, policy and value, are related. If you're given a policy Pi, you can perform something called policy evaluation and get V^Pi; you can obtain V^Pi by solving a simple set of linear equations, which gives you the values for all the different states. And, I think I said it wrong a moment ago: if you're given a value function V, that value function implicitly defines a policy, where the policy is to act greedily to maximize the value of the next state. So Pi of S equals the argmax over a of the expectation of V of S-prime, where S-prime is drawn from P_sa: choose the action that maximizes the expected value of the next state. So a value function implicitly defines a policy, and a policy defines a value function. However, the subtlety is that these two mappings are not inverses of each other: a policy gives you a particular value function, but if you take that value function and compute the policy that acts greedily according to it, you will not in general get the same policy back. And that asymmetry is what's exploited in policy iteration. Policy iteration is an algorithm where we start with a random policy, evaluate the corresponding value function, take that value function and compute the best policy with respect to it, then re-evaluate the value function for the new policy, and go on until you converge. So that's policy iteration. Until your policy converges? Until your policy converges. And then there is this other algorithm called value iteration. In value iteration, we have this thing called the Bellman equation, and the goal is to estimate a function called V-star, the optimal value function. And V-star of S is the argmax over Pi of V^Pi of S: the best possible value you can have in a given state if you scan across all possible policies. That's the optimal value function, and if we plug this optimal value function into the greedy rule, the policy we recover is the optimal policy. Yes, question. Shouldn't that just be a max, since you want a value rather than a policy? Yeah, you're right. Thank you. So this should just be a max, right?
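Before moving on, here's a minimal sketch of the policy iteration loop just described, on a made-up toy MDP (random transition probabilities and an invented reward vector), with policy evaluation done as a direct linear solve:

```python
import numpy as np

# A made-up 3-state, 2-action MDP purely for illustration.
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a] = dist over s'
R = np.array([0.0, 0.0, 1.0])                     # reward per state

pi = np.zeros(n_s, dtype=int)                     # start with a fixed policy
for _ in range(50):
    # Policy evaluation: V = R + gamma * P_pi V is linear, so solve directly.
    P_pi = P[np.arange(n_s), pi]                  # transition matrix under pi
    V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, R)
    # Policy improvement: act greedily w.r.t. expected next-state value.
    new_pi = np.argmax(P @ V, axis=1)
    if np.array_equal(new_pi, pi):                # policy converged
        break
    pi = new_pi
print(pi, V)
```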
So the idea with value iteration is that you perform the update rule: set V of S equal to R of S plus Gamma times the max over a of the sum over S-prime of P_sa of S-prime times V of S-prime. This is also called the Bellman backup operator, and if you perform it over and over, you will recover V-star. The intuition to have here is: picture the space of all value functions, where a point is the vector V of S_1 through V of S_|S|, and somewhere in that space sits the optimal V-star. You can write the update as V gets B of V, where B is the Bellman operator. Take any two value functions and run both of them through the Bellman operator: the distance between the two outputs will always be smaller than the distance between the original two. That means the Bellman backup operator is a contraction mapping. So if you run value functions through the Bellman operator over and over, they converge toward each other. And whenever you have a contraction mapping, there exists a point called the fixed point, and here the fixed point is the optimal value function. So value iteration takes your value function from any point in the space, and by repeatedly applying the Bellman operator, you eventually converge to the fixed point. That's value iteration: running this update over and over. And we also saw a variant of this called fitted value iteration. In fitted value iteration, we limit ourselves to a class of value functions; this class could be, say, the outputs of a neural network or a linear model, some parameterized class. We start with some Theta that parameterizes a value function in the class, and we apply the Bellman backup operator to it. The Bellman backup operator takes us to some new value function, which may be outside our parameterized class. Then we project this new function back onto the class by fitting it with a function from our class, say by minimizing a least-squares loss. That's like projecting the new function back into the class. From there, again apply the Bellman operator, get a new function, project it back, and so on. This is called fitted value iteration, where you stay within this parameterized family of functions, and the iteration takes you closer and closer to V-star. But this algorithm is not guaranteed to converge, because it's not a true contraction mapping anymore; there's no fixed point for it. However, in practice it tends to work well. So that's fitted value iteration.
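And a minimal tabular sketch of that Bellman backup, on the same invented toy MDP as in the previous snippet:

```python
import numpy as np

# Same toy-MDP setup as before: P[s, a] is a distribution over next states.
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))
R = np.array([0.0, 0.0, 1.0])

V = np.zeros(n_s)
for _ in range(1000):
    # Bellman backup: V(s) <- R(s) + gamma * max_a sum_{s'} P_sa(s') V(s').
    V_new = R + gamma * np.max(P @ V, axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:   # contraction => fixed point reached
        break
    V = V_new
print(V)
```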
In your homework, you implemented value iteration for the inverted pendulum problem. We did not do fitted value iteration in the homework; you implemented plain value iteration, and you implemented it in a context where P_sa was not known. By running your simulator, you learned the transition probabilities as a separate problem, and using the estimated transition probabilities, you performed value iteration. Yes, question? What would fitted value iteration look like in code? Yeah, so how would fitted value iteration look in code? In the homework, you represented the value function as a vector whose length was the number of states, which meant you could set each element of this value function freely to any value, without worrying about the others. In fitted value iteration, you no longer represent your value function explicitly as an array. Instead, V of s is some h_Theta of s: s is the input to a model, and the output of the model is the corresponding value. That's fitted value iteration. So initially, when we were doing value iteration, we were using the Bellman backup operator to get V-star, but now we're trying to fit some Theta? Exactly. With the plain Bellman backup operator, we work directly with the array and apply the operator until the entire array converges. With fitted value iteration (we described the algorithm in class and it's also in the notes), we limit ourselves to a representation like h_Theta; it's no longer an explicit array. We apply the Bellman backup operator to obtain targets y, and then find a new Theta as the argmin of (h_Theta of s minus y) squared, something like that. So we are projecting the y's back into the Theta space; see the sketch below. So the fit that you do, that's, let's say, a linear regression; your value can be expressed as Theta transpose s, is the assumption? Exactly, this could be a linear model where you represent your value as Theta transpose s. So you lose some flexibility, but you also gain generalization: if you do well on one state and another state is very similar, you have a good sense of the value of the other state just because the two states are similar. Whereas with an explicit array representation, there's no hope of generalization whatsoever. Can you explain on-policy versus off-policy? On-policy versus off-policy is not relevant for the review; we still have a few more topics, so maybe I can talk about on-policy versus off-policy later. So, after reinforcement learning, we moved on to unsupervised learning, where we are given a set of examples, x's, but there are no corresponding y's, and our task is to learn some interesting structure. In this class we looked at two kinds of structure. The first one is: do the examples cluster in some way?
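Picking up the code question above, here's one minimal sketch of what fitted value iteration could look like, on a toy continuous-state problem I'm inventing for illustration (quadratic features, a two-action noisy dynamics model, and least-squares projection); as noted, this is not guaranteed to converge in general:

```python
import numpy as np

# A toy continuous-state problem, invented for illustration: states in [0, 1],
# two actions (step left / right with noise), reward peaks at s = 0.5.
rng = np.random.default_rng(0)
gamma, n_states, n_next = 0.9, 50, 20
states = rng.uniform(0, 1, n_states)

def reward(s):
    return -(s - 0.5) ** 2

def next_states(s, a):
    # Sample successor states to approximate the expectation over P_sa.
    step = 0.1 if a == 1 else -0.1
    return np.clip(s + step + rng.normal(0, 0.05, n_next), 0, 1)

def phi(s):
    return np.array([1.0, s, s ** 2])   # features: V(s) ~ theta^T phi(s)

theta = np.zeros(3)
for _ in range(100):
    # Bellman targets: y_i = R(s_i) + gamma * max_a E[V(s')], V via theta.
    y = np.array([
        reward(s) + gamma * max(
            np.mean([theta @ phi(sp) for sp in next_states(s, a)])
            for a in (0, 1))
        for s in states])
    # Projection step: least-squares fit of theta to the Bellman targets.
    Phi = np.stack([phi(s) for s in states])
    theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(theta)
```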
And k-means was the first algorithm we saw. In k-means, the idea is that you're given some dataset, x^(1) through x^(n), and that's all you're given: there are no labels, no y value telling you these examples are class 1, class 2, class 3. And the goal is to cluster the examples in some way. Those were clustering algorithms. Then there's another kind of unsupervised learning where we're interested in finding subspaces. Again we have our set of examples, x's with components x_1 through x_d, and no y's. In clustering, we're trying to group examples into different categories, whereas in subspace-finding problems we're trying to see whether the high-dimensional representation can instead be captured in a lower-dimensional one: for example, data that can be projected onto a one-dimensional representation that captures pretty much all the variance. That was PCA. So in unsupervised learning, we had clustering problems and subspace-finding problems. The two we just mentioned, k-means and PCA, we call non-probabilistic, and we have probabilistic equivalents: for clustering, the Gaussian mixture model, and for subspace finding, factor analysis. The rough intuition to have is: think of clustering as the unsupervised version of classification, and subspace finding as the unsupervised version of regression. That's a loose analogy. And we also saw the algorithm called EM, or expectation-maximization, which we applied to solve the probabilistic versions, both the GMM and factor analysis. Expectation-maximization is a really important algorithm, especially if you want to get into machine learning research: in some of the new hot topics like deep generative models, it's very important to have a good understanding of expectation-maximization and all its variants, because that's really where things start off. In EM, we start with a probabilistic model over x and z with some parameters Theta. We call the observed components of the model the evidence, generally denoted x, and the unobserved elements are called latent variables, which we call z. EM is an iterative algorithm where each iteration has two steps. In the E-step, we calculate the posterior: Q_i of z^(i) equals P of z^(i) given x^(i), at the current value of Theta. In the M-step, we update Theta to the argmax over Theta of the sum over i of the expectation, with z drawn from Q_i, of log of P of x^(i), z; Theta, divided by Q_i of z. Note that the Theta we're optimizing appears only inside the log-joint. This objective is also called the ELBO. And by performing these two steps over and over, we saw that the algorithm will eventually converge.
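Here's a minimal sketch of those two steps for a one-dimensional, two-component Gaussian mixture (the synthetic data and initialization are made up; the M-step updates are the standard weighted maximum-likelihood solutions):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Synthetic 1-D data from two Gaussians; the component label z is latent.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])

# Initial parameters Theta = (mixing weights, means, std devs), K = 2.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: Q_i(z) = p(z | x_i; Theta), the responsibilities, via Bayes rule.
    dens = np.stack([w[k] * gauss_pdf(x, mu[k], sigma[k]) for k in range(2)])
    q = dens / dens.sum(axis=0)
    # M-step: maximize the ELBO in Theta -> weighted maximum-likelihood updates.
    n_k = q.sum(axis=1)
    w = n_k / len(x)
    mu = (q @ x) / n_k
    sigma = np.sqrt((q * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)

print(w, mu, sigma)   # should land near (0.4, 0.6), (-2, 3), (1, 1)
```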
We saw a proof of convergence of the EM algorithm: this algorithm will eventually converge. So that was the EM algorithm. We also saw another model called ICA, independent component analysis. We used it to solve a source separation problem in audio. You did that in your last homework: you're given some mixture of different signals, you make a non-Gaussian assumption about the sources of those signals, and using that assumption you construct an unmixing matrix, estimated using maximum likelihood. The unmixing matrix you obtain is able to separate your audio mixture into the distinct original audio components. We saw this in the homework. And again in your homework, for the EM algorithm, we saw an extension to semi-supervised learning. I would say it's pretty important to understand the step that led from unsupervised to semi-supervised: how we got there, and what the moving parts were in the proof for going from unsupervised to semi-supervised. And what else? That's pretty much what we covered in this course. In the last few lectures, we also saw the variational autoencoder. The variational autoencoder is probably not going to be on your exam, but it's an important concept, especially for those who want to go into research. You have z, the latent variable, and x, the evidence. In EM, you would construct a Q, a posterior, for each example. With the variational autoencoder, we instead construct an encoder network, a neural network that takes x as input and outputs z. And the mapping from the latent variable to the evidence, which in simple models we call the likelihood function, in a VAE we call the decoder network, and we train the two together. So again, just as we saw with fitted value iteration: in the original EM, each example got its own Q function for the posterior, and the Q of each example could be set independently and freely, whereas with a variational autoencoder we represent all the posteriors with one parameterized class. Have a similar picture in your mind. This is called amortized inference: instead of calculating a separate posterior for each example, you construct a class where you feed x as the input and the output is the parameters of the corresponding Q distribution. Then in the last lecture, we covered evaluation metrics. We don't have time to review them in today's lecture, but they're pretty straightforward; you can just look at the slides. So that pretty much completes the course review. We've covered a lot of material in this course. A few ending remarks: with all the techniques you've learned here, you can use them for lots of different purposes. Your interest might be in research, or it might be more applied.
A general ending note is to recognize the power that machine learning has. You can build really powerful algorithms using machine learning. But hopefully you'll also put some thought into the problems you're trying to solve with these algorithms. Machine learning is a great tool; it gives you a lot of power, which means it can be used for good and it can be abused. Always put some thought, before you jump in and try to solve a problem with machine learning, into what the possible side effects could be. At the end of the day, machine learning looks for correlation among examples; it has no sense of causality. And so the problems that come with correlation, as opposed to causation, transfer over to machine learning with the same caveats and restrictions. Any bias that is in your dataset will transfer over into your predictions. By bias I don't mean the bias in bias-variance, but any other kinds of bias and fairness issues that may be in the dataset you've collected, which may not have been purely sampled. All those issues will transfer over to the predictions you make. So always be skeptical about the way your data was collected, and especially if you're going to build some kind of pipeline where actions are taken depending on the predictions you've made, you need to be even more skeptical about your model. At the same time, machine learning can be applied in lots of different fields. Any field that collects data and needs predictions can use machine learning, which means the scope of application is tremendous. Probably a good fraction of you might be interested in machine learning research, but most of you are probably interested in applying machine learning to different areas. The hope is that the lessons you've learned in this course, both the different tools you learned as different models and the general principles like bias-variance analysis, will help you solve problems in your respective fields. And for those of you who are interested in machine learning research: right now machine learning research is super hot, and there's a lot of scope for doing cutting-edge research. So good luck to those of you who are interested in doing research; I'm happy to chat about research offline. And to wrap things up: I hope you enjoyed this course and the material we've covered. I personally enjoyed teaching this course a lot. I myself learned a lot just by going through the process of teaching; probably half of the machine learning I know as of today, I learned in the last two months in the process of preparing for the lectures. And good luck on your finals. The final is designed to be hard, so study hard. It's not long, but it can be tricky. Yeah, with that, we'll end the last lecture. Thanks, everyone.
Stanford CS229: Machine Learning, Summer 2019 (Anand Avati)
Lecture 7: GDA, Naive Bayes, Laplace Smoothing
Okay, let's get started. Welcome back. We're going to continue on to Lecture 7 today, and the topic for today is generative learning algorithms. The plan is to cover Gaussian discriminant analysis, GDA, which is also on your homework, and then move on to Naive Bayes, another generative learning algorithm. Before we dive into generative models, let's look at what we've covered so far. So far we have covered linear regression, logistic regression, and generalized linear models. All three of these are what you may call discriminative algorithms. We call them discriminative algorithms because they directly model p of y given x, where y is the desired output and x is the input. In the supervised learning setting, we are interested in learning mappings from x to y. When y given x is assumed to be a normal distribution, what we get as a consequence is linear regression. When y given x is assumed to be Bernoulli, we get logistic regression. And when y given x is assumed to be in the exponential family, we get generalized linear models. We saw that generalized linear models are a broad family that includes the normal and Bernoulli as special cases, so linear regression and logistic regression are special cases of generalized linear models. And now we're going to start focusing on x. In discriminative algorithms, x is assumed to be given; in fact, when we were looking at those methods, each of the x_j features or attributes could have been real-valued, integer-valued, or Boolean, and we were agnostic to it. But now we're going to focus on x, and we will model p of x, y. Instead of modeling p of y given x, the conditional distribution, we're now going to model p of x, y, the full joint distribution. This joint distribution is also commonly called the model. And we see that it can be factorized using the chain rule of probability into p of x given y times p of y. And p of y is also commonly called the class prior. Why is it called the class prior? We will be focusing mostly on cases where y is discrete-valued, which means the generative models we're going to discuss today, Gaussian discriminant analysis and Naive Bayes, are best suited for classification problems. So y indicates the class of a given example, and p of y is just the marginal probability of y. You generally think of the class prior as the fraction of examples that belong to a particular class without even looking at the x features: overall, in your population, what fraction of examples belong to a particular class? And p of x given y is generally high-dimensional. In the discriminative case, y was a scalar, either real-valued or 0 or 1, whereas x is the set of all your features and can be very high-dimensional. So with p of x given y, we are trying to model a high-dimensional probability distribution, and that is in general a harder problem.
High-dimensional probability is in general harder for both analytical and computational reasons. Yes, question. [inaudible] So the question is: why do we want to do generative modeling? Why do we want to model p of x, given that x is generally given to us? Is that the question? Yeah. So there are a few reasons, and we will touch upon them towards the end of Gaussian discriminant analysis, when we compare using one versus the other; that may be a good time to re-ask the question. All right. So we want to model p of x given y. And at prediction time, when we want to make a prediction on a new example, we want p of y given x, where x is the new example. We can use the Bayes rule to swap it around: p of y given x equals p of x given y times p of y, over p of x. The numerator, p of x given y times p of y, is the direct chain-rule expansion of the joint, and we get a p of x in the denominator. And when y is binary, we can write the denominator as p of x given y equals 0 times p of y equals 0, plus p of x given y equals 1 times p of y equals 1. We can write it this way because p of x is exactly equal to that when y is binary. So this is the Bayes rule for getting p of y given x, and p of y given x is generally called the posterior distribution. When we want to make a prediction as a class rather than a probability: p of y given x gives you the probability of, say, y equals 1 given x or y equals 0 given x, but when we want to predict whether an example x belongs to the class y equals 0 or y equals 1, we generally do it this way: y-hat equals the argmax over y of p of y given x, the y with the highest posterior probability. Does y equals 0 have the higher posterior probability, or does y equals 1? This is the common recipe for making predictions, whether we're using Gaussian discriminant analysis or Naive Bayes or any generative model in general. And to do this prediction, observe that the argmax over y of p of y given x is the argmax over y of p of x given y times p of y, over p of x. To compute the argmax over y, note that p of x, the denominator, is just some constant: it evaluates to the same value for y equals 0 and for y equals 1. Which means this is the same as the argmax over y of p of x given y times p of y. So for the purpose of deciding which class to assign an example to, we don't even have to calculate the denominator. Yes, there's a question? [inaudible] Yeah, so the question is: p of y has a meaningful interpretation, the fraction of examples that are positive, but what does p of x mean? And p of x can mean different things in different problems. You can think of it as: what's the probability that you're going to encounter this input in your population? That's one way to think of it.
And the input could belong to class 0 or class 1, but given an input example, you're trying to answer the question: what's the probability of encountering that input? That's a rough intuition. Yes, question. What is the significance of maximizing over y? So what we're calculating is which value of y results in the highest posterior probability, because we want to take that value of y as our prediction. Does that make sense? If our goal is just to make a decision, choosing y equals 0 or y equals 1, then for that problem we don't even have to calculate the denominator. But if you want to calculate the probability of y given x, then we absolutely need the denominator. The two are different problems. If you want the exact probability, whether it's 0.77 or 0.5 or 0.2, you absolutely need the denominator. But if your problem is only to decide whether the probability of y equals 0 given x is greater or less than the probability of y equals 1 given x, just the comparison, then you don't need the denominator. Yes, question. [inaudible] So the question is whether the probability of x changes depending on which class it came from. In this case, we have marginalized out the class: we consider all the examples in one common pool, mixed across the classes, pick one example at random, and ask what's the probability of observing that input. So in this setting, we're going to cover two different algorithms today, and for both, y will be discrete. The first algorithm is GDA, Gaussian discriminant analysis, and there x is continuous. The other algorithm we're going to consider is Naive Bayes, where x is discrete, and the example we're going to use is text classification. So with discriminative algorithms, we had two different kinds of algorithms depending on whether y was continuous or discrete; over here, our focus is on x, and we get two different kinds of algorithms depending on whether x is continuous or discrete. But we're limiting ourselves to classification only, so y equals 0 or 1, while your x's can be continuous or discrete. Before we jump into our first generative learning algorithm, GDA, some terminology. We call the joint probability p of x, y the model, and it is common practice to define or express the model as a data generating process. This process is represented as a hierarchy of steps with which we generate our data, and it has a one-to-one correspondence with the way we factorize the joint probability. What does this mean? Let's look at GDA to get a concrete sense. So, GDA: this is our first generative model. In GDA, we define the model as a data generating process like this: y is sampled from a Bernoulli with parameter Phi, and x given y is sampled as follows.
x given y equals 0 is sampled from a normal distribution with mean mu_0 and covariance Sigma, and x given y equals 1 is sampled from a normal distribution with mean mu_1 and covariance Sigma. So this is a hierarchy of steps with which we can generate our data. First, sample a class variable from a Bernoulli distribution with parameter Phi; that decides whether the example we're about to generate belongs to the class y equals 0 or y equals 1. Then, if y equals 0, generate the input x according to a normal distribution with some mean and covariance; and if y equals 1, generate the example as a sample from a different normal distribution with mean mu_1. In this case we are sharing the covariance matrix, and we'll go into the details of why later; for now, assume both normal distributions share the covariance matrix but have different means. For this sequence of steps, the corresponding distributions are: p of y equals Phi to the y times (1 minus Phi) to the (1 minus y), which is the Bernoulli distribution, and

p(x | y = 0) = (2π)^(−d/2) |Σ|^(−1/2) exp( −(1/2) (x − μ₀)ᵀ Σ⁻¹ (x − μ₀) ),

and similarly for p(x | y = 1), with μ₁ in place of μ₀. Here y is in {0, 1} and x is in R^d. The parameters here are Phi, mu_0, mu_1, and Sigma: Phi in the Bernoulli, mu_0 and Sigma in the first Gaussian, mu_1 and the shared Sigma in the second. So y is discrete, either 0 or 1, and depending on which y we sample, we generate an input x from one of the two Gaussian distributions, which have different means but the same covariance structure. One way to think of generative versus discriminative algorithms: in discriminative algorithms, given an input example, you are asked to decide which class it belongs to. You're acting like a critic: given an x, classify it as one class versus the other. In generative algorithms, you're asked to generate the input: here you're acting like an artist, constructing an example x that looks different for different classes. So the goals are different in some way. In discriminative algorithms, all you need is good power to discriminate whether an example belongs to y equals 0 or y equals 1, which means you may limit yourself to looking for certain cues in the input that help you discriminate and ignore the rest of the example. In generative algorithms, you're asked to generate an entire example; you're expected to come up with a full description of what makes a good example of the y equals 0 or y equals 1 class. So in general, generative modeling is a harder problem than discriminative modeling.
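As a quick illustration of that data generating process, here's a minimal sketch that samples a dataset exactly this way, class first and then x from that class's Gaussian (all parameter values below are invented for the example):

```python
import numpy as np

# Parameters of a made-up GDA model: class prior, two means, shared Sigma.
rng = np.random.default_rng(0)
phi = 0.4
mu0, mu1 = np.array([0.0, 0.0]), np.array([3.0, 3.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])

# Data generating process: first sample the class, then sample x from the
# Gaussian belonging to that class.
n = 500
y = rng.binomial(1, phi, size=n)                 # y ~ Bernoulli(phi)
means = np.where(y[:, None] == 1, mu1, mu0)      # pick mu_{y^(i)} per example
x = np.array([rng.multivariate_normal(m, Sigma) for m in means])
print(x.shape, y.mean())                         # y.mean() should be near phi
```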
And you can also see that, because p of x, y equals p of y given x times p of x, in order to learn a generative model you have to learn a discriminative model and also learn something else. So generative modeling is in general a harder problem than learning just one of the conditionals: think of p of y given x as the discriminative part and p of x, y as the generative target, with something extra on top of the discriminative model. Yes, question. [Roughly: if someone gives us data and we learn all the parameters, will we just generate more examples rather than make predictions?] So the question is: with a generative model, given some training set, do we only learn how to generate new examples, and not make predictions? The answer is no, because we are studying these in the context of supervised learning, which means our end goal, at least in this part of the course, is still making predictions. What we are doing is learning the joint distribution, which means not only can we generate new examples, we can also discriminate. [inaudible] Exactly. When we want to discriminate, we use the Bayes rule to construct the posterior distribution, and the recipe above shows how to construct the posterior from the generative-model components; we can use that recipe to make predictions on new examples. All right, so back to GDA. We have defined the model: y is binary, 0 or 1, and x is continuous. The parameters of this model are Phi, which belongs to the Bernoulli; mu_0, which belongs to the first Gaussian; mu_1, which belongs to the second; and Sigma, which is common to both. So first we define the data generating process. To come up with this step, you don't need any data; this is just math. You're making an assumption that this is how your data is generated, and with this assumption, we have already defined our probability densities and determined what our parameters are. Then we observe data, and we use the data to fit our model and learn these parameters using maximum likelihood. To perform maximum likelihood, the first step is to write out the likelihood function, or the log-likelihood function. The likelihood is a function of the parameters, not the data: a probability density is a function of the data with the parameters held fixed, whereas the likelihood is a function of the parameters Phi, mu_0, mu_1, Sigma. It is the product over i equals 1 to n of the joint, p of x^(i), y^(i). And this is where generative modeling differs from discriminative modeling: in discriminative models, we use p of y given x, and in generative models, we use the joint p of x, y. That's the key difference: in generative models, we are interested in the joint probability p of x, y.
And in discriminative models, we only care about p of y given x. Yes, question? [inaudible] Yes, so the question is: what's the intuition about this? And the intuition is exactly that. "Probability" is the wrong word in this case, it's the likelihood, but if you focus on one term, it's the probability of finding that specific x, y pair if you sample it from your population of examples. [inaudible] The analogy in the case of p of y given x is: you're given an x. x was not sampled; somebody gave it to you. And now, for that given x, what values could y take? We limit ourselves to a universe with that one example: p of y given x. For that x, most of the time it could be, say, y equals 1, but sometimes it could be y equals 0, and that's the probability of y given x. [inaudible] So the question is: in the discriminative case, why did that product make sense? In the discriminative case we had the product over i equals 1 to n of p of y^(i) given x^(i). There, for each example we assumed x got decided somehow beyond our scope: somebody else decided x for us, and there were n such situations where x got decided to different values. We were only interested in modeling y for those specific examples, and the joint is just the product of the individual terms because of the independence assumption. Similarly here, the joint likelihood is the product of the individual terms because of the IID assumption. And we can write the log-likelihood as the log of the product over i equals 1 to n of, breaking the joint down, p of x^(i) given y^(i) times p of y^(i). We know what p of x^(i) given y^(i) and p of y^(i) are; they're right there in the model definition. So plug those definitions in, take the log-likelihood, and maximize it with respect to the parameters: set the gradient with respect to each parameter equal to 0, solve, and you get the estimates Phi-hat, mu_0-hat, mu_1-hat, Sigma-hat. We've done this for linear regression, and you can follow something very similar here. Yes, question. [inaudible] So the question is: in this case, why don't we just maximize the conditional of y given x? Going by this data generating process, y given x does not have a direct answer: the process only tells us what x given y is, and that's normal. For y given x, you have to apply Bayes rule, and to apply Bayes rule you need to be able to compute its individual parts, and to compute them you need to have solved for the parameter values. The way we solve for the parameter values is by maximum likelihood: maximize the joint likelihood, and you get the parameter values that fit the data.
And then you apply Bayes rule to get your conditional, p of y given x. Yes, question. [inaudible] Yep, coming to that next. Was there another question? Yes. [inaudible] So the question is: should I be dividing by p of x over here? We need the p of x only to get p of y given x. In generative models, for p of x, y, you don't need a p of x in the denominator; in generative modeling, the joint is our likelihood objective. That's the fundamental difference between discriminative and generative models: in a discriminative model this would be p of y given x, in which case you technically need a p of x, but with generative models we want the ability to generate a new, full dataset by just sampling from the model. So, moving on: take the derivative, set it equal to 0, and solve for the parameters. What we get is:

phi-hat = (1/n) * sum over i of 1{y^(i) = 1},
mu_0-hat = sum over i of 1{y^(i) = 0} x^(i), divided by sum over i of 1{y^(i) = 0},
mu_1-hat = sum over i of 1{y^(i) = 1} x^(i), divided by sum over i of 1{y^(i) = 1},
Sigma-hat = (1/n) * sum over i of (x^(i) − mu_{y^(i)}) (x^(i) − mu_{y^(i)})ᵀ.

These are the closed-form solutions you get if you take this objective, plug in the densities, treat it as a likelihood, take the partials with respect to each parameter, set the partials equal to 0, and solve. And this is what's on your homework. Yes, question? [inaudible] Okay, so let's go deeper into what each of these terms means. We are using the notation 1{expression}, an indicator, which equals 1 if the expression is true and 0 if it's false. y^(i) is 1 for some examples and 0 for others, so in phi-hat we count the number of examples for which y^(i) equals 1 and divide by n: the fraction of examples labeled 1. For mu_0-hat, we sum over those x^(i)'s for which y^(i) equals 0, because the indicator evaluates to 1 only for those examples, and in the denominator we count the number of examples with y^(i) equals 0. Similarly for mu_1-hat. Now, what's the deal with Sigma-hat? There are no indicator variables there, but observe that from each example we subtract the mean specific to that example: we have two means, one for y^(i) equals 0 and one for y^(i) equals 1, and we subtract from each x the mean that corresponds to its label. Does that make sense? Subtract from x the mean that corresponds to its label. These are the solutions, and in your homework you will derive them and prove that they are actually the solutions, which is basically just calculus. The computation can be a little verbose, but it's pretty straightforward.
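Here's a minimal NumPy sketch of those four closed-form estimators (the synthetic data at the bottom is invented just to exercise the function):

```python
import numpy as np

def fit_gda(x, y):
    """Closed-form maximum likelihood estimates for GDA (shared covariance)."""
    n = len(y)
    phi = np.mean(y == 1)                        # fraction of positive labels
    mu0 = x[y == 0].mean(axis=0)                 # mean of class-0 examples
    mu1 = x[y == 1].mean(axis=0)                 # mean of class-1 examples
    # Subtract from each x the mean that corresponds to its own label.
    mus = np.where((y == 1)[:, None], mu1, mu0)
    diffs = x - mus
    Sigma = diffs.T @ diffs / n                  # shared covariance estimate
    return phi, mu0, mu1, Sigma

# Tiny usage example with synthetic data (labels and inputs made up here).
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.5, 300)
x = rng.normal(size=(300, 2)) + 3.0 * y[:, None]
print(fit_gda(x, y))
```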
And yeah, so these are the MLE, the maximum likelihood estimates, for Gaussian discriminant analysis. So we start by defining a model. Get the corresponding densities, the probability distributions. Identify what our parameters are. Write out the log-likelihood objective as the product of the joints, because that's what we model in a generative model: the joint distribution. Then break it down into components; and the reason we broke this down as p of x given y times p of y is that this is the factorization we readily have at hand. Plug in the appropriate densities for p of x given y and p of y in this likelihood objective, take the derivative of the entire likelihood objective with respect to the parameters, set them equal to 0, solve, and you get these. Yes, question. Shouldn't that be Sigma hat? Yes, this should be Sigma hat. Thank you. Yes, question. [inaudible] let's say if we don't like indicator functions, can we just write that as a summation over [inaudible]? Yes. If we don't like indicator functions, you could, for example, write this as the sum over i such that y^(i) equals 1 of x^(i), divided by the sum over i such that y^(i) equals 1 of 1. That's just equivalent notation which calculates the same value. Here we are explicitly multiplying some of the x's by 0, and there we are just skipping over them in the index; both are equal notations. Yes, question? [inaudible] So the question is, can we have a normal for one class and a Poisson for another? You technically can; you can define your model any way you want. The question is whether that's meaningful. A Poisson is basically counts, and a normal is real-valued. So if you already know that the count-valued data belongs to one class and the real-valued data belongs to another class, then why even go through this exercise, in a way. But from a statistical point of view, you can absolutely do that, except those x's modeled as Poisson must be count-valued, positive integers, and those x's modeled as normal must be real-valued. So if you have a way to make your math compatible with some x's having a different data type than others, then yes, you can absolutely do that. Yes, question. [inaudible] as a concave function, or do we just take it as prior knowledge? So the question is, in this case, will this be a concave function? In this particular case, I believe it is, but in general it need not be. [inaudible] Yeah, so the question is, is this concave, and how can we do this without knowing? In this case it is concave, and so this works. In general, just taking the derivative and setting it equal to 0 may not work. But it so happens that in cases where closed-form solutions exist, the objectives tend to be concave, and in cases where you don't have closed-form solutions, you tend to do gradient ascent.
So you can optimize this using gradient ascent, and in that case you will reach some kind of a local maximum. Yes, question. [inaudible] Yes, we are coming to that next. Okay. All right, so let's move on. So here we see that by following the above steps we calculate the parameter values for the joint, and using the recipe given here, you can construct a posterior distribution for y given x by plugging in p of x given y, which is Gaussian, and p of y, which is Bernoulli, and you will get an expression for p of y given x. And it so happens that for the case of Gaussian discriminant analysis with the shared Sigma, the shared covariance, the posterior distribution takes the form of logistic regression. What do I mean by that? Let me write it out to make it more concrete. If you go through this exercise of calculating the posterior distribution for Gaussian discriminant analysis, we get that p of y equals 1 given x can be written in the form 1 over 1 plus exp of minus theta transpose x, where theta depends only on mu_0, mu_1, phi, and Sigma. This is also in your homework, where you show that given a Gaussian discriminant analysis model, the posterior distribution of y given x can be represented in this form, with theta expressed using only mu_0, mu_1, Sigma, and phi. So the intuition to have here is: say these axes are x_1 and x_2, and we have a mu_0 here and, in a different color, a mu_1 there. So we have two different means, one specific to each class, a blue class and a red class, and both of them share the same covariance structure. If we draw the contour lines, these are the contour plots for the density of class 1, and these are the contour plots for the density of the other class, and our examples are sampled from these Gaussians: some examples here, and similarly some examples for the red class. So we start with just the data points and we fit a GDA model, and your GDA model would learn these to be your mu's, or values sufficiently close to the two mu's, and the common shared covariance structure. And phi would just be the fraction of examples in, say, the red class out of the total. And the posterior distribution, which you can think of through the separating hyperplane corresponding to p of y equals 1 given x equals 0.5, which also equals p of y equals 0 given x: the set of all points x where the point has equal probability of belonging to the y equals 1 class or the y equals 0 class would be something like this.
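Carrying out that algebra (this is essentially the homework derivation; the helper below is my sketch, with the intercept kept separate rather than folded into x as an extra constant feature):

```python
import numpy as np

def gda_posterior_as_logistic(phi, mu0, mu1, Sigma):
    """Map shared-covariance GDA parameters to the intercept theta0 and
    weights theta of the equivalent logistic posterior
    p(y=1|x) = 1 / (1 + exp(-(theta0 + theta^T x)))."""
    Sinv = np.linalg.inv(Sigma)
    theta = Sinv @ (mu1 - mu0)
    theta0 = (0.5 * (mu0 @ Sinv @ mu0 - mu1 @ Sinv @ mu1)
              + np.log(phi / (1 - phi)))
    return theta0, theta
```

The boundary drawn on the board is exactly the set of x with theta0 plus theta transpose x equal to 0.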
And the claim here is that for any Gaussian discriminant analysis model with a shared Sigma, the posterior distribution of y given x can be represented as a logistic regression model where the thetas depend only on the parameters of the model. Which means any GDA can be written as a logistic regression model. Now, the reason the two covariance structures are similarly oriented and similar in shape and size is that they come from the same covariance matrix. These ellipsoids, their shape, orientation, and size, are completely determined by the covariance matrix: the principal axes are the eigenvectors of the covariance matrix, and the eigenvalues tell you how spread out the density is along each of those axes. Yes, question. [inaudible] Yes, phi also decides where the hyperplane will reside. The hyperplane will depend on phi, mu_0, mu_1, and Sigma, all of them. [inaudible] Exactly. So this theta over here is vector-valued, and the values of theta_0, theta_1, theta_2 will be a function of these parameters. Yes, question. [inaudible] Yeah, so p of y equals 1 given x; actually, you're right. It should just be written as the equality p of y equals 1 given x equals p of y equals 0 given x, not with the 0.5 (for two classes the two statements coincide anyway, since the posteriors sum to 1). Thank you. [inaudible] It's a line because you can represent the posterior in this logistic form. Yes, there's a question. [inaudible] So the question is, what if they have different covariance matrices? We made the assumption that both classes share the same covariance matrix. If the two classes don't have the same covariance matrix, you can think of it like this: one class, let's say, is concentrated over here. And, what did I write here? Sorry, I wrote this wrong; this should be the line that corresponds to p of x given y equals 0 equals p of x given y equals 1. Does that make sense? The set of all points which have the same density under class 1 and the same density under class 0: in the shared-covariance case that would be a straight line. If you have two different covariances, your examples may look like this: some examples are, say, more concentrated here, and the other class's examples are more dispersed. In this case, the set of all points that have the same probability under class 0 and class 1 would actually look like a curve instead of a straight line. So if both classes share the same covariance structure, this boundary is a straight line, and if they have unequal covariances, it can actually be a quadratic instead of linear. [inaudible] Yes, you can think of it as logistic regression with polynomial features. That's one way to think of it. Yes.
[inaudible] I would encourage you to plot it, and you'll see that it's going to curve towards the class which is more concentrated. Just take two covariance structures and plot the set of points which have equal probability under both classes, and it will come out like this. Any other questions? Okay. So, a given GDA model will uniquely determine a logistic regression model for its posterior distribution, whereas the converse is not true. If you have a logistic regression model, it need not be the posterior distribution of a GDA model; it could be, or it could be the posterior distribution of some other model, or of no model at all. Whereas if you have a GDA model, its posterior is always a logistic regression model. The GDA model is making a stronger assumption, in the sense that it assumes your data is actually distributed according to Gaussian distributions: the two classes actually have Gaussian distributions with a shared covariance structure. Now, if that assumption is true, then your GDA model will tend to be more asymptotically efficient, or more sample-efficient, compared to logistic regression. When the assumption holds true, GDA is a better model; you probably need a lot fewer examples to reach some level of accuracy compared to logistic regression. However, logistic regression does not make that assumption and tends to be more robust. It might need a little more data in cases where the assumptions are true, but it tends to work well even when the assumptions are not met; it's a pretty robust algorithm. And in practice, logistic regression should almost always be your first choice of algorithm to try on a given dataset, because the assumptions may or may not hold true. In cases where the assumptions do hold true, GDA will be slightly more efficient in terms of the number of examples required. And in your homework you'll see this phenomenon as well: you plot your data, some of which may be distributed according to Gaussians and some not, and then compare model performance between GDA and logistic regression in both situations. All of that is part of your homework question 1. Any questions before we move on to naive Bayes? Yes. Under what assumptions can you map logistic regression back to GDA? The extra assumption is that your x's are coming from Gaussians with the same shared covariance.
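To see that sample-efficiency point concretely, here is a small simulation sketch of mine, not from the lecture. It reuses the gda_mle and gda_posterior_as_logistic helpers sketched earlier and draws data that genuinely satisfies the GDA assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_train, n_test = 5, 50, 5000
mu0_true, mu1_true = np.zeros(d), 0.7 * np.ones(d)

def sample(n):
    # Data that really does come from two shared-covariance Gaussians.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d)) + np.where(y[:, None] == 1, mu1_true, mu0_true)
    return X, y

Xtr, ytr = sample(n_train)
Xte, yte = sample(n_test)

phi, m0, m1, S = gda_mle(Xtr, ytr)                 # sketched earlier
t0, t = gda_posterior_as_logistic(phi, m0, m1, S)  # sketched earlier
gda_acc = np.mean(((t0 + Xte @ t) > 0).astype(int) == yte)
lr_acc = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)
print(gda_acc, lr_acc)
```

With the model assumptions satisfied and only 50 training examples, the GDA-derived classifier usually matches or beats the purely discriminative fit; on non-Gaussian data the comparison can flip, which is the robustness point above.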
All right, one more question before we move on: [inaudible] The center line has to be linear? Yes: if the covariances are the same, then the set of points with equal probability under the two classes is a straight line, because when you solve 1 over 1 plus e to the minus theta transpose x equals 0.5, you find that theta transpose x must equal 0, and that is a line. All right, naive Bayes. So in GDA, we saw that the x's are real-valued; they live in some d-dimensional real space. But that does not account for all the different ways in which we encounter data. For example, our data may be text messages or emails, and we may want to build a spam classifier that decides whether a given text message or email is spam or not. In those cases your inputs are basically strings, a list of words, and that doesn't fit well as a multivariate Gaussian. So when our x's are discrete-valued, that's when we use naive Bayes. To use naive Bayes, x is discrete-valued, and most commonly we use it for text classification, for example spam filters. That's going to be the running example today. And this would be a good time to review conditional independence, a concept from probability. Given two random variables x_j and x_k, we say they are independent if p of x_j given x_k equals p of x_j. That is the definition of independence. We say they are conditionally independent when, conditioned on some other random variable, say y, p of x_j given x_k, y equals p of x_j given y. That is conditional independence conditioned on y. Now, can anybody tell me: does independence imply conditional independence? Does conditional independence imply independence? No. Both don't; the answer is no to both. In general, your random variables could be conditionally independent, and that says nothing about whether they are independent; and they may fail to be conditionally independent, and even that says nothing about whether they are independent. Conditional independence conditions both sides on a third variable like y, and in this case we are going to make use of conditional independence. This is just a refresher. Okay. So in naive Bayes, we are going to consider two different kinds of event models. The first model is called the Bernoulli event model. In the Bernoulli event model, the mental picture to have is: say we have a text message that reads "Buy our lottery." This is a sequence of characters, and we want to convert it into some kind of vector. The way we do that is what's called a multi-hot representation. Imagine a long vector where each component is associated with some word in the dictionary: say the first component belongs to the word "a," the second to "aardvark," the third to "aardwolf," and so on. Then the words "buy," "lottery," "our" show up somewhere in your dictionary, down to some last word, say "zymurgy." Here we assume English, but the concepts apply to any vocabulary. First we convert our sequence of words into a fixed-length vector, with 1s in the positions of words that appear in the message and 0s everywhere else. We assume a fixed set of words, called the vocabulary, and each word has a fixed location in this long vector. Given a new input text message, we convert it into this fixed-length vector. It doesn't matter how many times a word appears; the question is only whether it appears once or more.
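A minimal sketch of that featurization (the toy vocabulary and the function name here are made up for illustration):

```python
def multi_hot(message, vocab_index):
    """Bernoulli event model featurization: a 0/1 vector of length d = |V|
    marking which vocabulary words appear at least once in the message."""
    x = [0] * len(vocab_index)
    for word in message.lower().split():
        j = vocab_index.get(word)
        if j is not None:   # ignore out-of-vocabulary words
            x[j] = 1        # 1 whether the word appears once or many times
    return x

# A toy vocabulary; a real one would be the d words chosen for the model.
vocab_index = {"a": 0, "aardvark": 1, "buy": 2, "lottery": 3, "our": 4, "watch": 5}
print(multi_hot("Buy our lottery", vocab_index))   # [0, 0, 1, 1, 1, 0]
```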
Yes, question? [inaudible] Oh, we're going to talk about that. So here, if our vocabulary has d words, then x belongs to {0, 1}^d; that's how we write a d-dimensional vector where each element is either 0 or 1, and any given x_j is either 0 or 1. And we write out our model like this: p of y equals 1 equals phi_y; p of x_j equals 1 given y equals 0 equals phi_(j given y equals 0); and p of x_j equals 1 given y equals 1 equals phi_(j given y equals 1). This is our model. Just as in the Gaussian discriminant analysis case we had a hierarchical model that described how the data was generated, we have a model here. We assume there is one Bernoulli distribution parameterized by phi_y, a single value between 0 and 1, which tells us what fraction of our overall text messages are spammy versus not; it's just the class prior. Then we have two collections of Bernoulli variables: one set for the class y equals 0, where for y equals 0 we have a phi_j for each j from 1 to d, which tells us the probability that x_j equals 1 when the class is 0, and another full set of Bernoulli variables corresponding to y equals 1, which tells us the probability that x_j equals 1 when the class is 1. [inaudible] So what this means is: what's the probability that the word corresponding to index j will show up in an email that is spammy; and the other set tells you the probability that the word indexed by j shows up in an email or text message that is not spammy. Yes, question? [inaudible] So what's the number of parameters here? Here we have one parameter, here we have d parameters, and here also we have d parameters, so 2d plus 1 parameters. Good question. Does it make sense? We have one parameter for the class prior: in the entire population, what fraction of messages are spammy versus not.
And then, conditioned on the class, by limiting ourselves to, say, only the spam emails, we have one parameter per word that tells us the probability the word will appear in a spam email, and likewise the probability that it appears in a non-spammy one. Yes? [inaudible] So there are d such distributions, one for each j; j runs from 1 to d, so you have d such Bernoulli distributions. [inaudible] I'm sorry, I didn't get the question. [inaudible] No, there are two different sets of variables, two different sets of distributions. So this is our model, and we can write out the likelihood as a function of phi_y, phi_(j given y equals 0), and phi_(j given y equals 1). Think of it as one parameter here, one set of d parameters here, and another set of d parameters there. The log-likelihood can be written as the sum over i equals 1 to n of log p of x^(i), y^(i) given the full set of parameters, and the likelihood can be written as the product over i equals 1 to n of p of y^(i); phi_y, times the product over j equals 1 to d of p of x_j^(i) given y^(i); phi. And the way we arrived at this term, the product over the different x_j's, is by using the conditional independence assumption. Basically, p of x_1, x_2, up to x_d given y, which is p of x given y, can be written using the chain rule as p of x_1 given y, times p of x_2 given x_1, y, times p of x_3 given x_1, x_2, y, and so on. With the conditional independence assumption, the second factor becomes p of x_2 given y, the third becomes p of x_3 given y, and so on, and that product is what's written here. So just as the IID assumption lets us factor the likelihood of the full data into the product of the individual probabilities, conditional independence lets us break down this full collection given y into the product of the individual terms. So this is the likelihood. Each of these factors we know is a Bernoulli, and we know the probability mass function of a Bernoulli; plug them in, take the partials with respect to each parameter, set them equal to zero, and by doing that we get the maximum likelihood estimates. The MLE estimate is phi_(j given y equals 1) equals the sum over i equals 1 to n of the indicator that x_j^(i) equals 1 and y^(i) equals 1, divided by the sum over i equals 1 to n of the indicator that y^(i) equals 1. What does this mean? The probability that the jth word in the vocabulary shows up in a spammy email is the number of spammy messages in which the word appears divided by the total number of spammy messages. The expression looks a little cryptic, but it's actually pretty simple: it scans over every email or text message, i equals 1 to n, counts the number of messages in which the jth word appears and the message is spammy, and divides by the total number of messages that are spammy. It's just calculating in what fraction of the spammy emails the jth word appears. The syntax looks a little cryptic, but the idea is very simple.
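In code, those indicator ratios collapse to per-class column means; a minimal numpy sketch (my naming, not from the lecture):

```python
import numpy as np

def bernoulli_nb_mle(X, y):
    """Unsmoothed MLE for the Bernoulli event model.

    X: (n, d) 0/1 matrix, X[i, j] = 1 iff word j appears in message i.
    y: (n,) 0/1 labels (1 = spam).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    phi_y = np.mean(y == 1)
    # Column means over each class: the fraction of that class's messages
    # containing each word, exactly the indicator-ratio formulas above.
    phi_j_y1 = X[y == 1].mean(axis=0)
    phi_j_y0 = X[y == 0].mean(axis=0)
    return phi_y, phi_j_y0, phi_j_y1
```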
Yes, question. This phi? So this phi is basically the full collection of parameters, just written as phi; all 2d plus 1 of them. Yes, question. [inaudible] Yeah, and similarly, phi_(j given y equals 0) equals the sum over i equals 1 to n of the indicator that x_j^(i) equals 1 and y^(i) equals 0, where this is the logical "and," divided by the sum over i equals 1 to n of the indicator that y^(i) equals 0. And finally, phi_y equals 1 over n times the sum over i of the indicator that y^(i) equals 1. So the phi_y parameter is estimated as the number of spammy messages divided by n. For each class, spammy versus not spammy, you get a full collection of Bernoullis, one per word, estimated as the number of messages in that class in which the word appears divided by the total number of messages in that class; it's the same definition with y equals 1 and y equals 0 swapped. Yes, question. [inaudible] This one? So here we just broke the joint into factors: we start with this joint and write it as p of x^(i) given y^(i) times p of y^(i), and this second factor we factorize using the conditional independence assumption. [inaudible] Yeah, this one is parameterized by phi_y. Yes, question. [inaudible] So what does the conditional independence assumption mean here? It means that once you know whether a message is spammy, the probability of word a appearing is independent of whether word b appeared in it. [inaudible] Yes, if certain words always appear together, that does violate the conditional independence assumption. But we're going to make that assumption anyway, just to keep the math simple. Yes. [inaudible] Yes, this ignores the order in which the words come. [inaudible] Yes. In a spammy email, some words are not necessarily spammy, like "the" and "and." But what we see is that this set of assumptions generally tends to work well in practice. The intuition to have is that if a word has no indicative power, it contributes roughly equally to both classes and its effect cancels out; common words get equal weight in both classes, while words with high indicative power of spamminess bump the probability up. So that's the maximum likelihood estimate, and from this, how do we make predictions? We follow MLE, estimate all the parameters, and once we have estimated the parameters, we make predictions using Bayes rule. Bayes rule tells us: p of y equals 1 given x equals p of x given y equals 1 times p of y equals 1, divided by p of x; and the denominator expands to p of x given y equals 1 times p of y equals 1, plus p of x given y equals 0 times p of y equals 0. Now, to make a prediction on a new email x, you first convert it into a d-dimensional feature vector and calculate these terms. Using the conditional independence assumption again, p of x given y breaks into the product over j equals 1 to d of p of x_j given y, each factor one of the individual Bernoullis; multiply by the class prior phi_y and divide by the denominator. Similarly, you can calculate p of y equals 0 given x, the other numerator over the same denominator. And in order to make a prediction, what's commonly done is: you calculate numerator 1 and numerator 2 and see which of the two is bigger.
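In practice you compare the numerators in log space, since a product of hundreds of small probabilities underflows; a sketch, assuming the parameter layout from the estimator above:

```python
import numpy as np

def predict_bernoulli_nb(x, phi_y, phi_j_y0, phi_j_y1, eps=1e-12):
    """Compare the two Bayes-rule numerators in log space and return the
    predicted label. eps guards against log(0); with Laplace smoothing
    (coming up next) the parameters are never exactly 0 or 1 anyway."""
    x = np.asarray(x, dtype=float)

    def log_numerator(phi_j, prior):
        p = np.clip(phi_j, eps, 1 - eps)
        # log p(x|y) + log p(y), with p(x|y) a product of Bernoullis
        return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)) + np.log(prior)

    return int(log_numerator(phi_j_y1, phi_y) > log_numerator(phi_j_y0, 1 - phi_y))
```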
Any questions on this so far? Yes, question. [inaudible] add up to one? So these two add up to 1 once you calculate the denominator and normalize; just the two numerators by themselves need not add up to 1. [inaudible] Yeah, if you want the actual probability value, then you need to calculate the denominator. But if all you want is to decide which of the two has the higher probability, you don't need the denominator. Okay. So, one problem you may encounter with this method: what happens if at test time we encounter a word that was never seen in the training set? Suppose the word "aardvark" never showed up in our training set, neither in positive examples nor in negative examples. Then we end up estimating the phis for the word "aardvark" to be 0 and 0, because the numerator is zero in both places. And at test time, what do we do? If the word "aardvark" appears in the email, the factor corresponding to "aardvark" makes the entire numerator equal to 0; the same thing happens in the other class's numerator, and you end up with 0 over 0 plus 0, which is undefined. So with this method, if we encounter a message at test time containing a word that never appeared in the training set in either class, the method does not tell us what our prediction should be. And for that, what is commonly done is a technique called Laplace smoothing.
Laplace smoothing was invented by Laplace, and its origin goes something like this. Laplace was trying to calculate: what's the probability that the sun is going to rise tomorrow? In your training set, you basically have only positive examples: every day, for the past 100 or 1,000 days, the sun has risen. So your training set has only positive examples, and when you estimate the probability of the sun rising the next day, your MLE estimate gives you a probability exactly equal to 1. And his claim was that that is suboptimal: someday in the future the sun may not rise, so the estimated probability shouldn't be exactly 1. The correction he came up with is commonly called Laplace smoothing, and the idea is this. Suppose you're doing coin tosses, trying to estimate the bias of a coin; call it phi, with x's that are 0 and 1. You flip the coin 10 times, and say it turns up heads in all 10 trials. If phi is supposed to be the probability of heads, the MLE estimate says 10 over 10, so phi would be 1. What Laplace smoothing says is: think of it as keeping two counts, a count of heads and a count of tails. In pure maximum likelihood, we start both counts at 0, increment as we conduct trials, and then compute count of heads over count of heads plus count of tails. Laplace smoothing tells us: before we even start conducting the experiment, start with a count of 1 for every class, which amounts to assuming a uniform distribution before seeing data, and then start your trials, observing data and incrementing the counts. With this technique, because we start with a count of 1 on both sides, even if all 10 flips come up heads, the estimated probability of heads is 11 over 12; it will never be exactly 1. And similarly, the estimated probability of tails keeps going down as we collect more data: it starts at 0.5 and keeps coming down as we keep getting only heads, but it will never be exactly 0. That is the idea of Laplace smoothing. And Laplace smoothing can be applied to our spam classifier as well: assume you have seen every word once in a spammy message and once in a non-spammy message, initialize your counts to that, and then start observing and counting your actual data. So for comparison, here is the maximum likelihood estimate, and the Laplace-smoothed estimate is: phi_(j given y equals 1) equals 1 plus the sum over i equals 1 to n of the indicator that x_j^(i) equals 1 and y^(i) equals 1, divided by 2 plus the sum over i equals 1 to n of the indicator that y^(i) equals 1. The idea here is that you add 1 to the count in the numerator, to say you've seen the word at least once, and you add the number of values the variable can take, 2 for a binary x_j, to the denominator, so that this still evaluates to a valid probability distribution.
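Applied to the spam model, the smoothed estimates for both classes look like this (a sketch; both classes are shown for completeness, anticipating the symmetric formula written out next):

```python
import numpy as np

def bernoulli_nb_laplace(X, y):
    """Laplace-smoothed Bernoulli event model: equivalent to adding, per
    class, one pseudo-message containing every word and one containing no
    words, so no estimate is ever exactly 0 or 1."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n1 = np.sum(y == 1)
    n0 = np.sum(y == 0)
    phi_j_y1 = (1 + X[y == 1].sum(axis=0)) / (2 + n1)
    phi_j_y0 = (1 + X[y == 0].sum(axis=0)) / (2 + n0)
    phi_y = np.mean(y == 1)
    return phi_y, phi_j_y0, phi_j_y1
```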
And similarly, the Laplace-smoothed version of phi_(j given y equals 0) will be 1 plus the sum over i of the indicator that x_j^(i) equals 1 and y^(i) equals 0, divided by 2 plus the sum over i of the indicator that y^(i) equals 0. So this component in the denominator comes from the data, the number of messages in that class, and then you assume you've seen two more messages: one in which the word appears, and one in which it does not. And in the numerator, since this is the probability that the word appears, you assume you've seen the word appear in one extra message, on top of the messages in which it actually appears. That's the idea of Laplace smoothing. And if you're familiar with Bayesian statistics, this is like imposing a Beta prior on your parameters; if you don't know what that is, don't worry. But it means this is not only a heuristic argument; there's also a principled explanation of why it's a good thing to do. All right, so that's Laplace smoothing, and this model is what's also called the Bernoulli event model. There's another variant of naive Bayes, called the multinomial event model. The multinomial event model is a slight variation of this, and a lot of the same terminology is going to show up but may mean slightly different things, so please pay attention. It is a slightly different model which makes a different set of assumptions. The assumption it makes is that y again comes from a Bernoulli with parameter phi_y, but then each word in your message, whether the message is spammy or not, is sampled from a multinomial; well, correctly speaking, a categorical distribution, and we're still going to call its parameter phi. Except now phi is a categorical parameter vector: phi_y is one-dimensional, while phi_(k given y equals 0) has, in terms of free parameters, the size of your vocabulary minus one dimensions. What does this mean? First I'll write out the expressions, and then give you some intuition. Here the assumption is that x_j indicates the word in the jth position of your text message, and that is a word that exists in your dictionary. V is the vocabulary, and the absolute-value bars mean the size of the vocabulary. So x_j, the jth word in a message, takes a value between 1 and the size of the vocabulary. And x^(i), the ith message, is the full message of your ith example and has length d_i; each message can have a different length. So x^(i) is now a vector of length d_i, where each component takes one of the words in our vocabulary. Now, the maximum likelihood estimate for this model says: phi_(k given y equals 1), where k is a word in the vocabulary, is given by summing over every message, and within each message summing over every word, counting only those words that happen to be k in messages that belong to the class we're interested in, divided by the total number of words across all messages in that class. And similarly, phi_(k given y equals 0) is basically the same thing; just replace y equals 1 with y equals 0.
And you can also have a Laplace-smoothed version of this. I'll write it out and then give some more intuition. Phi_(k given y equals 1) equals 1 plus the sum over i equals 1 to n of the sum over j equals 1 to d_i of the indicator that x_j^(i) equals k and y^(i) equals 1, divided by the vocabulary size plus the total number of words across the messages in that class. Similarly, phi_(k given y equals 0) is the same thing; just replace y equals 1 with y equals 0. And the way to think about the Bernoulli event model versus the multinomial event model is probably to look at how an algorithmic implementation of the two would work. Suppose we have a training set: say "buy our lottery" and "buy this watch, this watch only" as spammy examples, and non-spammy examples like "when is your exam" or "when is the homework due." First, let's look at the Bernoulli model. In the Bernoulli model, we convert each message into a fixed-length vector with slots for "a," "aardvark," "buy," "exam," and so on, and for each example we use an indicator variable to record whether the word appears or not: "buy" gets a 1, "watch" gets a 1 where it appears, and 0s everywhere else. So this is the fixed-length representation, which is d-dimensional, where d is the size of your dictionary. The way to think about it is: in the Bernoulli model, we first convert a given message into a multi-hot vector, which means it is 1 in multiple places. A word may appear many times in a given message, but we count it only once. And we repeat the same thing for the non-spammy examples: here "exam" will be 1, "your" will be 1, the others 0, and similarly for the other message. So we have a set of spammy examples and a set of non-spammy examples, and for each of the two classes we have a separate collection of Bernoullis, one per word: one Bernoulli for the word "a," one for "aardvark," one for "buy." Each of these Bernoullis is estimated by counting the number of messages in which the word has a 1, divided by the number of messages in that class. So this column is phi_j for y equals 1, and this is phi_j for y equals 0, and for each phi_j you just compute the fraction of messages in which the word appears; here, say, this one is 1 over 2, this one is 2 over 2, and so on. That is the Bernoulli model we discussed first. The multinomial model, however, is a little different. If you were to implement it algorithmically, assume the same set of word slots: "a," "aardvark," "buy," "exam," and so on. In this case, however, instead of using an indicator, you count the number of times each word appears. So "buy" is 1 and "lottery" is 1 for the first message; over here "buy" is 1 again and "watch" is 2, since the message says "buy this watch, this watch only," and so on.
And now we do the same normalization, but by summing up the counts: the number of times the word appeared across all examples in your training set. Whereas in the Bernoulli model it was the number of messages in which the word appeared, here it is the number of times the word appeared across all messages. That's the main difference between the Bernoulli model and the multinomial model. So here we sum up the counts; say it's 2, 2, 2, and maybe some of them are 3 or 4. In the Bernoulli model, each entry was its own Bernoulli variable, so we normalized it locally, per word. In the multinomial model, we want a distribution over words, so we normalize the entire thing by the total, summing, say, 4 plus 2 plus 3 plus 2 plus 2. So the multinomial model gives us a distribution over words: how frequently each word appears in spammy messages or in non-spammy messages. And it doesn't care whether a word appears 1,000 times in one message or once in 1,000 different messages. The Bernoulli model, by contrast, counts what fraction of the messages contain the word: it doesn't care how many times the word appeared within a message, it just counts it once per message. And when you apply Laplace smoothing, what it effectively does is add two messages: one message with 0 everywhere and one message with 1 everywhere. Normalize locally, and you get the Laplace-smoothed version of the Bernoulli event model. Similarly, if you add one message that has no words and one message that contains every word once, and then normalize, you get the Laplace-smoothed version of the multinomial event model. Any questions? Yes, question. [inaudible] Yes. [inaudible] So the question is: if there is one spammy message that has a spammy word repeated 10,000 times, but the word does not appear in any other message, then the multinomial model will upweight that word quite a lot, while the Bernoulli model will just see one message with that word. That's the main difference. [inaudible] So one way to think about the multinomial model is: take all your spammy messages and concatenate them into one long example; you get the same model even if you have just one message that's the concatenation of all your messages. Whereas with the Bernoulli model, you're counting in how many messages a word appears. That's the main difference between them. Yes, question. [inaudible] Yes. [inaudible] Exactly. [inaudible] Here you're only summing up, right, so only the second pseudo-message contributes the plus V. When you add up all these pseudo-counts, the all-zeros message contributes 0s, and the all-ones message contributes one per word; that's what gives you the plus V. [inaudible] So in the Laplace-smoothed version, you add the vocabulary size to the denominator, and that is achieved because you add a plus 1 per word: there are V such words, and when you sum up all the counts, you get a plus V in the denominator.
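Putting the multinomial bookkeeping together, with add-one smoothing included (a sketch; representing messages as lists of word indices is my convention, not from the lecture):

```python
import numpy as np

def multinomial_nb_laplace(messages, y, V):
    """Laplace-smoothed multinomial event model.

    messages: list of messages, each a list of word indices in [0, V).
    y: 0/1 labels. Returns phi_y and the two (V,) word distributions."""
    counts = np.zeros((2, V))
    total_words = np.zeros(2)
    for words, label in zip(messages, y):
        for k in words:
            counts[label, k] += 1          # every occurrence counts here
        total_words[label] += len(words)
    # Add-one smoothing: +1 per word in the numerator, +V in the denominator.
    phi_k_y0 = (1 + counts[0]) / (V + total_words[0])
    phi_k_y1 = (1 + counts[1]) / (V + total_words[1])
    phi_y = np.mean(np.asarray(y) == 1)
    return phi_y, phi_k_y0, phi_k_y1
```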
[inaudible] So the Laplace-smoothed count for "buy" will be, in this case, 2 plus 1, that is 3, over the original total, whatever it was, plus V. Without Laplace smoothing it was 2 over the original total; now it's 2 plus 1 in the numerator, because "buy" got one count from the new pseudo-message, and plus V in the denominator, because every word got one extra count. [inaudible] The sum over all the words, yeah. You're summing over the total number of words in your spam class, or the number of words in your non-spam class. Yes, question. [inaudible] to your first example, should it be one [inaudible] example with all words one instead of one [inaudible] all words zero? I don't think I got that; you mean those two added samples? [inaudible] Yeah, so the idea is that these counts may all be 1s or all be 0s, and you don't want either to dominate, so you add a 0 and a 1 for each word. [inaudible] So with Laplace smoothing, we add one positive and one negative per word. One positive and one negative per word. Okay. All right, I think we're over time, so that's about it. If you have any more questions, feel free to walk up and I can answer them.
Topics for today: in the last class we covered evaluation metrics, and as a continuation of that, today we'll talk about some practical tips for applying machine learning in practice, especially if you want to build a machine learning model aimed at a real-world deployment. With that, we'll finish up all the topics, and we'll spend some time talking about the final exam format, what kind of structure you can expect in the take-home exam. Then we'll begin the full course review, starting all the way from the beginning, giving you an overview of what we've covered in the course so far. As part of the course review, we'll emphasize topics that are more relevant for the exam, and we'll go over both the topics covered in the lectures and related homework problems. We haven't really had a chance to go over the homework problems in lecture itself, so we'll discuss some of them and how they match up with and relate to what we've seen in the lectures. So: machine learning for production. So far in the course we've mostly studied algorithms: supervised, unsupervised, reinforcement learning. We've looked at some theory, and in the last lecture we also saw some evaluation metrics. If you're planning to build a machine learning model for some kind of real-life product, say you're working for a company or you want to start a new startup that uses a machine learning model, the approach you take towards building a real, practical model is somewhat different from the approach you'd take for, say, building a new algorithm to be published in a research paper. And probably the biggest difference between machine learning in practice and machine learning in research is this: the first thing you do if you're building a machine learning model for deployment is start collecting a dev set. Okay? Before deciding which model you want to use, whether logistic regression, a neural network, what have you, the first thing you want to do is start collecting a dev set. By collecting a dev set, we mean start collecting the kind of data your model is going to encounter when it is deployed in production. For the sake of an example, let's say you're building a phone-based app that recognizes things in the pictures you take. The first thing you want to do is start collecting pictures from mobile phones. And the reason is that you want your dev set distribution to match the production scenario. It's important that you start with this step even before you start thinking about what your training data is.
First define what your dev set is, and then work backwards from there. You want to spend as much time as possible collecting a dev set that matches the real-world distribution as closely as possible. By matching it as closely as possible, we mean: if your target users are those who take pictures on phones as opposed to, say, tablets, or even better, if you know your initial target users are going to be iPhone users versus Android users, then take pictures from the same kind of camera your app is going to be used with. And then spend time labeling it yourself; that gives you a real feel for how your data actually looks. So as part of this: collect data, preferably from the same devices your model is going to be used on, and label it. Once you do this, you have your validation set, or dev set. The next thing you want to do is define an evaluation metric. These two tasks, collecting a dataset and defining an evaluation metric, should be the first things you do. If you're the leader of an ML group or a product manager, the first thing you want to do is define your dev set, which should be a good representation of who your users are and how they're going to use the model, and define a metric on it. This metric captures what's important to you in this product. And once you've defined this dataset and evaluation metric, you've effectively painted a target for your team to go after; these two should always be defined first. Once you've done this, then comes the question of what the training set is. Your training set could come from anywhere. If collecting data is inexpensive, you probably want to collect as much data as you can that mimics the production scenario and make that your training set. However, that's usually very expensive, especially if you want to label your dataset: that becomes really expensive, because you either spend your own time or spend money on crowdsourcing to label the data, and labeling data in general can be very expensive. Which is why, most of the time, training data comes from a distribution that may or may not match your dev set exactly. If you're building an image classifier, you may want to use a pre-trained model, or use a very famous dataset called ImageNet for creating your training data, and that can be very different from the use case where your model will eventually get used. So the next thing you want to do is collect your training data, and while you're collecting it, it should be as close as possible to the dev set. And generally, as I said, collecting more of the dev set itself can be pretty expensive, and for the training data you generally end up using some kind of automated or noisy labeling.
And you may be surprised that noisy or automated labeling can sometimes be extremely effective. The idea there is that you make up in quantity what you lose in quality. In fact, we've already seen examples of this in the course: in your homework 1, PS1, I think it was question 2, we looked at the case where the labels are noisy, where labels are available only for positive examples, and the unlabeled examples can contain both positives and negatives. That's one kind of dataset you may get from automated labeling, where you label only the points you're very confident about and leave everything else unlabeled, okay? And with that, you can still build models in a naive way and then apply some kind of correction to get pretty decent models. Yes, question. [inaudible] That's a very good question. So the question is: when we were doing the homework, and covering the rest of the material, we spoke of starting with some given dataset and splitting it into train, validation, and test. Why not do the same here as well? The reason to think about it differently is that the approach of starting with a given dataset and splitting it into three was quite common a few years or decades ago, when data was really scarce. Now we're in a situation where it's easy to collect data and difficult to label it, but the availability of data is quite abundant in a lot of use cases. Of course, there are use cases where data availability itself is low; in healthcare, for example, if you want data about some rare disease, you're not going to have a lot of examples. But for the most part, especially if you want to build some kind of commercializable product, the data tends to be around images, text, media, videos, etc., and it's generally quite easy to get lots of data, but labeling it is still hard. In these scenarios, you probably want your model to be tested against the kind of scenarios it's going to be deployed against, and you want your test set performance to reflect that reality when you actually put it in deployment, as opposed to taking some dataset you obtained from somewhere and getting a sense of generalization performance to a distribution that could be very different from where your model gets used. So if you're thinking of deploying your model in a real-world use case, always start from the way it's going to get deployed and work backwards from there. [inaudible] So the question is: what about scraping the web for images and doing some kind of automated labeling? Is that even legal? Of course, if you're scraping copyrighted images, then you're subject to the terms that come with the license of whoever owns the images.
[inaudible] Yeah. So the question is about scraping the web for images and doing some kind of automated labeling: is that even legal? Of course, if you're scraping copyrighted images, then you're subject to the terms that come with the license of whoever owns the images. So legality is definitely an open question, and you should answer it. In this course, we don't provide legal answers for such questions, but you absolutely need to ask them and find out whether the method you're following is legal; you should pay attention to that. And in terms of crowdsourcing: it is strongly recommended that the labeling of the dev set be done with as much care as possible, preferably by you and your team. You could use crowdsourcing for labeling, say, your training data, but you want to be as careful as possible when labeling your dev set, definitely. Yeah. So, we saw what could happen under automated labeling in PS1 Q2; that's the positive-only scenario. And once you've done your labeling, you get into the question of how to build a model that works well in production. Your goal should always be to build a model that works well in production. That might seem closely related to a model that works well on your test set or your dev set, but those are still proxies; you should always aim for a model that works well in production. And the way you work toward improving production performance is this: say you decide to use SVMs, or logistic regression, it doesn't matter what. Any time you fit a model and want to measure performance, you should always measure this breakdown. First, you have human-level performance. [NOISE] And even though it is inaccurate, it is sometimes okay to think of human-level performance as a proxy for irreducible error. [NOISE] Of course this is inaccurate; we do have models that work better than humans in many examples, so in those kinds of examples this may not work. But for the most part, if you're starting to build a machine-learning model for a specific product you're working on, then human-level performance on the dev set can serve as a good proxy for irreducible error. Again, it's just a proxy. Yes, question. So a question about the automated noisy labeling: what exactly is meant by that? Do you get data from an already-labeled [inaudible] Yes. So the question is: what do we mean by automated labeling? Do we mean using some dataset that's already labeled? What we mean is this: take the prototypical example, say you want to build a classifier that, given an image, says whether there is a cat in it. Say you scrape the web (possibly that's illegal; we'll set that question aside) and you've collected a big dataset. Now you want to label this dataset as having cats versus no cats. So the first thing you do is collect a dev set that's going to match your deployment scenario: if the use case for your mobile-phone app is pictures taken with the phone, then take lots of pictures from, say, your phone and manually label which ones have cats. Then comes the problem of collecting training data. Your training data can be images you scraped off the web, which may look quite different from pictures taken on your phone, and now you want to do some kind of automated labeling. What does that mean here? You could perform automated labeling with some simple rules. For example: if the file name of the image contains the word "cat," then maybe it has a cat, so label it positive; if it does not, label it negative. That's very noisy, but it probably works okay in terms of its degree of noise. You still want to perform some kind of spot check: collect a random sample of images from this labeled dataset, see what fraction were labeled correctly by the automated rule, and maybe refine the rule a little further until it is, say, 90% or 95% accurate. [OVERLAPPING] [inaudible] Yeah, you could use a simple model to do this automated labeling, but generally the automated labeling stage uses some kind of prior or domain knowledge to construct a rule-based labeler.
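A minimal sketch of that rule-plus-spot-check loop; the filename rule is just the stand-in weak signal from the lecture example:

```python
# Hedged sketch of rule-based automated labeling with a manual spot check.
import random

def auto_label(filenames):
    """Very noisy rule: 'cat' in the filename => positive, else negative."""
    return [1 if "cat" in name.lower() else 0 for name in filenames]

def spot_check(filenames, labels, k=50, seed=0):
    """Draw k random (file, label) pairs for manual inspection, to estimate
    how accurate the rule is; refine the rule until ~90-95% look correct."""
    random.seed(seed)
    return random.sample(list(zip(filenames, labels)), k)
```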
[NOISE] So, you want to break the performance down into the following parts. Next, you have training performance. [NOISE] And when you're training, once you obtain this training data, you probably want to split it into an actual train set and what you might call a train-dev set: a fraction of your training set, distinct from the dev set. The goal of the dev set is to mimic the deployment scenario as well as possible; but within your training data, you probably don't want to use all of it for fitting the model either. So within the training data that you've labeled, possibly with automated labeling, you still hold out a fraction; let's call it the train-dev set. So: train-dev performance. It is also sometimes common to take a fraction of your dev set and include it in the train-dev set. You measure the train-dev performance, and then you measure the dev-set performance, on the part that you manually collected and labeled. Okay. And finally, we can talk about deployment performance. By deployment performance, I mean: when you actually deploy the model into production and real users start using it, what is the accuracy, the level of performance, in the real world? Previously, we were generally talking about train error and test error, or maybe train error, dev error, and test error. But when you're working toward an actual deployment, you want to measure all five of these measurements.
And here is how you want to think about each of these. Human-level performance, as I said, you can think of as a proxy for irreducible error. Once your model starts working really, really well, it might start performing better than humans, but we'll ignore those cases for now. The gap between training performance and human-level performance: you want to think of that gap as a proxy for the bias in the model. Similarly, the gap between the train-dev performance and the training performance: you want to think of that as the variance in the model. [NOISE] And the gap between the train-dev performance and the dev-set performance: you can think of that as error due to distribution mismatch between dev and train. This mismatch exists because the training set was generally obtained by a different process than the one you used to mimic the deployment scenario, since that process is too expensive; so you can think of this gap as being due to the cost of data collection. Yes, question? What is train-dev? The train-dev set is a fraction of the training data that you hold out. Why do we call it dev? Because we call this one dev, and it's just a held-out fraction of the training data, distinct from the dev set, which you manually collected and labeled. So is the train-dev set used somewhat like a validation set? The purpose of the train-dev set is to check whether your model is overfitting or not. The dev set is used to check whether the model is going to work well in the deployment scenario or not, because the training data you obtained was distinct from the potential deployment scenario, okay? And finally, the gap between the dev-set performance and the deployment performance is generally due to overfitting on the dev set. [NOISE] How can we overfit on the dev set even though we are not training on it? We can overfit on the dev set because we go through the cycle of refining and tuning our model to work better and better on the dev set, and indirectly, through our hyperparameter choices, we inevitably end up overfitting on it. And once you overfit on the dev set, your actual deployment performance can be pretty bad. To summarize: our goal is to build models that work well in deployment. That is the eventual target, and we should never lose track of it. All the other measurements we perform are there so that we can do diagnosis to improve the deployment performance. These measurements are not of direct interest by themselves; you use them to diagnose where the error is coming from, go after that error, and fix it, so that eventually we do well on deployment performance.
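A tiny numeric sketch of the five-way breakdown; all the error values below are hypothetical, and the point is that each gap, not each raw number, is the diagnostic quantity:

```python
# Hypothetical error levels for the five measurements (lower is better).
errors = {
    "human": 0.02,       # proxy for irreducible error
    "train": 0.05,
    "train_dev": 0.09,
    "dev": 0.16,
    "deployment": 0.20,
}
bias        = errors["train"]      - errors["human"]      # 0.03
variance    = errors["train_dev"]  - errors["train"]      # 0.04
mismatch    = errors["dev"]        - errors["train_dev"]  # 0.07: dev/train distribution mismatch
dev_overfit = errors["deployment"] - errors["dev"]        # 0.04: overfitting to the dev set
```

In this made-up example, the biggest gap is the distribution mismatch, so that is the error you would go after first.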
And the dev set, as you can see, can be subject to overfitting, especially if you evaluate your model's performance on it multiple times; you're going to reach a state where you've overfit on it, in which case you might want to go back and collect a new dataset, because you've overfit on the old one. That's pretty common too: you recreate a new dev set because you overfit on the current one, okay? Now, [NOISE] given these, let's go through a simple exercise to see what steps we can take to mitigate each of these. Say you measure the human-level performance and it's pretty low. What do we do? [inaudible] Note there's no model in question here: you've collected a dataset, you ask a human to perform the task, and the human's performance on it is pretty low. The accuracy is low? Yeah, whatever metric you come up with: accuracy, precision, or recall. Get better humans. Get better humans. [LAUGHTER] [inaudible] Exactly. If the human-level performance is pretty low, then maybe go after a different problem. [LAUGHTER] Because, especially if you want to run some kind of diagnostics and categorize your errors, if human-level performance is pretty low on a task, you can think of it as a problem where the irreducible error itself is very high. It's kind of a bleak scenario; it says the problem is really, really hard. Now, what if the human-level performance is within an acceptable range, but your training performance is low, your training error high? When you ask a human to do the prediction, they do pretty well. What does this represent? It represents a model that probably has high bias. And what do we do to address high bias? Increase flexibility. Increase? Increase the flexibility of the model, right. So, steps to overcome this: increase the flexibility of the model. And how do we do that? Add more nonlinearities. Add features. Reduce regularization. Someone said add features, and someone said add more nonlinearities: so, add features, or add more depth if you're using a neural network. Add depth, right? And someone said reduce regularization. [NOISE] And if you're using some kind of kernel method, use more complex kernels. [NOISE] These are standard approaches for fighting bias; if your model has high bias, these steps will help you fight it. If you remember, bias is due to the inability of the model to represent the true function, or the best possible function. The best possible model achieves the irreducible error, the best we can possibly do, and the inability of our hypothesis class to represent that best possible model is called bias.
Anything you can do to enlarge our model class, to make it an even bigger class, will help fight bias. So all of these: adding more features, adding more depth or width in your neural networks, reducing regularization (because regularization pushes your parameters toward zero), using more complex kernels; these are all different ways to fight bias. Now, what if the gap between the training performance and the train-dev performance is large? What can we do there? [NOISE] [inaudible] Everything opposite. Yeah: look for smaller models, [NOISE] fewer features, increase regularization, [NOISE] less complex kernels, [NOISE] and finally, one action that is not just the inverse of the others: collect more data. [NOISE] Okay? Does getting more data help fight bias? Can it? Some say yes, some say no. Generally, you don't think of getting more data as a way to fight bias. If your problem is that your training performance is much, much worse than human-level performance, it means the model you have cannot even fit the training data you already have: the model class is too inflexible. Getting more training data will not help your model fit that superset any better than it fit the subset you had initially. So if your training performance is poor, think of it as a proxy for bias, and the thing to do is not to collect more data but to increase the capacity of your model, to make it more flexible. However, if the gap between the train-dev error and the training error is large, your model is doing really well at just fitting the training data but is not able to generalize well. So you probably want to make it harder for the model to just fit the training data: collect more data, with the hope that it generalizes better. And what do we do if there's a gap between train-dev and the dev set? Change the rules for labeling. Right: if the gap between the train-dev and dev sets is high, it's an indication of a distribution mismatch between the dev set and the training set, and when there is a distribution mismatch, obviously the thing you want to do is minimize that difference in distributions. One way to do it is to make the dev set closer to the training set, which would not be ideal; it would be kind of stupid, in a way. Instead, you want to make your train-dev set closer to your dev set, and there are many ways to go about that. For example, you may want to spend some more money, collect more realistic data, label it, and include it in your train-dev and training sets. You could also do something more clever: for example, maybe there is a subset of your training set that's somewhat close to your dev set; up-weight those examples. You could also think about data transformations on your training set. For example, if your dev set is mostly black-and-white images but your training set has a lot of color images, maybe run some filters and make the training images look closer to your dev set in some way. For this, you need to use your creativity and somehow reduce the distance between the training (or train-dev) distribution and the dev distribution, and you want to reduce it by moving the training side closer to the dev set, not the other way around.
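For the black-and-white example, a minimal sketch of that kind of transformation, assuming the Pillow imaging library is available; the function name is just illustrative:

```python
# Hedged sketch: push color training images toward a grayscale dev distribution.
from PIL import Image

def to_dev_like(path):
    """Convert one training image to single-channel grayscale ('L' mode)."""
    return Image.open(path).convert("L")
```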
Yes, question. [inaudible] Also, just to clarify what I meant by making this closer to your training set: what I meant was replacing some of the dev examples with examples from your training set. You could potentially change the dev set to make it closer to your training set, but that would be against the whole point. The dev set is what you actually get from the deployment situation. Yeah, that was my point. So you want to keep your dev set as it is and not change it. But you would want to incorporate examples into training as you observe them in the dev set: let's say the model you're building is meant to recognize places or something like that, but you realize that, since your training data was collected in a very noisy way, maybe it has a lot of images of vehicles; so you might want to go and manually include more things that are similar to your dev set, right? Yes, that was my point. Yes. Okay. Yeah, that was not stupid. That's not stupid, yeah. Okay. And what if the gap between the dev-set and the deployment performance is poor? Say you've collected your dev set in as pure a way as possible: you've sampled it randomly from the actual deployment scenario, you've labeled it, you fit your model, and it works well on your dev set, but it's not working well in real life. What do we do then? [inaudible] Yeah, exactly. It's probably time to refresh your dev set. It's a good indication that you've likely overfit on your dev set by going through this cycle over and over again; as we mentioned in one of the earlier lectures, the more times you measure your model's performance against a dev set and tweak the model to improve it, think of that as your dataset getting rotted. It probably means your dev set has rotted quite a bit, and it's time to collect a new dev set and start all over again, okay? And these two were some of the things we saw earlier in the bias-variance lecture. To improve your model's performance, there is a set of actions you can take, and that set contains contradictory actions: one action is to add more features, whereas another is to reduce features. These are all actions to improve your overall model performance, but in order to decide which specific action to take at this moment, it is very critical that you look at both the training-set error and the dev-set error to characterize whether the current problem you have is a high-bias problem or a high-variance problem. (The sketch below collects these action sets in one place.)
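Purely as an illustrative summary, here are the (partly contradictory) action sets we've walked through, keyed by diagnosis; the diagnosis, from the error breakdown, decides which set applies:

```python
# Illustrative summary, not an API: diagnosis -> candidate actions.
actions = {
    "high_bias":     ["add features", "add depth/width", "reduce regularization",
                      "use a more complex kernel"],
    "high_variance": ["smaller model", "fewer features", "increase regularization",
                      "simpler kernel", "collect more data"],
    "dist_mismatch": ["collect and label more realistic data",
                      "up-weight training examples that resemble the dev set",
                      "transform training data toward the dev distribution"],
    "dev_overfit":   ["collect a fresh dev set"],
}
```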
And only after you've characterized whether the immediate problem is a high-bias problem or a high-variance problem do you choose among the subset of actions that are meaningful, and take that action. Yes, question. So do you know when to stop? When both the human-level performance and the training performance [NOISE] are equal; like, the best you can do is drive both the bias gap and the variance gap to zero? Yeah, so the question is: do we stop when this gap vanishes and that gap vanishes, meaning all three are equal? The answer is no. Your goal should always be deployment performance, and all of these are components that you can tackle with different action sets to help reduce it. Since the actions can conflict, [NOISE] you can only take one of them at a time. Yeah. So I'm saying, if you want to know whether to tackle bias or variance in your model, the best you can hope to do is reduce both of those performance gaps toward zero; that's the best you can do. Yeah. So, there is some subtlety there; let me get to that in a few minutes. The short answer is: you can always collect more data and drive your variance toward 0, and you can always use a bigger model and drive your bias toward 0. So theoretically, if you have enough compute and enough data, you can drive both bias and variance to 0. That's one way to think of it. Yes, question? What do we do when the training error is large but the dev error is small? So, what do we do when the training error is large but the train-dev or dev error is small? Do you mean this gap is large and that gap is small, or that the training error's absolute value is large but the train-dev or dev error's absolute value is better? Yeah, like, we have high bias but [inaudible] So, just to understand your question better: do you mean that the gap between train and train-dev is small, or that the train-dev error itself is less than the training error? Which of the two scenarios are you talking about? Are you talking about the scenario [NOISE] where the training performance is, say, 80% accuracy and the train-dev is 79% accurate, meaning the gap is small? Or the case where the train-dev is at 85% accuracy, [NOISE] meaning its performance is better than training? Which of these two? The latter one. The latter one, 85%.
So if the train-dev performance is higher than the training performance, this generally means there is still some kind of distributional mismatch, in the sense that maybe your train-dev set was not perfectly randomly sampled from the training data; maybe there is some bias in the way you sampled the train-dev set. It could be that the fraction left in the training set got the harder examples while this biased subset got the easier examples, something like that. Generally, you don't expect this to happen. [NOISE] So if that happens, do we retrain [inaudible] If this happens, it probably means you want to reshuffle your train and train-dev sets: merge them and resplit them. Usually that's one way to go about it, because there is no reasonable explanation for your train-dev performance to be better than your training performance; it likely means you're doing a bad job of even fitting your training data, or there is some kind of bias in the way you split your train and train-dev sets. Good question. [NOISE] So now, what we've seen is that there is some action space and... yes, question? My question is: how do you measure deployment performance? For everything else, you have a set dataset that you measure against, but for deployment performance, you've already put it into deployment? That's a very good question. How do we measure deployment performance? Usually, there might be ways in which the feedback is implicit in deployment. For example, if the prediction you made was right, the user might end up taking a certain path of actions, and you can get that implicit feedback on whether your prediction was right or wrong; sometimes you get that automated feedback. In other scenarios, you make the predictions on the real data and then have somebody relabel them and measure the performance: collect some predictions you made in deployment and have somebody label them, just to see what fraction of them you got right. So it's like another dev set, essentially? Yeah, think of it as another dev set if there is no implicit feedback; but sometimes there can be implicit feedback in your overall application, say, the user takes certain actions if the prediction was right. In that case, you get automated feedback. Good question. So, in our action space, we saw that some of these actions can be contradictory with other actions: add features versus fewer features, add depth versus reduce depth, reduce regularization versus increase regularization. Some of them are contradictory, and it's inevitable that you absolutely need to do a bias-variance analysis to decide which of those actions to take. However, there is this one action called get more data.
Getting more data can help fight variance, but you don't expect it to affect your bias in any way. And generally, getting more data is a pretty expensive step, so quite often you need to answer the question: is getting more data going to help me or not? Is it going to help fight the variance in my model, or is the amount of data I already have sufficient? How do you answer that question? For that, there is a pretty useful plot you'll want to make. [NOISE] The plot goes something like this. [NOISE] The horizontal axis is the fraction of data, or the data size; the vertical axis is error; and a green line marks the desired performance level, or let's call it the desired error level. [NOISE] What we do is take the full data we have and split it into smaller and smaller fractions. With the smallest fraction, [NOISE] get the train error and the test error (or the dev error) from that fraction alone. Then take a bigger set and get the corresponding train and dev error from the bigger fraction, then an even bigger set, and so on; each point represents an increasing data size. For simplicity, assume both your dev set and your train set are growing proportionally, and with each increasing data size, plot both the training error [NOISE] and the test error. [NOISE] Okay. Now suppose the desired error level sits well below both curves; think of it as human-level performance. If, as you move from subsamples toward the full data, the train and test curves come really close together, that tells you that collecting more data is not going to help you: the gap between training error and test error is pretty small, and they're pretty close to converging. If you were to collect more data and extrapolate the curves, they would probably converge at roughly the current level, still above the desired error. So this is a scenario where collecting more data is not going to help; think of it as a high-bias scenario, and for the question of collecting more data, the answer is no. Collecting more data should not be the first thing you think of in such a scenario. However, suppose instead that your training error is at or below the desired error level while the test error (I'm loosely calling it test error, but think of it as your train-dev or dev error) is well above it, with a big gap between the two curves. Here it would be reasonable to extrapolate the training error rising a little further and the test error coming down a little more as the data grows. If your plot looks like this, you want to think of it as a high-variance scenario.
[NOISE] And for the question of collecting more data, the answer here is yes: it is worth collecting more data. Depending on how much you value your performance, it may be worth spending more money and collecting more data, okay? Whereas if you're in the third situation, where your training error itself is a lot worse than the desired performance level and your test error is matching your training error, you probably want to increase your model capacity: use a bigger neural network, use more features, or use a more complex kernel, so that first you bring your training error down. Any questions on this? Yes, question? [inaudible] desired [inaudible] Yeah, so the question is: how do we set the desired error level? There is no common answer for that; it really depends on your application, and as the owner of the product, you need to decide what your desired error level is. We want to [inaudible] Well, it may not always be possible to bring the desired error level down to 0, and the value you want to bring it to will depend on the metric of your choice. If your metric is, say, accuracy and your prevalence is very low, then this will generally be a really low value. So it depends on your choice of metric; I've intentionally just called it error, but the actual units here will depend on the metric you've chosen. Any other questions? [inaudible] question [inaudible] Could the desired error level be upper-bounded by human-level performance? Sure, it could be, yeah. All right. Any other questions?
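A minimal sketch of how you might generate that plot, assuming a pre-shuffled dataset and scikit-learn; the model, the fractions, and the desired error level are all placeholders:

```python
# Hedged learning-curve sketch: train/dev error versus training-set size.
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

def learning_curve(X, y, X_dev, y_dev, fracs=(0.1, 0.25, 0.5, 0.75, 1.0)):
    train_err, dev_err = [], []
    for f in fracs:
        n = int(f * len(X))                       # assumes X, y are pre-shuffled
        clf = LogisticRegression(max_iter=1000).fit(X[:n], y[:n])
        train_err.append(1 - clf.score(X[:n], y[:n]))
        dev_err.append(1 - clf.score(X_dev, y_dev))
    plt.plot(fracs, train_err, label="train error")
    plt.plot(fracs, dev_err, label="dev error")
    plt.axhline(0.05, color="g", linestyle="--", label="desired error (hypothetical)")
    plt.xlabel("fraction of data"); plt.ylabel("error"); plt.legend(); plt.show()
```

If the two curves have nearly met above the desired line, more data won't help (high bias); if a wide gap is still closing, more data may be worth the money (high variance).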
Let's see, are we done with the practical? A few more concluding comments. As I said: always, always, always, our goal should be deployment performance. Don't lose sight of the end goal. All the other measurements we make are only there to aid us in taking the right actions so that the deployment performance improves. It's very common to lose track and focus only on, say, dev-set performance or train-dev performance, but you should never lose track of the fact that your end goal is always deployment performance. The reason you measure the training performance and the train-dev performance, or the training performance and the dev-set performance, is tactical: so that you can decide which action to take. But the action you take is always with the goal of good deployment performance. And this is where bias-variance analysis comes into the picture: with it, you can decide, among these conflicting sets of actions, which one to take. If you blindly assume that some action, say increasing features, is a good thing and go ahead and increase your features, that may be the wrong thing to do if the problem you're actually facing is high variance. Similarly, say you're getting some kind of performance on your dev set, you feel it is insufficient, and you go about just collecting more data to increase your dev-set performance. If you're in a scenario like the first plot, where the test-set performance matched the training performance but the training error was higher than the desired error level, then collecting more data is not going to help you. Therefore, looking at the breakdown of performance across all these measurements is extremely crucial for deciding what the next action you take is going to be. And one final piece of advice: in your homework, you've been implementing a lot of these algorithms yourself, implementing gradient descent yourself, GDA yourself, neural networks yourself. Please don't ever do that in production. Don't implement the algorithms yourself; always use some kind of software package that somebody has already implemented and tested. [LAUGHTER] Don't write gradient descent yourself if you want to build some kind of real-world application; that's totally the wrong thing to do. It's very important to implement neural networks yourself for the purpose of understanding how they work, but don't go off on some adventurous path and implement gradient descent yourself for your application. [NOISE] Yes, question. So [inaudible] Yeah. [inaudible] in your experience [inaudible] Biggest pitfalls in practical stuff? [OVERLAPPING] Yeah. What do people do wrong? So the question is: a lot of people probably know all this, but still lots of companies fail, lots of teams fail. Why does that happen? I don't think there's a single answer to that, because, first of all, you may know all the theory but your execution may be poor; even though you know things, you may not always execute them in the right way. That apart, there can be other reasons beyond your control. For example, even though your machine-learning approach was correct, you may be solving the wrong problem, one with no customer traction. So there are lots of other reasons why teams fail; it's not always the technology that causes failure. It's equally important that you go after a problem that's worth solving, where the solution is valuable to people. If solving the problem is valuable, good; but if you solve a problem well and nobody is interested in the solution, then obviously the team is going to fail, not for machine-learning reasons, but it's still a failure. [NOISE] All right. Any other questions on this before we move on to [NOISE] topics related to the exam? Good. Okay.
So the next thing I want to briefly touch upon, just for a few minutes, is to give you some expectation of the format of the final exam. We have posted some practice final exams on Piazza, which hopefully you've already had a look at. However, use those practice exams only for the purpose of getting access to a pool of questions; the format of the exam this time will likely be different. As for the format itself: there is a very high probability that what I'm about to describe will be the format, but there is a small probability we may tune a few things. More or less, what you can expect is this. The first question is going to be true or false, and there are going to be about 10 of them. The second question is going to be short answers, about five of them. The third question is what you could call a theory question; I'll just call it theory. It's going to be basically just math, with sub-questions: essentially, a much longer proof that we guide you through by breaking it down into sub-problems, and once you've solved each of the sub-problems, you will effectively have solved the full question. The fourth one you can call theory/application: it's a new problem setting, a novel scenario, where you want to extend some of the ideas you've learned in this course to this new setting. There are going to be about two theoretical parts, where you derive, say, the update rules and prediction rules, and a third programming component, where you implement those rules in the starter code that we provide, execute it, get a plot, and include the plot in your write-up. So that's going to be the structure of the exam. I would say you can expect it to be hard, but it's not going to be long; we don't have, like, 10 big theory questions or anything. It's much shorter than the homeworks you've encountered. But it will be very necessary that you have a good overall understanding of the course and are good with the prerequisites: probability, matrix calculus, linear algebra, and the general things we've covered throughout the course, like maximum likelihood. It's very important that you have a good understanding of the prerequisites and also the important concepts from the course, like bias-variance. The different sets of questions, basically these 15 questions plus the two big ones, more or less touch upon all parts of the course. Any questions on the format itself? Yes, question. Will the only coding be the one part of the [inaudible] Yeah, the coding is going to be the last part of Question 4. Yes, question. Approximately how many hours is it expected to take?
Is there a case where it would actually take 24 hours to finish? It's hard to say. For the true/false questions, we expect you to say true or false with one or two sentences of justification. Similarly, for the short answers, we expect you to derive an answer but only provide one or two sentences of reasoning. So if you know the answers, you can finish questions one and two in a few minutes; but if you spend more time on them, you could potentially spend the entire 24 hours just answering these. So it's very hard to tell how long it's going to take. If you know everything, we expect you to comfortably finish it in three hours; but if you need more time on each part, you could potentially take more than 24 hours. It's hard for us to set an expectation of how long it will take a given student; we could tell you the average, but for a given student the variance can be quite high. Think of it as approximately two homework questions, plus the true/false and short answers, which shouldn't take too long. Any questions? Yeah. Yes, question. [NOISE] I mean, I think [inaudible] we'll be answering Piazza [inaudible] Very good question. So this is going to be a take-home exam, which means we'll post the exam on Piazza as a PDF, and you will write up your solutions in LaTeX or by hand, just the way you do your homework, preferably in LaTeX so that it's easier for us to grade, and you will submit it on Gradescope like any other homework. During the exam period itself, we will disable Piazza. Even though the format is like a homework, it is not a homework; it's an exam, which means you cannot talk to each other and you cannot collaborate. Collaborating on the exam is against the honor code. You can use any of the resources you want: look at the course notes, watch the lectures again, look at homework solutions. You are even free to look things up on the Internet, but you cannot post on, say, Stack Exchange and ask somebody for help. [LAUGHTER] You're totally allowed to search the Internet for any questions you may have, but no communication: no asking questions of any other human being. You can ask Google, but not a person. During the exam, we will not help you on Piazza with things you don't understand. You can create private Piazza posts, and if we think the question is already clearly specified, we will not respond to it. Just like in an exam room: if you have a question, ask yourself, would this be a question I'd ask a TA in an exam room? Then create a private Piazza post. If we think it's a valid question and there is an ambiguity, we will make a public post and clarify it for everyone.
However, if we think the question is already clearly specified and you have not thought sufficiently about it, then we will refrain from answering, or maybe we'll tell you the question is already clear, just so that all students get the same level of response from us. Does that answer the question? Okay. Yes, question? When you say Piazza will be disabled, will responses to questions asked before the exam still be available? Yes, you can open Piazza and search all the previous questions; I think there's an option to disable posting new questions, some such feature in Piazza, and that's what will be in effect. Yes, question. [inaudible] like the first [NOISE] does it cover everything we need to know? Yes. For the math part, focus on matrix calculus, focus on probability, and of course linear algebra. If you're really comfortable with those, we expect these parts to be pretty straightforward. Yes. Regarding the grading, like the points on every problem? I don't remember offhand, but it's something like 10, 10, and then 15, 15, or 20, 20, something like that. Yeah. All right. So with that, let's rewind all the way back and start our course review. The course review, some of it today and the rest on Wednesday, is meant to provide an overview, a second look, at all the topics we've covered, and also to spend some time drawing connections to the relevant homework questions you've answered. A lot of what you learn from this course comes from lectures, but an even bigger chunk comes from the homeworks. Some of you have probably gotten all the homework answers right and understood them well; some of you may have gotten the answers but haven't really absorbed the big take-home messages from each homework. The homeworks are a huge part of the overall educational content of this course, and unlike many other math courses, where homeworks are just assigned from some textbook as problems 17, 15, and 30 or whatever, a lot of effort has been put into designing each homework so that it complements the lectures and the overall learning experience. So we'll at least make a quick reference to each relevant homework and touch upon its take-home messages, in the order in which we do the review. All right? Let's jump right in. We're going to start with supervised learning; that was the first section of the course. If you remember, the goal of supervised learning is to learn a mapping from x to y, where you think of x as an input and y as an output.
And you are given pairs of (x, y) as a training set. So in supervised learning, our goal is to learn a mapping from x to y, and in order to learn this mapping, you are provided a training set S, which is pairs of examples (x^(i), y^(i)) for i = 1 to n. Generally, small n refers to the number of examples in our training set. Okay. And x^(i) generally resides in a d-dimensional space and is usually called the input. As for the output y^(i): if it is a real value, we call the problem regression; if y^(i) is either 0 or 1, we call it binary classification. [NOISE] y^(i) could also be an integer 1, 2, 3, and so on, in which case you can call it count regression or Poisson regression, and so on. The overall workflow for supervised learning is to start with a training set, [NOISE] run the training set through a learning algorithm, [NOISE] and the output of the learning algorithm is a model, which we call the hypothesis. For example, your training set could be (x, y) pairs where y is in {0, 1}, the learning algorithm could be logistic regression, and the learned hypothesis is the set of parameters you got when logistic regression reached convergence. The learned hypothesis will then accept new examples x that it has not seen in the training set and make predictions y. That's the workflow overview of what happens in supervised learning. And the first algorithm we learned was linear regression. [NOISE] In linear regression, we assume that the function h belongs to a parametric family, h_theta(x) = theta^T x, where theta is some vector in R^d. There is also the intercept term: we assume that x includes the intercept term, so the component of the theta vector that corresponds to it is the bias term. Sometimes, to make it explicit, we write d+1 dimensions; other times we just leave it as d and assume the intercept is in there. In order to learn this parameter vector theta, we need a learning algorithm, and for the learning algorithm we first define what is called a cost function, J(theta) = sum over i = 1 to n of (h_theta(x^(i)) - y^(i))^2. This is commonly called the squared error: you penalize the square of the difference between the predicted value and the correct value, and you sum these squared errors over all examples. We treat this as a function of theta, and we want to minimize this error by adjusting theta to the best possible value. Another way to write this, in vectorized notation, is ||X theta - y||^2. Yes, question? Is this choice of cost function a consequence of the Gaussian, of the negative log-likelihood, or are there other choices of cost function to consider?
Yeah, so the question is whether this cost function is a consequence of assuming a Gaussian error term, or whether you can use other loss functions. I'll be answering that shortly; the quick answer is that you can give this a probabilistic interpretation: had you assumed a Gaussian error term, this would be the cost to minimize. But there are other interpretations of linear regression as well. So what we want to do is find theta-hat = argmin over theta of J(theta); we put hats on all estimated values. [NOISE] To perform this minimization, the first algorithm we saw was gradient descent. [NOISE] In gradient descent, we start with a random initialization of theta and run an iterative loop where we repeat: theta := theta + alpha * sum over i = 1 to n of (y^(i) - h_theta(x^(i))) x^(i), where alpha is the learning rate. That's (batch) gradient descent. We also saw a variant called stochastic gradient descent (SGD), where instead of the summation, in each iteration we randomly sample one example: repeat theta := theta + alpha * (y^(k) - h_theta(x^(k))) x^(k), where k is sampled uniformly from {1, ..., n}. So: sample a random example number and use that example to run the update for that iteration. That was stochastic gradient descent, and it's one way in which we solve linear regression. [NOISE] The gradient-descent-based approach works for any cost function, not just linear regression; but in the case of linear regression, we can also get a closed-form solution for the final theta-hat directly. The second approach was what we called the normal equations, for getting a closed-form solution. [NOISE] From the normal equations, we get X^T X theta = X^T y, and if we assume X^T X is invertible, which for now we will, we get theta-hat = (X^T X)^(-1) X^T y. This is a pattern you should tune your eye to look for, because it shows up in other places too, like factor analysis: there, one of the parameter updates looks very close to linear regression. So that's the normal equations; a numpy sketch of these solvers follows.
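As a hedged numpy sketch, here are the three approaches just described side by side; it assumes X is the n-by-d design matrix (intercept column included) and y is a length-n vector:

```python
# Hedged sketch of batch gradient descent, SGD, and the normal equations
# for linear regression, h_theta(x) = theta^T x.
import numpy as np

def gradient_descent(X, y, alpha=0.01, iters=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta += alpha * X.T @ (y - X @ theta)   # sum over all n examples
    return theta

def sgd(X, y, alpha=0.01, iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        k = rng.integers(len(y))                 # one uniformly sampled example
        theta += alpha * (y[k] - X[k] @ theta) * X[k]
    return theta

def normal_equations(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)     # assumes X^T X is invertible
```

For a well-conditioned problem, all three should agree; the learning rate alpha here is an arbitrary placeholder and would need tuning in practice.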
Then we saw a few more interpretations of linear regression; the next one was projection-based. Picture the parameter space, with one axis per parameter, theta_1 through theta_d, and the design matrix X mapping it into an output space with one axis per example, y_1, y_2, up to y_n. Each dimension in the input (parameter) space corresponds to a parameter or a feature, and each dimension in the output space represents a different example. That's because X is in R^(n x d): it takes as input something d-dimensional and produces something n-dimensional. And because the d-dimensional space gets mapped into an n-dimensional space, where we expect d to be smaller than n, the column space (the range) of X is going to be a subspace of the entire output space, a subspace of dimension d. [NOISE] So the entire d-dimensional space now gets mapped to some subspace; the subspace extends to infinity in all directions, it passes through the origin (by definition, a subspace passes through the origin), and it is d-dimensional. However, for the given data we have, the vector y may reside outside the subspace. So this is the y that is given in the data: we are given X and y from the training set, and we want to find the best possible theta such that X theta is as close to y as possible. We cannot always solve X theta = y exactly, because y can reside outside the range of X. So instead, what we do is solve X theta = (projection of y onto the range of X): project y onto the subspace spanned by the columns of X, and then solve. The original equation cannot always hold, because y can be outside the range; but by definition, the projection of y onto the column space of X can be reached. And if you remember, the projection of y onto the column space of X is given by X (X^T X)^(-1) X^T y, where X (X^T X)^(-1) X^T is called the projection matrix. [NOISE] Once we have it in this form, with y projected onto the column space, there is a bijection, a one-to-one correspondence, between the input space and the column space, which means that, informally speaking, we can invert X on this projection: cancel the leading X and get theta-hat = (X^T X)^(-1) X^T y. So that was the projection interpretation of linear regression.
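A quick numerical check of this picture, as a sketch: the least-squares fit X theta-hat should coincide with the projection P y of y onto the column space of X (the data here is random, purely for illustration):

```python
# Verify numerically: X @ theta_hat == P @ y, with P the projection matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                   # n = 50 examples, d = 3 features
y = rng.normal(size=50)
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
P = X @ np.linalg.inv(X.T @ X) @ X.T           # projection onto col(X)
assert np.allclose(X @ theta_hat, P @ y)
```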
And finally, we also went over the probabilistic interpretation of linear regression. In the probabilistic interpretation, we acknowledge that X theta will never exactly equal y with this linear form. So instead we assume y = theta^T x + epsilon, where epsilon is distributed as a Gaussian with mean 0 and variance sigma^2, for some unknown constant sigma^2. With this we can write epsilon = y - theta^T x, and the density of epsilon is the Gaussian density

p(epsilon) = (1 / sqrt(2 pi sigma^2)) * exp(-(y - theta^T x)^2 / (2 sigma^2)),

where the squared term inside the exponent is (y - theta^T x - 0)^2, since the mean is 0. Now we perform maximum likelihood: find the parameter theta that maximizes this likelihood. The log-likelihood is

l(theta) = log prod_{i=1}^n (1 / sqrt(2 pi sigma^2)) * exp(-(y^(i) - x^(i)T theta)^2 / (2 sigma^2)).

Taking the log of the product as the sum of the logs, this becomes sum_{i=1}^n [ c - (y^(i) - x^(i)T theta)^2 / (2 sigma^2) ], where c = log(1 / sqrt(2 pi sigma^2)) is the same constant for every example. So we have an arbitrary constant common to all examples and an arbitrary scaling constant common to all examples, and we can write

l(theta) = c' - (1 / (2 sigma^2)) * sum_{i=1}^n (y^(i) - x^(i)T theta)^2.

Because of the negative sign, maximizing this is the same as minimizing the sum inside, and we can also ignore the 1/(2 sigma^2), because it is just a common positive scalar. So

argmax_theta l(theta) = argmin_theta sum_{i=1}^n (y^(i) - x^(i)T theta)^2.

The reason we can drop the 2 sigma^2 is that we only care about the argmin, the theta value that attains the minimum, not the minimum value itself, and that theta is the same whether or not we include a positive scaling constant. This is the probabilistic interpretation of linear regression. The main takeaway: whenever a Gaussian distribution is involved, there is generally a corresponding squared error. You take the log-likelihood, the log and the exponent cancel, and maximizing a Gaussian likelihood becomes equivalent to minimizing the squared error of some term. So that was linear regression. Any questions on linear regression? All good.
After covering linear regression, we moved on to the first classification algorithm, a binary classification algorithm: logistic regression. In logistic regression, y^(i) is now either 0 or 1, and we follow the probabilistic approach, doing the same kind of thing for logistic regression. We assume that y^(i) is sampled from a Bernoulli distribution, where the probability that y^(i) = 1 is 1 / (1 + e^{-theta^T x}). Initially we constructed this sigmoid function as a reasonable guess, which we later re-derive in GLMs as a natural consequence of assuming a Bernoulli distribution. But for now, let's just go ahead. Our prediction is y-hat^(i) = 1 / (1 + e^{-theta^T x^(i)}), the predicted probability that y^(i) = 1, and this gives us the likelihood expression:

l(theta) = log prod_{i=1}^n (y-hat^(i))^{y^(i)} * (1 - y-hat^(i))^{1 - y^(i)},

the Bernoulli likelihood, and all of this is a function of theta. Writing the log of the product as the sum of the logs gives us the objective of logistic regression:

l(theta) = sum_{i=1}^n [ y^(i) log y-hat^(i) + (1 - y^(i)) log(1 - y-hat^(i)) ],

with the sigmoid expression substituted in for y-hat^(i). To solve it, we use gradient ascent; there is no closed-form solution for this. Run gradient ascent, and when it converges, the theta value is the maximizer of this likelihood function. That was logistic regression. Any questions on that?

Can you repeat the last part? Sure: there is no closed-form solution for recovering theta from this likelihood expression. Instead, we run gradient ascent until the algorithm converges, and the theta value at the point of convergence is the solution that maximizes the log-likelihood. There was another question.

A student works through the setup: the probability that y^(i) = 1 is just phi, because y^(i) is Bernoulli distributed (yes), and the probability that y^(i) = 0 is 1 - phi (agreed); and in the third line of the exposition, should that quantity be written as y-hat^(i)? Yes, that quantity is the predicted probability that y^(i) = 1, so it should be written y-hat. Then should there be a catch-all rule saying that if y-hat is above some threshold, the prediction is y^(i) = 1? I see. So the question, if I understood right, is: in order to make a decision about the predicted label, should there be a threshold saying that if this is greater than, say, 0.5, then the prediction is one class versus the other. That's a good question. Logistic regression, even though we call it a classifier, is actually just a probability machine: it outputs probabilities. The way we convert these probabilistic outputs into decisions is with an additional threshold, applied after the fact. First we train a model to just output probabilities; then, as we saw in the evaluation metrics lecture, you can choose different thresholds to optimize different kinds of metrics: a threshold that maximizes your accuracy, or your precision, or what have you. Choosing the threshold is generally done after the fact; logistic regression by itself just gives you probabilities.
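Here is a minimal sketch of that training loop, using a plain full-batch gradient-ascent update (the step size and iteration count are illustrative). It relies on the fact that the gradient of the Bernoulli log-likelihood has the same (y minus prediction) times x form as the linear-regression update:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, alpha=0.1, iters=1000):
    """Gradient ascent on the Bernoulli log-likelihood l(theta)."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        y_hat = sigmoid(X @ theta)                   # predicted P(y=1 | x) per example
        theta = theta + alpha * (X.T @ (y - y_hat))  # ascent step on grad l(theta)
    return theta
```

The trained model returns theta; predictions are the probabilities sigmoid(X @ theta), with any decision threshold applied afterwards, as discussed above.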
Next question. A student asks: so we assume that the probability of y^(i) given x^(i) is given by this form, and then we want to find a good theta to approximate the solution? That's correct. We assume that p(y^(i) | x^(i)) can be represented in this form, that there is some theta that gives us the right answer, and we go about trying to recover that theta. The important thing to note is that it is an assumption: we assume P(y^(i) = 1 | x^(i)) can be represented in this form and try to recover it. In one of the examples we saw, Homework 2, Question 1, we had the logistic regression stability problem, where you are given two datasets: on one dataset, logistic regression converges, and on the other, it never converges; it goes on and on. In that example, the data was perfectly separable, which means we could draw a line that perfectly separated the two classes. In that kind of scenario, the assumption that P(y = 1 | x) = 1 / (1 + exp(-theta^T x)) is, in a sense, not met. (And note the conditioning: this should be written given x; we assume that conditioning as well, so let me correct it here to P(y = 1 | x).) Because if you try to solve this, the algorithm never converges and theta just goes to infinity. So this assumption, that the probabilities can be represented in this form with some finite theta, may or may not hold, and in the perfectly separable case it does not. To handle such cases, we want to use regularization, which makes it a well-defined problem, and we'll review regularization again when we get to it.

One word before you carry on: in that same question, in the two-dimensional plot the points are so close that the data may not look like it is being separated perfectly, and there was something about a separating hyperplane in a higher dimension. Can you explain in which step of the algorithm the points are seen as separable? In the dataset that was given, the one where the model does not converge, you don't have to map anything to a higher-dimensional space; the separating hyperplane can clearly separate the points as they are. Some of the points may be so close that in the graph they look as though they are not separated, because in the plot each point is drawn as a small circle, the circle may cross the separating line, and the line itself has some thickness as a consequence of plotting.
But if you look at the numerical values of each of the points, they are perfectly separable. You don't have to take it to a higher-dimensional space for that example; it's just that the plotting involves thick lines, so it does not appear perfectly separated, or the points are too close together. If you inspect the numerical values, it is perfectly separable. So that was logistic regression. Yes, question. [inaudible]

Good question. In that problem, one of the sub-questions was to recommend ways to fix this scenario, and one of the proposed fixes was to make the learning rate of logistic regression decay over time, at a rate of, say, 1/t or 1/t^2. In that scenario, it is important to distinguish between convergence in code and convergence in the mathematical sense. If you decay the learning rate to 0, then obviously the code will stop iterating, but the parameter value you get is not necessarily the maximizer, or even close to it.

Why not, actually? I don't think I caught that explanation very well. What I mean is this: we define l(theta) to be some expression, and in order to optimize it we have this iterative algorithm: repeat, until some condition, theta := theta + alpha * (y^(i) - h_theta(x^(i))) * x^(i). Now, if the objective is well behaved, meaning it has a well-defined finite optimum, then this loop will eventually reach some point near that optimum, and theta-hat will be that nearby point. However, in the problem we had, if you plot the objective as a function of theta, it keeps improving forever: it approaches its optimum only as theta goes to infinity. If we run gradient updates on this, the algorithm keeps hopping further and further, updating theta all the way toward infinity with each iteration. So what do we mean by convergence? The mathematical meaning of convergence is that we recover the true theta, and in this case there is no finite true theta to recover, so it is not something that can possibly converge; theta just goes to infinity. However, if the loop has some stopping condition that eventually evaluates to false, the computational algorithm will break out of the loop. Do we want to call that convergence, or do we reserve the word for the mathematical notion, where our estimate has come arbitrarily close to the true value? There are two different meanings: does the computational algorithm stop, versus has the parameter estimate come sufficiently close to the true parameter. By adding regularization, we convert this ill-posed problem into a well-posed one; regularization takes us from the first situation to the second.
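One concrete version of that fix (my sketch, not code from the homework) is to subtract an L2 penalty (lam/2) * ||theta||^2 from the log-likelihood, which guarantees a finite maximizer even on perfectly separable data. The value of lam is an illustrative choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_l2_logistic_regression(X, y, lam=0.1, alpha=0.1, iters=5000):
    """Maximize l(theta) - (lam/2) * ||theta||^2 by gradient ascent.

    On perfectly separable data the unpenalized theta diverges to
    infinity; the L2 penalty gives the objective a finite maximizer.
    """
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (y - sigmoid(X @ theta)) - lam * theta  # penalized gradient
        theta = theta + alpha * grad
    return theta
```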
So that was logistic regression. Next, we studied a second algorithm for optimization, called Newton's method. Newton's method is a root-finding method: given some function f and some random initialization, Newton's method takes us to the nearest root. What do we mean by root? A root is an input x such that f(x) = 0. So with some random initialization, Newton's method takes us to the nearest root of the function. We use Newton's method for optimizing our cost function by feeding the gradient of our objective as the input function f: we set f = l'. The gradient function is used as the function whose root we are trying to calculate with Newton's method. If theta is scalar, this gives the update rule

theta := theta - l'(theta) / l''(theta),

and in the case of vector-valued inputs, the extension of Newton's method is called the Newton-Raphson method, with update rule

theta := theta - H(theta)^{-1} * grad l(theta),

where H is the Hessian of l. A few things to keep in mind. Newton's method is not a cost-minimizing method, and it is not a cost-maximizing method; it is just a root-finding method. No matter what kind of function you feed it, it's plug and play: you don't ask Newton's method to maximize or minimize, it does neither; it just finds the nearest root. Now, if the l you feed into Newton's method is a convex function, then l'(theta) is an increasing function: the gradient starts out negative, becomes 0 at the bottom, and becomes positive after that. Similarly, if l(theta) is concave, then l'(theta) is a decreasing function. Newton's method simply recovers a point where the gradient is 0, because that's what root finding does; it is not checking whether the gradient is increasing or decreasing at that point; it doesn't care; it just gets you to the nearest root. Which means that if you feed in a convex function or a concave function, it automatically recovers the corresponding optimum, the stationary point. It also means that if you feed in a function with a weird shape, with no single well-defined optimum and multiple local maxima and minima, it just takes you to the nearest stationary point. If your starting point is near a peak, Newton's method will in effect maximize and take you to that nearest stationary point; if you start near a valley, it will in effect minimize and take you to that nearest stationary point. Depending on the initialization, it performs either a local maximization or a local minimization; Newton's method only takes you to the nearest stationary point.
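A minimal sketch of the scalar case (illustrative names, not lecture code). Feeding it l' and l'' finds a stationary point of l, whichever kind is nearest:

```python
def newtons_method(l_prime, l_double_prime, theta0, iters=20):
    """Root-finding on l': theta := theta - l'(theta) / l''(theta).

    Lands at the nearest stationary point of l, which may be a
    maximum, a minimum, or a saddle, depending on theta0.
    """
    theta = theta0
    for _ in range(iters):
        theta = theta - l_prime(theta) / l_double_prime(theta)
    return theta

# Example: l(theta) = -(theta - 3)^2 is concave with its maximum at 3.
theta_star = newtons_method(lambda t: -2.0 * (t - 3.0), lambda t: -2.0, theta0=0.0)
# Since l' is linear here, a single Newton step lands exactly on theta = 3.
```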
Yes, question. A student notes: you can actually just compute H(theta) and check it, right? Right, and generally you don't even want to use Newton's method in these settings unless you know the function is convex or concave. [inaudible] Well, generally you can tell by looking at the functional form: if you can show that the Hessian is positive semidefinite everywhere, then you know the function is convex, and if it is negative semidefinite everywhere, it is concave, and in those cases Newton's method is a good idea. If you're not sure, for example with a neural network, you would never use Newton's method, because it might take you to the wrong kind of stationary point, for instance heading toward a local maximum when you wanted a minimum. [inaudible] Yeah: if it is convex, the Hessian is positive semidefinite; if it is concave, negative semidefinite.

All right, it looks like we have run over time; apologies for that. We will continue our review in the next lecture on Wednesday, and that will be the final lecture. All right, thanks everyone.
Stanford_CS229_Machine_Learning_Course_Summer_2019_Anand_Avati
Stanford_CS229_Machine_Learning_Summer_2019_Lecture_16_Kmeans_GMM_and_EM.txt
All right, welcome back, everyone. Hope you had a good weekend. This is Lecture 16 of CS229, and today we are going to start a new chapter on unsupervised learning. Unsupervised learning will be the broad topic for the rest of this week and parts of next week, and the specific topics for today are the K-means algorithm; the mixture of Gaussians, also called the GMM or Gaussian mixture model; and the expectation-maximization algorithm.

Jumping into today's topics: what you have seen so far is, in the first three or four weeks, supervised learning, where we were trying to learn a function that maps x to y, and we were given pairs (x, y) as our training examples. After supervised learning, we went into some learning theory: we studied the bias-variance trade-off and bias-variance analysis, and looked a little into generalization. And then last week we saw reinforcement learning, where the goal, rather than minimizing some kind of loss function, is to maximize value by choosing a suitable policy, value here being the long-term cumulative sum of discounted rewards.

The new chapter we start today is unsupervised learning. In the unsupervised case, we are given a dataset, generally a collection x_1, x_2, ..., x_n, and we do not have a corresponding y variable associated with each x. You are just given a set of examples, where each x_i is in R^d, d-dimensional real space, and our goal is to learn some kind of structure in these x's. We don't have the correct answer, what is otherwise called supervision, for each x. We are just given a collection of x's, and the goal is to find some kind of interesting structure in them that hopefully gives us some new insight.

We have seen logistic regression before. In logistic regression, you are given a dataset of labeled points, and the goal is to find a separating hyperplane. That is supervised learning, because we are given the correct answer, the color of each point, along with the point itself. In unsupervised learning, the analogous problem would be: we are given some points, just the x's, and our goal is to learn some kind of interesting structure here; a reasonable structure to find might be two clusters. So, loosely speaking, we want to look for such structures when we are given just the x's. However, this problem is generally not very well defined. In the supervised case, for each point we were told what the correct answer is. But consider a dataset where, if you were asked to find an interesting structure, it would be totally reasonable to say there are three clusters: this one, this one, and this one.
And another totally reasonable answer would be to say there are two clusters: this one and this one. So, in a way, there is no single correct answer, and our goal is to learn some kind of interesting structure in the presence of such ambiguities. The way to think of this: classification problems in the supervised setting are related to clustering problems in the unsupervised setting, where the cluster identity plays the role of the class label, and, looking at just the x's, we want to find out both how many classes there are and which class each example belongs to.

Why would this be interesting? For example, suppose you are working in a marketing department and you have information about your customers, which can be represented in some kind of vector space: the age of the customer on one axis, their annual income on another, and their geographic location on another, so each customer is a point in this space. As a person working in marketing, you might be interested in performing market segmentation, identifying groups of customers so that you can do some kind of targeted advertising or marketing campaign. That is just one example of why unsupervised learning might be interesting.

The first unsupervised learning algorithm we will see is the K-means clustering algorithm. The K-means clustering algorithm is pretty straightforward; it is probably one of the simplest algorithms of unsupervised learning. We are given a dataset of n examples, x_1 through x_n, where each x_i is in R^d, and our goal is to group them into k clusters. For the purpose of the algorithm, we will assume that k is given to us. The algorithm goes like this. Initialize cluster centroids mu_1, mu_2, ..., mu_k randomly, one centroid per cluster, where each is in R^d. So each of mu_1 through mu_k is a full vector. Previously in our notation, having a suffix on a variable generally meant it was a scalar, but in this case mu_1 is a full vector, and there are k such full vectors, mu_1 through mu_k. Then repeat until convergence: step one, for every i, where i denotes the example number, set c_i := argmin_j ||x_i - mu_j||^2; and step two, for every j, where j indexes the cluster identity, set mu_j := sum_{i=1}^n 1{c_i = j} x_i / sum_{i=1}^n 1{c_i = j}. So what are we doing here? K-means is an iterative algorithm where we are given a set of n examples, which we index by i, and we want to identify k clusters, where the clusters are indexed by j.
We initialize the cluster centroids randomly, where mu_1 through mu_k are each a vector in R^d, and we repeat until convergence. In the first step, for every i we set c_i; think of c as an array of length n, where for each example x_i there is a corresponding c_i, and we set c_i to the identity of the nearest mean, the argmin over j. Based on the c array, we then recalculate each mu_j as the mean of all the x_i's for which c_i = j. Yes, question? [inaudible] As I said already, for now let's assume k is told to us; we are given what k is, and this is the algorithm. It is a pretty straightforward algorithm, where we alternate between one step in which we calculate the cluster identities for each example and another step in which we recalculate the cluster centroids. This is probably best seen through a simple visualization; let me have a quick look at it. Any questions on this so far? Yes, question. [inaudible] So the question is whether we can use an unsupervised learning setting to learn the different cluster centers and use that as a classification algorithm. It might or might not give the same result as a supervised learning algorithm.

So, suppose this is a collection of points that are given to us, where each green point is a data point x_i in R^d. Here we assume k = 2, and the red X and the blue X are mu_1 and mu_2, randomly initialized. In the first step, for each point x we identify the nearest centroid: we set c_i to the identity of the cluster whose centroid has the smallest L2 distance to that point. So the red dots are those points for which the red X is closer, and the blue dots are those for which the blue X is closer. That is setting the c_i's in the first iteration. Once we have set the c_i's, in the next step we recalculate the mu_j's: we move the red X to the middle of the red circles and the blue X to the center of all the blue circles, which is just taking the mean of all the points of the corresponding color. Then we repeat and assign new cluster identities, re-evaluating all the points to see which new centroid they are closer to. Here, two points that previously belonged to the old blue centroid now get mapped to the new red one. Then we re-evaluate the centroids again, and they move to the centers of their clusters. And once we reach the point where nothing changes in the next iteration, we declare that the algorithm has converged. It is a pretty simple and straightforward algorithm.
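Here is a compact NumPy sketch of exactly this loop: random centroids, then alternating assignment and mean updates until the centroids stop moving. The empty-cluster handling (keeping the old centroid) is my own choice, since the lecture does not address that edge case:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """K-means: alternate nearest-centroid assignment and mean update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, size=k, replace=False)]  # k distinct points as centroids
    c = np.zeros(n, dtype=int)
    for _ in range(iters):
        # Step 1: c_i := argmin_j ||x_i - mu_j||^2 for every example i.
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)  # (n, k)
        c = dists.argmin(axis=1)
        # Step 2: mu_j := mean of the points currently assigned to cluster j.
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):               # centroids stable: converged
            break
        mu = new_mu
    return mu, c
```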
Now, a few natural questions to ask: will this algorithm always converge, and will it always give the same answer? It can be shown that the algorithm does always converge, where convergence in this algorithm has a special meaning. Consider the loss function

J(c, mu) = sum_{i=1}^n ||x_i - mu_{c_i}||^2,

which is also called the distortion function. The K-means algorithm is basically an algorithm for minimizing this distortion function by coordinate descent. What is coordinate descent? You can think of coordinate descent as a variant of gradient descent where, at each step, instead of minimizing the loss with respect to all the variables, we minimize it with respect to only a few variables while holding the others fixed. Step one corresponds to minimizing the distortion function by holding mu fixed and optimizing over c, where we calculate new c's, and step two corresponds to minimizing J by holding the c's fixed and optimizing with respect to mu. So K-means is coordinate descent on the distortion function J, where in one step we optimize with respect to c and in the other with respect to mu, and the results of those optimizations are exactly the closed-form rules above for recalculating the c's and mu's. We say the K-means algorithm converges in the sense that we eventually reach some kind of local minimum of the J function. It may happen that we have minimized J but end up toggling between two alternating sets of mu's and c's once we reach a local minimum, though that happens extremely rarely in practice. We will eventually reach a state where J is no longer decreased; J flattens out, and pretty much all the time in practice that results in a mu and c that do not change anymore. This J is non-convex, which means the mu's and c's we end up with can change from run to run: if you start with a different initialization, you may end up with a different set of mu's and c's. That ties back to a question asked before, about why we would ever need the label identities rather than just performing clustering, and the answer is that this is a non-convex problem, and we can end up with different cluster identities depending on the initialization. Any questions on this? Yes, question: by looking at a function, how do we determine whether it is convex or not? In general, the answer is not always straightforward. It is often easy to show that something is convex by showing it as a composition of convex sub-functions; however, showing that something is not convex is not always that straightforward. Something that may not appear convex at first can sometimes, with some kind of reparameterization, end up being convex. In this case, it happens to be non-convex. Any questions on this? Cool.
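Writing the two steps out as coordinate-wise minimizations of J makes the coordinate-descent reading explicit. In LaTeX notation (a restatement of the updates above, nothing new):

```latex
J(c,\mu) \;=\; \sum_{i=1}^{n} \bigl\lVert x_i - \mu_{c_i} \bigr\rVert^2
% Step 1: hold \mu fixed, minimize J over the assignments c:
c_i \;:=\; \arg\min_{j}\; \lVert x_i - \mu_j \rVert^2
% Step 2: hold c fixed, minimize J over the centroids \mu:
\mu_j \;:=\; \frac{\sum_{i=1}^{n} \mathbf{1}\{c_i = j\}\, x_i}{\sum_{i=1}^{n} \mathbf{1}\{c_i = j\}}
```

Each step can only decrease J or leave it unchanged, which is why the distortion is monotonically non-increasing and the algorithm converges.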
Given this clustering approach, let's move on to something slightly different and also somewhat related: the problem of density estimation. Density estimation generally refers to the problem where we are given some number of data points, say in R, along the x-axis, residing in a continuous space. We assume that these points are sampled from some kind of probability distribution, and because the points come from a continuous space, the corresponding distribution has a density: not a probability mass function but a probability density function. Given these points, the question is: what is the density function from which they were sampled? In general, this is a very hard problem. If you want to fit the data really, really well, the best possible fit would be a density with a sharp spike, like a Dirac delta, over every data point; that is a valid density, but at the same time it does not feel natural. Another equally valid density would be something smoother: nothing to the left, nothing to the right, and some mass wherever there is data. And still other shapes are valid as well. All of these are different possible answers for the underlying density from which the points were sampled. The fundamental problem of density estimation is that the density has to be a continuous function. If these were the outcomes of coin tosses, where the support is discrete, then maximum likelihood would be pretty straightforward: you could treat them as a multinomial and just count. Whereas in density estimation, we need to come up with a smooth function from a finite set of discrete observations.

A common approach in density estimation is to use the model called the Gaussian mixture model, also called the mixture of Gaussians. Given data points that form, say, two clumps, we make the hypothesis that there are two underlying, distinct Gaussian distributions: one Gaussian from which one clump was sampled and another Gaussian from which the other was sampled. Together, taking the mixture of the two Gaussian probability distributions, we can say the entire dataset is sampled from this mixture of two Gaussians. The choice of k, as with K-means, is something we choose by visual inspection, or in general by seeing how well the data fit a given number of components. And the problem we have now is: given this dataset, estimate the two Gaussians from which it might have come.
We are not told the identities of the two Gaussians. If this were a supervised setting, the points would come with some kind of identity, and we could fit one Gaussian here and the other there; and that is exactly what we did in GDA. In GDA, we were told that the x's are sampled from Gaussians and that there are two different classes, class 1 and class 2, and our goal was to take the x's along with their class identities, the corresponding y's, and estimate the mu's and Sigma's for the two classes. In the Gaussian mixture model, we are essentially generalizing GDA: we are not given the y labels, just the x's. We also relax the constraint we had in GDA that the covariances have to be the same; in this case, the covariances can be different. And our goal is to come up with some density p(x) that allows us to assign probability density to the observed values. That is the setting in which the Gaussian mixture model comes into the picture.

Why would we be interested in computing this p(x) at all? There are many reasons; here is one completely made-up example. Suppose you are an aircraft manufacturer, and the parts you manufacture have two attributes; let's call them heat tolerance and power output, whatever that means. If you plot every manufactured part as a point in this plane, you might observe that most of the normal parts fall along some kind of distribution; maybe there are two sub-types of parts, based on the material or something, each falling in its own region; whatever the reason, let's assume that normal-looking parts belong to this kind of probability distribution. Now suppose we want some kind of automated anomaly detection, where we want to detect that some part is faulty. For example, consider a part whose attributes put it away from the main mass of points. We want to identify that this point is faulty, and yet, if you look at either axis alone, it looks pretty normal: just from the heat tolerance point of view it is close to the mean, and if you just look at the power output it is also near the mean. It is the combination that makes it an anomalous example. The way this kind of anomaly detection is carried out in practice is to construct a density estimate p(x) for these points.
This p(x) assigns high probability to anything that falls in the normal region and low probability to anything outside it, and a common approach to this kind of density estimation is to use a mixture of Gaussians. The way we will go about it is this: first, we will construct an algorithm for the mixture of Gaussians based purely on intuition. Then we will describe a general framework called expectation-maximization, re-derive the Gaussian mixture model using this framework, and see that we end up with the same algorithm we got from intuition. Expectation-maximization is a more general framework that works for a broad class of generative models; these are examples of generative models, and the Gaussian mixture model is just one such model that can be solved through expectation-maximization. Yes, question? A student asks how k is chosen: since increasing k fits the data better and better, doesn't someone need to tell us an upper bound on the number of components? So the question, to summarize, is how we find k. It is true that as we increase k, we fit the data better and better. To think about the best value of k, I will leave this as a thought exercise for now, and we will come back to it, probably next week: try to see how you can apply what we learned in learning theory, bias and variance, to this kind of problem. For today and the rest of this week, we are going to just cover more algorithms, and we will come back to model selection later and approach it in a more principled way. For now, as a mental exercise, see how you can apply bias-variance analysis in this kind of setting.

So, the mixture of Gaussians, or the Gaussian mixture model. We are given a training set of just x's. We assume there is a z^(i) that belongs to a multinomial distribution with parameter phi, where phi_j >= 0 and sum_{j=1}^k phi_j = 1, and phi_j is the probability that z^(i) = j. Then x^(i) given z^(i) = j is distributed as a normal distribution with mean mu_j and covariance Sigma_j. So this is describing the model: the way we assume the model works is that first we sample the class identity z from some multinomial distribution, and then, once we have sampled the identity, we generate an observation x, conditioned on the z value we sampled, from the particular Gaussian distribution with mean mu_j and covariance Sigma_j. This is very similar to GDA, right?
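To make the generative story concrete, here is a small sketch that samples a dataset from exactly this process; the parameter values are illustrative:

```python
import numpy as np

def sample_gmm(n, phi, mus, Sigmas, seed=0):
    """Sample n points: z ~ Multinomial(phi), then x | z=j ~ N(mu_j, Sigma_j)."""
    rng = np.random.default_rng(seed)
    k = len(phi)
    z = rng.choice(k, size=n, p=phi)     # latent cluster identities
    X = np.array([rng.multivariate_normal(mus[j], Sigmas[j]) for j in z])
    return X, z                          # z is returned only to illustrate the story

# Illustrative parameters: a two-component mixture in 2-D.
phi = np.array([0.3, 0.7])
mus = np.array([[0.0, 0.0], [4.0, 4.0]])
Sigmas = np.array([np.eye(2), 0.5 * np.eye(2)])
X, z = sample_gmm(500, phi, mus, Sigmas)
```

In the unsupervised problem we get to see only X; the z's are latent, and recovering them softly is what the algorithm that follows does.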
The difference between this and GDA: in GDA we called z by the name y; here we just call it z. That is a common pattern you will see in these algorithms: when a variable is fully observed, which is the supervised setting, we call it y, and when it is not observed, in the unsupervised setting, we call it z. Here there is an underlying z that we do not observe, sampled from some multinomial distribution, and depending on the identity of the cluster that we sample, the observation is then sampled from the corresponding Gaussian distribution with a mean and covariance specific to that cluster. Because the z^(i)'s are not observed, they are called latent variables; latent is just a fancy name for a random variable you have not observed. Yes, question: is phi the class prior? Yes, you can think of phi as the class prior we had in GDA; it just tells us, of all the examples x that we have, what fraction of them belong to cluster j. Good question.

Now, in GDA we performed maximum likelihood estimation, and the objective was log p(x, y; mu, Sigma, phi), the log-likelihood of the full joint. In the Gaussian mixture model we do not observe y, so our objective is to maximize log p(x; phi, mu, Sigma); that is the only difference. In GDA the objective was the full joint distribution; here we would have liked to do the same, but we haven't observed the corresponding y's, which we call z's here. They are not observed, so we cannot construct that likelihood function, because we wouldn't know what value of z to put into the expression. If we had observed them, it would be pretty straightforward; we would just be doing GDA. So instead we maximize log p(x), which can also be written as

log p(x; phi, mu, Sigma) = log sum_z p(x, z; phi, mu, Sigma):

we write out the full joint distribution and marginalize out the latent variable. Any questions about how we went from one to the other? Yes, question: shouldn't z also contribute to our likelihood objective? The answer is that if we had observed z, then yes, it should have; but we don't know what z is, so it cannot. Another question: we are assuming there are k clusters, so shouldn't we account for the cluster identity? We are making an assumption about k, but we don't know which of those k clusters each example belongs to.

All right. So our objective is to maximize this expression. For the rest of today's lecture and throughout, it is useful to set up some terminology. p(z) we will call the class prior, or, in cases where z is not discrete but continuous, just the prior. p(x, z) we will call the model, because it describes the full data-generating process.
The joint distribution always describes the full data-generating process, and that is always what we call the model. z, in unsupervised settings, is called latent, because we don't observe it; latent is just a fancy word for unobserved. p(z | x) we call the posterior. And finally, p(x): p(z) is the prior, and p(x) is called the evidence, because x is what we observe; it is the evidence based on which we perform inference. This is just terminology, and it is pretty standard, used in many papers and books. So our goal is to maximize the likelihood using the evidence.

If we attempted to maximize this log-likelihood directly, the way we did with GDA, by taking derivatives, setting them equal to 0, and solving for the parameters, you will observe that you won't get a closed-form expression; you can try it out. With GDA we got a closed-form expression because we had observed both the x's and the z's, which we called y's. If we had observed both here, we would get closed forms too, but because the z's are unobserved and we are marginalizing over them, working it out does not yield closed-form expressions. So instead, taking inspiration from K-means, we will first come up with an estimate for the z_i's. The algorithm, for now simply inspired by K-means, is: repeat until convergence. The E-step: for each i, j, set

w_ij := P(z_i = j | x_i; phi, mu, Sigma).

And the M-step, where we update the parameters:

phi_j := (1/n) * sum_{i=1}^n w_ij,
mu_j := sum_{i=1}^n w_ij x_i / sum_{i=1}^n w_ij,
Sigma_j := sum_{i=1}^n w_ij (x_i - mu_j)(x_i - mu_j)^T / sum_{i=1}^n w_ij.

The repeat starts after we randomly initialize the parameters mu, phi, and Sigma; think of that the way we randomly initialized the cluster centroids in K-means. Based on the random initialization in K-means, we associated each point with the nearest centroid; the analogous operation here is that for each point we assign a weight to each cluster, specific to that point, where the weight is the posterior distribution P(z = j | x). Given a point, we calculate the posterior probability that the point belongs to each particular centroid; this is just the posterior distribution, and we call it a weight. Once we calculate these weights, we re-weight all our data points to recalculate the corresponding mu's and Sigma's. So, for example, if there are three centroids, k = 3, then for a point x_i the posterior P(z_i = j | x_i) might be
some multinomial distribution like (0.1, 0.7, 0.2), for j = 1, 2, 3. If the centroid mu_2 is close to x_i, it gets the high weight, and the two that are farther away get lower weights. In K-means we made a hard assignment of every point to exactly one cluster; here we make a soft assignment, where every point is assigned to all the clusters in the form of this probability distribution, and the probability assigned to a centroid is higher when the centroid is closer to the point. By closer we mean in a probabilistic sense: if the point has a high likelihood under that cluster's Gaussian distribution, it will have a high posterior probability. We do this soft assignment of every point to the set of all clusters, and using the calculated weights we recalculate the mu's and Sigma's on the weighted dataset. Every point i contributes to every centroid j, and the contribution is weighted by the corresponding w_ij. Questions?

Will that first step have a closed-form expression, where you calculate all the densities and divide by the sum over all of them? Yes. For this, I would remind you that in GDA we calculated a very similar posterior, and if you remember, that posterior had the form of a logistic function. In that case, however, we limited ourselves to two classes and constrained Sigma to be common to both; when we relax those constraints, you will observe that the posterior takes the form of a softmax, a softmax over quadratic features of x. For a small fixed k you can write out the expression explicitly, and in fact you will do this in your homework, so it will be clarified there as well.

So, inspired by K-means, this is a version of the Gaussian mixture model that you can think of as soft K-means; people do call it that. We call it soft because of the assignment phase: in K-means the assignment, which corresponds to the E-step, was a hard assignment. You can think of K-means as calculating a posterior distribution that is always one-hot. And in the equivalent of the M-step, the way we recalculated the mu_j's in K-means used only those x's whose cluster identity matched the corresponding cluster, via an indicator function; here, in place of the indicator, we use the soft assignment weight. Any questions? So you just claimed that if the w's are one-hot, this becomes K-means? Yeah.
If the w's are one-hot, then this essentially becomes K-means. Yes. Question: how do we calculate w_ij if we haven't observed the z_i's? To calculate this, we don't need to know the z_i's; we are just constructing the probability that z_i could be equal to j, and the way we do that is Bayes' rule:

P(z = j | x) = P(x | z = j) P(z = j) / sum_{l=1}^k P(x | z = l) P(z = l).

Here P(x | z = j) is a Gaussian density and P(z = j) is just the multinomial phi_j, and the denominator has the same Gaussian-times-multinomial terms summed over all the classes. And you can show, just the way you showed in Homework 1 that the GDA posterior takes the form of a logistic function, that this takes the form of a softmax; it is a very similar calculation. Any questions?

So this is basically the Gaussian mixture model, where we derived the steps by taking inspiration from K-means. We are intentionally giving the steps the names E-step and M-step, because next we are going to talk about the EM algorithm, derive it in a more principled way, and end up with the same update rules. For now, think of this as soft K-means.
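Putting the E-step (Bayes' rule with Gaussian densities) and the weighted M-step together, here is a sketch of the whole loop in NumPy/SciPy. The initialization scheme and the small ridge added to the covariances for numerical stability are my own implementation choices, not part of the lecture:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, iters=100, seed=0):
    """EM for a mixture of Gaussians, matching the E/M steps above."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    phi = np.full(k, 1.0 / k)                        # mixing proportions
    mus = X[rng.choice(n, size=k, replace=False)]    # init means at random points
    Sigmas = np.stack([np.cov(X, rowvar=False) for _ in range(k)])
    for _ in range(iters):
        # E-step: w[i, j] = P(z_i = j | x_i) by Bayes' rule.
        w = np.column_stack([
            phi[j] * multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
            for j in range(k)])
        w /= w.sum(axis=1, keepdims=True)            # normalize over clusters
        # M-step: weighted re-estimates of phi, mu, Sigma.
        Nj = w.sum(axis=0)                           # soft count per cluster
        phi = Nj / n
        mus = (w.T @ X) / Nj[:, None]
        for j in range(k):
            diff = X - mus[j]
            Sigmas[j] = (w[:, j, None] * diff).T @ diff / Nj[j]
            Sigmas[j] += 1e-6 * np.eye(d)            # ridge for numerical stability
    return phi, mus, Sigmas
```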
The EM framework is so general and so powerful that it has been adapted in many different ways — minor variations on the EM algorithm exist in many different forms. The framework is central, and understanding EM in a deep way will be extremely useful if you are interested in things like deep generative models. Over the last few years there has been tremendous growth in deep generative models — you might have heard of variational autoencoders, generative adversarial networks (GANs), or flow-based models such as Glow. Understanding all of them becomes much easier if you really understand the EM algorithm well, because the EM framework gives you a kind of mind-map where you can place all these different algorithms and understand their strengths and weaknesses, what's common between them, and what's different. So the EM algorithm is one of the key algorithms even for modern deep generative models. Before we jump into EM, we're first going to look at something called Jensen's inequality. Jensen's inequality is a very general probabilistic inequality that's used in very many places in probability theory and applied probability, and it will show up in our derivation of the EM algorithm as well. You can think of it as a probabilistic tool we will use in deriving EM, though by itself it is a generic and commonly used inequality. Let's assume a function f is convex, which means f''(x) >= 0 for all x. And we say that f is strictly convex if f''(x) > 0 for all x. The mental picture to have: convex functions are bowl-shaped functions. A convex function can have zero second derivative in a few places, since it only needs f''(x) >= 0, but in a strictly convex function the second derivative is never exactly zero — it is always greater than zero. So if your function contains straight-line segments over certain input ranges, it can still be convex, but a strictly convex function cannot contain straight lines: it must always be curving upwards. Now, Jensen's inequality tells us that E[f(x)] >= f(E[x]), where f is convex, x is some random variable, and the expectation is taken with respect to the randomness in x. Moreover, if f is strictly convex, then E[f(x)] = f(E[x]) implies that x = E[x] with probability 1. There's a lot of jargon here; we'll dissect it in a moment.
To restate it: Jensen's inequality says that if f is a convex function and x is any random variable, then E[f(x)] >= f(E[x]). And moreover, if f is strictly convex and E[f(x)] = f(E[x]), then it must be the case that x = E[x] with probability 1 — which essentially means x is a constant. What does this mean? To understand it more intuitively, this picture can help. Let f(x) be some function of x, and let x have a probability distribution associated with it — say the green dotted line represents the probability density of the random variable x. The expectation E[x] would be somewhere in the middle; think of it as the Mu if this were a Gaussian. Now, to understand Jensen's inequality, let's draw another picture and assume a discrete setting where x takes only one of two possible values — say 1 and 10 — each with probability one half. Then E[x] = 1/2 times 1 plus 1/2 times 10, which is 5.5. And f(E[x]) is just f evaluated at 5.5 — that's the right-hand side: you evaluate f at the expectation of x and you get f(E[x]). Does that make sense? Similarly, with probability one half f(x) takes the value f(a), and with probability one half the value f(b), where a = 1 and b = 10. So E[f(x)] is the midpoint between f(a) and f(b), because each occurs with probability one half. And that point will always be the midpoint of the chord connecting f(a) and f(b) on the graph of f. What Jensen's inequality is telling us is that this point — the midpoint of the chord connecting two points on f — always lies above the point f(E[x]). So f(E[x]) is always less than or equal to E[f(x)]. Is this clear? Can you raise your hand if you understood this? Some of you have not — okay, can anybody tell me what's still confusing? I can just go over it again. So f is a convex function, which is bending upwards.
The x-axis denotes some random variable, and in this case, just for the purpose of understanding Jensen's inequality, assume x takes one of two values — either 1 or 10 — with equal probability, so E[x] = 5.5. Now, f(1) — call the input a — is this point: the height from the x-axis up to the curve at a is f(a), and similarly at b we have f(b). The expectation of f(x) is then the midpoint of the chord connecting those two points, and f(E[x]) — f evaluated at 5.5 — sits on the curve below it. Jensen's inequality is therefore essentially saying that the chord connecting any two points on a convex function always lies above the function itself: E[f(x)] is higher than f(E[x]). Okay? Understood? Let's move on. It also tells us that if f is strictly convex and E[f(x)] = f(E[x]), then x equals E[x] itself — which means x is essentially a constant. What does that mean? Here's an example of an f(x) that is strictly convex. When can E[f(x)] = f(E[x])? If f is strictly convex, the only way the two can be equal is if x has a probability density like a Dirac delta function — in the dotted line I'm drawing the density of x with all its mass concentrated at just one point. In that case that point is E[x], and f(E[x]) sits right there. And because x always takes on that value with probability 1, f(x) also always takes on that value with probability 1, and therefore f(E[x]) = E[f(x)]. Essentially, the chord connecting the two points has length zero: all the values of f(x) coincide because x always evaluates to the same value. Yes, question? When x is continuous, what is its expectation? So, if x is a continuous random variable with a PDF — call it p(x) — then E[x] is the integral of x times p(x) dx. And the p(x) sits alongside f(x)? Yes — p is the probability density; the green dotted line is p(x) here. And what is E[f(x)]? E[f(x)] is the integral of f(x) times p(x) dx. Good question. Okay, so this is Jensen's inequality.
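To make the two-point picture fully concrete, here is the same example worked with a specific strictly convex function — I'm choosing f(x) = x^2, which is not the curve drawn on the board but makes the arithmetic easy:

\[
\mathbb{E}[x] = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 10 = 5.5,
\qquad
f(\mathbb{E}[x]) = 5.5^2 = 30.25,
\]
\[
\mathbb{E}[f(x)] = \tfrac{1}{2}\cdot 1^2 + \tfrac{1}{2}\cdot 10^2 = 50.5
\;\geq\; 30.25 = f(\mathbb{E}[x]),
\]

with strict inequality, exactly as Jensen's inequality predicts for a strictly convex f and a non-constant x.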
The reason we require f to be strictly convex for the equality case is that if f were not strictly convex, you could have a case where f is flat someplace, and x has all its density over that flat region. Then f(E[x]) and E[f(x)] would both equal the constant value f takes over that region — so E[f(x)] = f(E[x]) even though x is not constant, because f has a flat region somewhere. Yes, question? In the merely convex case, is that the only way equality holds? Right — for equality to hold without x being a constant, all of the mass of x must be distributed in a region where f is flat. Okay. So what are some examples of convex functions? Anybody? y = x squared. Yes — y = x squared is convex, and strictly convex; y = minus x squared is therefore concave, and strictly concave. Another function: f(x) = mx + c, a straight line. By definition a straight line is convex, and it is also concave — but is it strict? No. What about e to the x? Convex, and strictly so; minus e to the x is therefore strictly concave. What about log x? Log x is concave, and therefore minus log x is convex — and it is strict. Cool. Now, how is this useful for expectation-maximization? Yes, question? Why is a straight line always both convex and concave? Because f'' equals 0 everywhere: the definition of convex is f'' greater than or equal to 0, which 0 satisfies, and the definition of concave is f'' less than or equal to 0, which 0 also satisfies. Okay. Now, using Jensen's inequality and these observations, we can adapt it to the concave case. If f is concave — for example f(x) = log x — then the inequality switches direction: E[log x], instead of being greater than or equal to, is less than or equal to log E[x]. This is also Jensen's inequality. So now let's derive the EM algorithm. In the EM algorithm, our goal is to maximize the sum over i = 1 to n of log p(x_i; theta), where theta denotes all the parameters. That's the end goal: we want to maximize log p(x). However, maximizing log p(x) can be hard because the zs are unobserved — if the zs were observed, this would be very easy. That's the setting we're in. For the derivation, I'm going to assume a single example, so I'll just write log p(x; theta) and leave the summation out.
But the entire derivation we're about to do goes through with the summation included — leaving it out just simplifies notation. So: we want to maximize log p(x; theta). The first thing we do is write log p(x; theta) = log of the sum over z of p(x, z; theta) — first, we marginalize out z. Then we define some arbitrary probability distribution q over the zs and write this as log of the sum over z of q(z) times p(x, z; theta) / q(z), where q(z) > 0 for all z. q can be any probability distribution over z whatsoever, as long as q(z) is positive everywhere. Yes, question? Why is this a hard problem? It is hard because of the summation over z here — and in cases where z is continuous, this would be an integral, and that integral can be arbitrarily complex. Do you mean computationally expensive? It can be computationally expensive, and it can be analytically impossible in cases where we want an analytical solution. Good question. So q can be any distribution with q(z) > 0 for all z, and now we can see that this can be written as the log of an expectation. What did I do here? Nothing, basically — this is just the definition of expectation: q(z) is a probability distribution, the ratio is some function of z, so the weighted sum is the expectation of that function under q. Is this clear? Okay. And now we make use of Jensen's inequality and note that the log of the expectation of something is greater than or equal to the expectation of the log of the same thing. So this is greater than or equal to the expectation over z drawn from q of log(p(x, z; theta) / q(z)). Yes, question? f here is log, and log is concave, and the ratio inside is our random variable. Any questions on how we went from one line to the next? This is probably the most crucial step. All good? Okay. And this quantity we will give a name: the ELBO, the evidence lower bound — ELBO(x; Q, theta). Jensen's inequality tells us that the ELBO, the term we just defined, is always less than or equal to the objective we want to maximize. Which means that if we find thetas and Qs that maximize the ELBO, then implicitly, for the same values of theta, log p(x) also goes up. Does that make sense? The ELBO is, by Jensen's inequality, always a lower bound on log p(x), our log-likelihood; both of them have theta in them. If we find values of theta that maximize the ELBO, it necessarily means that log p(x) at that value of theta is higher. Yes, question? We'll come to that.
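Collecting the steps of the derivation just given in one place:

\[
\log p(x;\theta)
= \log \sum_z p(x,z;\theta)
= \log \sum_z q(z)\,\frac{p(x,z;\theta)}{q(z)}
= \log \, \mathbb{E}_{z\sim q}\!\left[\frac{p(x,z;\theta)}{q(z)}\right]
\;\geq\; \mathbb{E}_{z\sim q}\!\left[\log \frac{p(x,z;\theta)}{q(z)}\right]
\;=:\; \mathrm{ELBO}(x; q, \theta),
\]

where the inequality is Jensen's inequality applied to the concave log.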
All right, so this is the ELBO, and it's a term you will very commonly encounter if you read research papers about generative models or deep generative models — it's a widely used term, and it means exactly this: the lower side of Jensen's inequality applied to log p(x). Before we get to our goal, let's make a few more observations. log p(x) >= ELBO at all times — that's what Jensen's inequality says. But are there cases when log p(x; theta) is exactly equal to ELBO(x; Q, theta)? The answer is yes, because of the second part of Jensen's inequality that we saw. We saw that if f is strictly convex, then E[f(x)] = f(E[x]) if and only if x is a constant — and correspondingly, since log is strictly concave, our inequality becomes an equality if and only if the term inside the expectation is a constant. Yes, question? In this case, we want this entire ratio term to be a constant. So the next question is: under what circumstances is that term always a constant? That's what we answer next. It has to be independent of z — constant with respect to z. So, in order to make this inequality an equality, because log is strictly concave, we need p(x, z; theta) / q(z) = c for some constant c. This implies q(z) = (1/c) times p(x, z; theta), and since c is just a proportionality constant, we can write q(z) proportional to p(x, z; theta). To turn the proportionality into an equality, we compute the normalizing constant: q(z) = p(x, z; theta) divided by the sum over z of p(x, z; theta). The denominator, when you marginalize out z, is just p(x; theta), and so q(z) = p(x, z; theta) / p(x; theta), which is p(z | x; theta). Question: why did we normalize by p(x)? Because q(z) is proportional to p(x, z), and we know q is a probability distribution, which means it must sum to 1 — so the normalizing constant must necessarily be the sum of the numerator over all z. Was there another question? Is the denominator a probability distribution over x and z? The denominator is the numerator summed over all possible values of z. q(z) is a distribution over z that must sum to 1 — p(x, z) could be anything, and x could be continuous — and q(z) is proportional to p(x, z).
And the corresponding normalizing constant must necessarily be the sum of the numerator over all possible values of z, because the whole thing must sum to 1. So: when q(z) = p(z | x; theta), Jensen's inequality turns into an equality. Lots of moving parts, yes — so let's recap. We started with log p(x) and wrote it out as the sum over z of the joint; nothing fancy is going on there, we're just marginalizing out z. Then we multiplied and divided by some arbitrary distribution q, which let us write the sum in the form of an expectation — q in the role of the probabilities, the ratio in the role of the function. Once we wrote it as an expectation, we had a log of an expectation; since we started with the log-likelihood, we used the concave version of Jensen's inequality: the log and the expectation get swapped, and we get a greater-than-or-equal-to relation — expectation of log. That gives us the two sides of Jensen's inequality, and the lower side we simply name the ELBO. Then we used the corollary of Jensen's inequality to look for conditions under which the inequality is exactly an equality. Because log is strictly concave, the corollary gives us the condition: the term inside must be a constant with respect to z, and for that it is necessarily the case that q(z) equals the posterior distribution of z given x. Whenever q = p(z | x; theta) at that value of theta, Jensen's inequality becomes an equality. Now, given these two facts — yes, question? Can you show once again why q(z) equals the posterior? Sure. We want the condition that p(x, z; theta) / q(z) equals some unknown constant c. Take q(z) to the other side: p(x, z; theta) = c times q(z), so q(z) is proportional to p(x, z; theta). We also know that the sum over z of q(z) equals 1, which means the sum over z of the right-hand terms must equal c itself — that's c. Divide through by c, and exactly: q(z) equals the posterior of z given x at that value of theta. Thank you, thanks for asking. Okay — based on this, we write the EM algorithm, or the more general form of the EM algorithm. We call it more general because throughout this derivation we have not assumed any specific form for p(x, z): it could be the mixture of Gaussians, it could be anything — the derivation holds for any such latent variable model. So that gives us the algorithm. The EM algorithm: in the E-step, for each i, set Q_i(z_i) = p(z_i | x_i; theta).
And in the M-step, set theta equal to the argmax over theta of the sum over i = 1 to n of ELBO(x_i; Q_i, theta). So what did we do? We get an EM algorithm where the E-step sets Q to be the posterior distribution p(z | x; theta), and the M-step sets theta to the argmax of the ELBO. Now why will this work? To see why, consider this diagram, where the horizontal axis is theta — not x. As we vary theta, log p(x; theta) takes different values; this curve is a likelihood, not a density, because the x-axis is theta. It's drawn as a dotted line because we don't know it — it is hard to calculate, since we would have to marginalize out z, which may be an intractable integral. What the ELBO gives us is that for any given choice of Q, ELBO(x; Q, theta), as a function of theta, always lies at or below log p(x; theta) — that's what Jensen's inequality gave us. So here is one possible ELBO(x; Q, theta), and here is another: for different choices of Q, we get different lower bounds on log p(x; theta). And what the corollary tells us is this: for a given value of theta — say our randomly initialized theta^0 — if we choose Q to be p(z | x; theta^0), call it Q^0, then the ELBO constructed from Q^0 equals log p(x; theta) at theta = theta^0. The ELBO touches log p(x) at theta^0: when we are at theta^0, if we choose to construct the ELBO using Q equal to the posterior at that parameter value, then the ELBO is tight with respect to the invisible objective we're trying to maximize, at that value of theta. Now, in the M-step, we choose a new theta that maximizes this ELBO — call it theta^1, the theta for the next iteration. Then we construct yet another ELBO, this time using Q^1, the posterior at theta^1, and this new ELBO is tight against log p(x) at theta^1. In each round, by choosing Q to be the posterior at the corresponding theta, the ELBO we get touches log p(x) at that theta. So we start at theta^0, maximize the ELBO — that's the M-step — get theta^1, construct the new ELBO, maximize that one to reach theta^2, construct yet another ELBO that is tight at theta^2, and so on.
If we repeat this over and over — constructing a new lower bound at each value of theta and maximizing that lower bound — then we eventually reach a local optimum where the algorithm converges, meaning theta stops changing. And that's essentially what the EM algorithm does. So this is the rough visual intuition for how EM works; in the next class we'll go through a proof that it actually converges, rather than just drawing pictures. Yes, questions? How do you compute the ELBO if you can't compute p(x) [inaudible]? Yes, exactly — so how do we compute the ELBO? The ELBO is exactly the expectation we wrote down: it involves only the joint p(x, z; theta), not the marginal. In the next class we'll see an example of applying this to Gaussian mixture models, where it will be more concrete; for now, for the purpose of this lecture, it's enough to have this abstract view of how the EM algorithm works in general. Is there any other question? Yes, question? Is theta given as z equals theta transpose x — is that how the algorithm works? No — theta here is some unknown parameter of the model. [inaudible] No, we don't make any linearity assumption, of course.
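To close the loop on the picture above, here is a minimal generic EM skeleton in Python. It is purely illustrative and the function names (e_step, m_step, log_likelihood) are hypothetical stand-ins for the model-specific computations — for a GMM they would be the updates sketched earlier. Note that log_likelihood is used here only to monitor convergence; when the marginal is intractable, the ELBO itself can be monitored instead:

```python
def em(x, theta, e_step, m_step, log_likelihood, tol=1e-6, max_iters=1000):
    """Generic EM loop: alternate tight lower bounds and maximization.

    e_step(x, theta)          -> q, the posterior p(z | x; theta)
    m_step(x, q)              -> argmax over theta of the ELBO for this fixed q
    log_likelihood(x, theta)  -> log p(x; theta), for convergence monitoring
    """
    prev = -float("inf")
    for _ in range(max_iters):
        q = e_step(x, theta)      # makes the ELBO tight at the current theta
        theta = m_step(x, q)      # pushes the ELBO (and hence log p) upward
        cur = log_likelihood(x, theta)
        if cur - prev < tol:      # log-likelihood is monotonically non-decreasing
            break
        prev = cur
    return theta
```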
Welcome back to the third lecture of CS229. In the first two lectures, we've mostly been going over the course prerequisites: linear algebra and probability. To recap Wednesday's lecture: we went over the concept of the determinant and its geometrical interpretation, and we went through two different kinds of decomposition of a matrix, the eigenvalue decomposition and the singular value decomposition. After that, we quickly went through some matrix calculus — the different ways in which linear algebra plays a role in calculus when we are dealing with multivariate functions. And after that, we switched gears into probability to review some basics of probability theory. Today we will finish up the review of probability theory, spend a couple of minutes on some basics of statistics and why statistics plays a role in machine learning, and after that we will jump right into our first learning algorithm, linear regression. That's the plan for today. Any questions about what we've covered so far before we get started? All right. So, we were reviewing probability; let me go back a few slides to touch on a few important concepts. We covered the sample space: samples are the outcomes of random experiments, and events are subsets of the sample space. We assign probabilities to events — not to individual random outcomes, but to sets of outcomes — and those are the axioms of probability. We went over the meaning of independence with respect to events: one event is independent of another if the probability of their intersection is the product of their individual probabilities. And then we discussed the concept of a random variable. A random variable maps the space of outcomes — which could be anything: a string of heads and tails, the color of a die, anything whatsoever — onto the real line, and that's when we can start doing mathematics with randomness. For example, here we have a random variable that takes as input an outcome, which in this case is a string of 10 heads-or-tails coin flips — just a string of symbols, not a number — and counts the number of heads in that sequence of 10 coin flips, mapping the outcome to a real number. And by the values of X, we mean the set of all possible real values that the random variable can map any of the outcomes into. A tiny code sketch of this example follows.
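Here's that example as a minimal, purely illustrative sketch — the sample space of 10-flip strings and a random variable X that counts heads (the names are mine, not from the slides):

```python
import itertools

# Sample space: all 2^10 outcomes, each a string of 'H'/'T' symbols (not numbers).
omega = ["".join(s) for s in itertools.product("HT", repeat=10)]

# A random variable maps outcomes onto the real line; here X counts heads.
def X(outcome):
    return outcome.count("H")

print(X("HHTHTTTHHH"))   # 6 -- an outcome (a string) mapped to a real number
```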
Then we spoke about the cumulative distribution function. Previously we saw that a probability measure is defined on events; but once we map outcomes to the real line — once we define a random variable — the probability measure induces a kind of probability measure on the real line, and that can be captured by the cumulative distribution function. It is the probability assigned to the set of all outcomes that get mapped to a value less than or equal to the desired threshold. So this is how a CDF looks for a continuous random variable: the height of the function measures the amount of probability assigned to the set of all outcomes that map to a value at or below the threshold. The P here is measuring an event — a set of outcomes — defined as the pre-image under the random variable: the random variable maps outcomes to the real line, and the pre-image is the set of all inputs that get mapped to a value less than or equal to the desired threshold. That gives you a probability, a value between zero and one, which is why the values of the CDF lie between zero and one. We then spoke about discrete versus continuous random variables. Discrete variables have a probability mass function, and continuous variables have a probability density function. A CDF exists for all random variables, discrete or continuous, but the density function exists only for continuous distributions, and it is basically the derivative of the CDF. It's important to note that the value returned by the density function is not a probability. What I mean by that is: suppose the density has some height at x = 0.3. The probability that x equals 0.3 is not equal to that height — for any continuous random variable, the probability that it takes one specific value is always 0. That may sound a little counter-intuitive if you're hearing it for the first time, so let me say it again. If we have a discrete random variable with a probability mass function over, say, 1, 2, 3, then the probability that it takes the value 3 is given by the height of the probability mass function. But the probability density function is fundamentally different: its height at a point is not the probability that the random variable takes on, for example, the value 0.3. Probabilities for continuous random variables are only defined on intervals, or sets of intervals: the area under the probability density function over a given range of the input is the probability that the random variable takes a value in that range. Here's a quick numeric illustration of the distinction below.
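For instance, for a standard normal — a small sketch using SciPy; the 0.3 value just echoes the example above:

```python
from scipy.stats import norm

# The density at a point is NOT a probability...
print(norm.pdf(0.3))                      # ~0.381, just a density height

# ...probabilities come from areas under the density (CDF differences):
print(norm.cdf(0.35) - norm.cdf(0.25))    # P(0.25 <= X <= 0.35), ~0.038

# ...and the "probability" of hitting exactly 0.3 is a zero-width area:
print(norm.cdf(0.3) - norm.cdf(0.3))      # 0.0
```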
But the probability that a continuous random variable takes any specific value is always zero, okay? Yes, question? If we don't know what value the continuous random variable is going to take, how can the probability of any value be zero? Let me rephrase that to check I understood: we don't know what value the variable will take, so how can you define the probability to be zero? Well, there are two kinds of interpretations of probability. One is looking historically at how the values of your random variable have been distributed, and based on that you can estimate some kind of density. But what the statement actually means is that a continuous random variable can take an uncountably infinite number of possible values, and it has no point masses: any single value among those uncountably many has zero probability of occurring exactly. But if you take a range, you get a positive probability of the value falling in that range. Okay, so that's discrete versus continuous probability. And then we started talking about this very important concept called expectation. Expectation is a concept associated only with a random variable. We spoke about outcomes, and we spoke about events, and we defined a probability measure on events; if we limit ourselves to just those concepts, the concept of expectation does not come into the picture. Once we define a random variable, it does. Drawing the sample space as a squiggly region — since it's not a linearly ordered line — assume some random variable x(omega) maps outcomes to the real line; you could have another random variable y(omega) which maps the same sample space to the real line in a different way. Only after we define the concept of a random variable does expectation come into the picture. An informal way to think about expectation is: what value does this random variable take on average? We have a function that maps outcomes to the real line, and generally we flip this axis over and lay the vertical line flat. Now suppose a function g is defined on top of this random variable — some function g of x.
The values on that line come from outcomes that got mapped onto the real line, and the expectation is now asking: what is the average value that g will return? The average value g returns depends on what values we feed into x — and for the expectation, informally, we feed values into g according to the distribution induced by x. Events happen; you get a sampled outcome, map it to x through the random variable, feed that value of x into g, and you get some value. Repeat this experiment over and over: different outcomes map you to different values of x, and different values of x map you to different values of g(x). You're recording the various values of g and asking: what's the average value of g in general? And you can ask about averages only because things are now on the real line — on the sample space there is no such thing as an average. [LAUGHTER] You have a die with six colors — what's the average color? It's not meaningful. Once you map outcomes to numbers, you can ask what the average value is. For a discrete random variable, the expectation is defined as the sum over all possible values x can take of g(x) times the probability that x takes that value. It's just a weighted sum of the different values of g(x), weighted by the probabilities with which x occurs — a very intuitive definition of expectation. Similarly, for continuous random variables, the summation is replaced by an integral: you integrate over all possible values of x, and for each value you use the density — not the probability, the density — which is why we have an integral instead of a sum. Yes, question? Is g(x) like a distribution of x? No — x is the random variable, and g(x) is just some function of x: g takes a real value as input and outputs a real value. In this case the input to g is the random variable, which means g(x) takes on different values according to the outcome of the random experiment. Okay, so that's the expectation of g(x). And we saw an important concept in the last lecture: you can approximate E[g(x)] by taking random samples of x and calculating the average of g(x) over that sample — that's called the Monte Carlo estimate of the expectation. And as the number of samples tends to infinity?
As you estimate the expectation with more and more samples, the Monte Carlo estimate converges to the true expectation — that's basically the law of large numbers. Any questions about that? Okay. So that's what we covered on Wednesday, and today we continue further. Now we talk about variance. The variance of x is defined as the expectation of (x minus E[x]) squared. What's happening here? Suppose x has some distribution — say this curve is the probability density of x. The expectation is the point that acts like the center of gravity: if you took an actual physical object of this shape and tried to balance it at the expectation, the shape would stay balanced. The variance is trying to measure how spread apart the random variable is around its mean. There is also another concept, the median of the distribution: the median is the point that divides the probability mass into equal masses on either side. For a symmetric distribution, the median and the expectation are the same. However, for a skewed distribution — say one with a long tail on one side — the median and the expectation can differ, because the point where you balance a mass is not necessarily the point with equal mass on both sides: mass spread far out on one side has more leverage (something like a moment of inertia), so it pulls the mean toward the tail, while the median only splits the mass in half. And the variance measures how spread out this is: you could have a distribution with a given expectation and a small variance, or one with the same expectation and a large variance. There are two equivalent expressions for the variance — E[(x − E[x])²] and E[x²] − (E[x])² — and showing their equivalence is pretty simple; it's in the notes. (A quick Monte Carlo check of both the expectation and the variance follows below.) And here are some examples of various distributions and their parameters. Parameters — this is the first time we're talking about parameters. What's a parameter?
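As promised, a small sketch of the Monte Carlo estimates just described; the choice of g and of the sampling distribution here is mine, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)  # samples of x ~ N(2, 9)

g = lambda t: t ** 2

# Monte Carlo estimate of E[g(x)]; for this g the truth is mu^2 + sigma^2 = 13.
print(np.mean(g(x)))

# The two equivalent variance expressions: E[(x - E[x])^2] and E[x^2] - (E[x])^2.
print(np.mean((x - x.mean()) ** 2))
print(np.mean(x ** 2) - x.mean() ** 2)   # both approach sigma^2 = 9
```

By the law of large numbers, all three printed estimates get closer to the true values as the sample size grows.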
When we talk about distributions, there are two concepts. One is the space over which the distribution is defined — for a random variable, the real line. The other is the parameters: a parameter is a number that summarizes the shape of the distribution. For example, a Gaussian random variable — also called the normal distribution or the bell curve — has two parameters: on the real line, the Gaussian PDF is centered at Mu, its mean, while Sigma, the standard deviation, controls its width. So a Gaussian distribution is summarized by a mean Mu and a variance Sigma squared, where Sigma is the standard deviation. Similarly, there are many different distributions, and each has its parameters — some distributions have just one parameter, some have two, some have three — but most of the distributions we'll be interested in in this course have one or two parameters. You can also look at two random variables jointly. If you have a variable X and a variable Y, together they have a joint CDF: the probability that X is less than the x threshold and Y is less than the y threshold — think of the comma as the logical "and". Similarly, when X and Y are discrete, there is a bivariate probability mass function: the probability that X takes the value x and the random variable Y takes the value y (in the continuous case shown earlier, they were continuous). Given the joint PDF of two random variables, you can construct their marginals. If you have two random variables X and Y, the distribution that captures the most information about them is their joint distribution — written, assuming they are continuous, as p(x, y). The marginal distributions of the two random variables are written p(x) and p(y), and the way you get them is what's called marginalizing out — summing out, or integrating out — the other variable: p(x) is the sum over y of p(x, y) in the discrete case, or the integral of p(x, y) dy in the continuous case; and symmetrically, you get p(y) by summing or integrating out x. The way to think about the joint PDF or joint PMF is that it has to satisfy the normalization property: if you integrate out both variables, the joint PDF has to integrate to one, and if you integrate out only one variable, you get a distribution with respect to the other variable.
And that's called the marginal distribution. Given these concepts of the joint and the marginal, we can define Bayes' theorem. Bayes' theorem is probably one of the most important theorems in probability theory — it's going to show up all over the place, especially in this course and even beyond. It gives you the relation between the conditional distribution, the joint distribution, and the marginal distribution, and it looks like this. You can write p(x, y) as p(x) times p(y | x) — the joint, the marginal, and the conditional. The way to remember this is: the joint is the product of the marginal and the conditional. But you can also write it as p(y) times p(x | y). Decomposing the joint into these two parts is called the chain rule — there's a chain rule in calculus, and there's a chain rule of probability theory. The chain rule tells you that for any pair of random variables, you can decompose the joint into a marginal times a conditional. A trivial consequence of the chain rule is Bayes' theorem: equate the two decompositions, p(x) p(y | x) = p(y) p(x | y), and assuming p(x) is never 0, divide both sides by p(x), which gives p(y | x) = p(y) p(x | y) / p(x). So that's Bayes' theorem — a very simple consequence of the chain rule: apply the chain rule in two different ways, equate them, and divide through by one of the marginals. (Here we're writing it with p; you may also see it written with f for densities — either works.) Another common way to write Bayes' theorem is to keep the numerator as it is, p(y) p(x | y), and expand the denominator: instead of p(x), write the sum over y' of p(y') times p(x | y'), assuming y is discrete. These two are the same — why? Because p(y') p(x | y') is the joint p(x, y'), and marginalizing y' out of it gives you p(x). You're going to use this form of Bayes' theorem in this course as well. Any questions? (Some of you sitting over here may not be able to read this because the podium is in the way, so feel free to move over.) An example of Bayes' rule — let's skip over the slide, but there's a small numeric sketch below.
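Here's a tiny numeric check of marginalization, the chain rule, and Bayes' rule on a made-up 2x2 joint distribution — the numbers are mine, chosen only so that everything sums to 1:

```python
import numpy as np

# Joint PMF p(x, y) over x in {0, 1} (rows) and y in {0, 1} (columns).
p_xy = np.array([[0.10, 0.30],
                 [0.20, 0.40]])

p_x = p_xy.sum(axis=1)             # marginal p(x): sum out y
p_y = p_xy.sum(axis=0)             # marginal p(y): sum out x

p_y_given_x = p_xy / p_x[:, None]  # conditional p(y | x), via the chain rule
p_x_given_y = p_xy / p_y[None, :]  # conditional p(x | y)

# Bayes' rule: p(y | x) = p(y) p(x | y) / sum_y' p(y') p(x | y').
num = p_y[None, :] * p_x_given_y
bayes = num / num.sum(axis=1, keepdims=True)
print(np.allclose(bayes, p_y_given_x))        # True: both routes agree

# Independence would require p(x, y) == p(x) p(y) everywhere:
print(np.allclose(p_xy, np.outer(p_x, p_y)))  # False for this joint
```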
Next: independence. The independence of two random variables is very similar to the independence of two events, but they are distinct concepts — we spoke about the independence of random events a few slides ago, and here we're talking about the independence of two random variables. It is defined like this: the joint probability is equal to the product of the marginal probabilities, or the joint density is equal to the product of the marginal densities — p(x, y) = p(x) p(y). So if two random variables are independent, then p(y) must be equal to p(y | x). We saw something very similar for the independence of random events. Yes, question? If two random variables are independent, can we say that the events they are defined over are also independent? Yes — it must be the case. Yes, question? Can you go back a slide — is this defined [inaudible]? This is defined when f(x) is not equal to 0. In the discrete case, the conditional of y given x is itself not defined if the probability of x is 0; assuming the conditional is defined, Bayes' theorem tells you that you can represent it like this. Does that make sense? The conditional p(y | x) is defined only when x has non-zero probability. Can you explain the interpretation of f(y | x) — what does it actually mean? It is the conditional distribution of y given that x takes on some given value. Independent random variables are super important because they let you decompose the joint probability into just the product of the marginals, and that independence is what makes a lot of machine learning theory easy to work with: we're going to be making assumptions about your training examples — thinking of them as random variables — being independent of each other. So the concept of independence is very important. Let's skip the next slides: this is the expectation of two random variables, and you can go over it on your own. Just like the concept of variance, there is the concept of covariance when we're talking about two random variables. The variance of x, as we saw a few slides ago, can be written as E[x²] minus (E[x])². Similarly, the covariance of x and y is defined as E[xy] minus E[x] times E[y], which tells you that the covariance of x with itself is just the variance of x. Okay? All right — the multivariate Gaussian. This is one of the most commonly used distributions defined over a collection of random variables, and it's going to show up a whole lot in this course.
And hopefully, toward the end of the course, if you're woken up in the middle of your sleep and asked what the probability density of a multivariate Gaussian is, you'll be able to say it instinctively, right? It looks a little scary at first, but let's dissect it a little bit. So x is in some n-dimensional space, and the density of x given two parameters, Mu and Sigma, is defined as 1 over (2 pi) to the n over 2 times the determinant of Sigma to the power half, times the exponent of minus half (x minus Mu) transpose Sigma inverse (x minus Mu). Wow. So first of all, let's see what's happening here and add some color to it. x is the space over which we are defining the probability density, and we see that x appears only here, inside the exponent. And you have two parameters, Mu and Sigma. Sigma is called the covariance matrix and Mu is the mean. The covariance matrix shows up over here. And then we have the mean — maybe I'll use black for this. Mean, mean, mean. All right. So this is the joint distribution of a multivariate Gaussian, and we right away recognize a few things that we've seen in our linear algebra review, right? First of all, Sigma is a matrix, called the covariance matrix, and it turns out to be positive semi-definite. Any covariance matrix, for any joint distribution, no matter what the distribution is, is always positive semi-definite. And here we also take it to be full rank, which means the inverse will exist. Right? And we recognize a few more things. This is the determinant of the matrix — that's the notation for the determinant — and it's like the square root of the determinant. And we also see this form over here. Does anybody recognize this form? [BACKGROUND] This is the quadratic form. x minus Mu is some vector, and what we're doing is (x minus Mu) transpose times some matrix times (x minus Mu). So this is the quadratic form that we also spent some time on, right? So all the review that we did with linear algebra is going to apply here shortly. And this term over here is just some constant — it involves no mean, no data, no covariance as a variable. The purpose of this 1 over (2 pi) to the n over 2 is only to make the integral of this with respect to x equal to 1. It's just a normalizing constant. Yes, question? [BACKGROUND] Yeah. [BACKGROUND] Exactly. So the question is, if this is a multivariate distribution, what does the integral with respect to x mean? It means exactly what you say: you're integrating out all of x_1 through x_n. Can you hear me? Okay. So the integral would look something like this: the integral of p of x given Mu, Sigma, times dx_1, dx_2, and so on — a double integral or a triple integral, or higher. Right.
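Here is a minimal NumPy sketch of that density, 1/((2π)^(n/2) |Σ|^(1/2)) · exp(−½ (x−μ)ᵀ Σ⁻¹ (x−μ)); it assumes Σ is positive definite so the inverse and determinant are well behaved (scipy.stats.multivariate_normal does the same thing more robustly):

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Density of a multivariate Gaussian N(mu, sigma) at point x."""
    n = x.shape[0]
    diff = x - mu
    quad = diff @ np.linalg.inv(sigma) @ diff          # quadratic form
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma))
    return np.exp(-0.5 * quad) / norm

mu = np.zeros(2)
sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
print(gaussian_density(np.array([0.0, 0.0]), mu, sigma))
```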
So this is the multivariate Gaussian density, and we're going to be working a whole lot with it, so you'll get familiar with it if you're not already. Okay? And here are some intuitions about what the mean and covariance matrix of a multivariate Gaussian look like. What we have on the left are particular instances of Mu and Sigma. The figure in the center is a plot of the probability density function — here it's a two-dimensional Gaussian — and the figure in the rightmost column is what's also called a contour plot or a heat map, where a lighter color, a shade of orange, means that region has a higher value, and blue means it has a low value. So imagine looking at the plot in the middle directly from the top and using different colors to represent different heights; that's the plot on the right. Right? So let's get a feel for what the different parameters mean in terms of the probability density. Mu equal to 0 means the distribution is going to be centered around (0, 0); we see that over here, and the peak of the distribution is over (0, 0) in this case. And the diagonals of the covariance matrix are the variances of the marginals of x1 and x2. What this actually means is that if you just look at x1 over here, it's going to have a standard deviation of 1 — that looks roughly correct — and if you imagine this to be the x direction, x2 also has a standard deviation of 1. And the cross entries in the covariance matrix are the actual covariances between those two random variables. A covariance of 0 here roughly means there is no strong relation between x1 and x2, which is why the density takes a spherical shape. Moving on, here's another example where the covariance matrix has smaller diagonal elements, which means the distribution is still centered around 0, but it is more concentrated — it has a smaller variance, which means it is more peaked compared to the previous distribution. And in terms of the contour plot, the circles just appear smaller. And similarly, if you increase the variance of your distribution while the covariances stay zero — so it is still spherically shaped — the probability density looks more flattened, and the concentric circles are more expanded. Okay? Let's look at a few more examples. Over here, this is still the first picture. Now let us scale x1 to have variance 0.6 and keep x2 at variance 1. So x1 is now more compressed, but x2 still maintains the same variance. You can see from the contour plot that it is more vertically aligned. The projection of the slides might make it look a little different — in the picture over there it looks like a circle — but it's supposed to be compressed along x1 and longer along x2.
Now, instead of shrinking x1, if we expand x1, you see that it's stretched more widely. Right? But there is still no correlation between x1 and x2, so you don't see any particular tilt there. Now let's add some correlation. In this case, x1 and x2 still each have a variance of 1, but now we've added a covariance of 0.5, which means that as x1 increases, x2 also tends to increase. That's the general idea: larger values of x1 and larger values of x2 tend to co-occur. If you increase the covariance further, the relation becomes even tighter, even stronger, right? The marginals in this case still have variance 1 and 1, which means that if you were to integrate out x1, you would get a probability density of mean 0, variance 1 on x2, and similarly, if you were to integrate out x2, you would still get a normal curve on the x1 axis. Right? So that's some intuition for how a multivariate Gaussian looks; with two dimensions it's easy to visualize and get intuition. On the next slide, if the correlation is negative, what this means is that larger values of x1 tend to co-occur with smaller values of x2. So you see a reverse relation: x1 going up and x2 going down. And if you again make it negative with a bigger magnitude, you see a stronger correlation between x1 and x2, but in the reverse direction. Similarly, for a given covariance matrix, you can use a different Mu, which means the covariance matrix decides the shape and orientation of this function, and Mu tells you where to place it, where to center it, right? So this is basically the same bell shape but placed at x1 equal to 0 and x2 equal to 0.5 — in terms of x2, it's moved up a little bit. And this is another example where the center of the distribution is at x1 equals 1.5 and x2 equals minus 0.5; that's where the center is, and the covariance matrix decides the shape and orientation. Right? Any questions? Yes, question? [BACKGROUND] So the question is, this is with respect to two variables — what if we have more variables? Is that the question? [BACKGROUND] Yeah, so if you have a three-dimensional Gaussian, then you would have a three-by-three covariance matrix, and each of the pairwise covariances will be the value in the corresponding cell. All right. Okay, so conditional probability — we went over this already. So here's a useful identity, which is the conditional expectation. The conditional expectation is another important concept in probability, and it is defined like this. And the conditional expectation is actually somewhat subtle, right?
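A quick way to reproduce these pictures yourself is to evaluate the density on a grid and draw contours; this is a minimal matplotlib sketch, where the particular Σ values are just examples mirroring the slides:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.dstack([xs, ys])

# Try different covariance matrices: spherical, compressed along x1, correlated.
for sigma in [np.eye(2),
              np.array([[0.6, 0.0], [0.0, 1.0]]),
              np.array([[1.0, 0.5], [0.5, 1.0]])]:
    density = multivariate_normal(mean=[0, 0], cov=sigma).pdf(grid)
    plt.contourf(xs, ys, density)
    plt.gca().set_aspect("equal")
    plt.show()
```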
So think of the conditional expectation E of X given Y, where X and Y are random variables, as itself a random variable. The expectation of X is a constant, right? X is random, Y is random, but the expectation of X is just a number; it is not random. The conditional expectation of X given Y, however, is a random variable — a random variable defined over the space of Y. On the other hand, the conditional expectation of X given Y equals small y is a function of small y, right? The conditional expectation of X given Y — where you're just placing the random variable over here — is another random variable. It doesn't look like it, but E of X given Y is another random variable, whereas E of X given Y equals some specific small y is a function of that specific y. Yes, question? [BACKGROUND] These two are not the same, in the sense that this one is still a random variable — this whole thing is a random variable — and this whole thing is a function of small y, right? This is a subtle concept and you may want to go back to your probability book and go through it again. This can be relevant in some of your homeworks, so you want to have a good, clear understanding and view these as distinct things. The conditional expectation of X given Y, where Y is still a random variable, is another random variable. Basically, you're feeding a random variable Y as input, this becomes a function of Y, and therefore it's still a random variable. You can think of it that way. Yes, question? [BACKGROUND] Exactly. So the question is, if this is a random variable, can we apply all the rules that we've seen so far to it? Exactly — you can on this one, but you cannot on the other one. Yes, question? So is the random variable going to be defined [inaudible] as only a function of X and Y? So over here, the random variable is defined over the space of Y only. Yes, question? [BACKGROUND] Random variables are functions already. This is a function of small y. But this one is a random variable in the sense that it is a function over your sample space; you can think of it that way. [BACKGROUND] This is just over the sample space of Y alone. [BACKGROUND] Maybe the next slide will make it a little more clear, so hold your question until the end of the next slide. And was there one more question? Yes, question? [BACKGROUND] Yeah. So in the case of a Gaussian, you can think of it like this. Assume this is the contour plot of a Gaussian centered at, let's say, plus 1 and plus 2, and let's call the axes X and Y. So the expectation of X given Y equals, say, some small value y — and assume the small value y is this, right?
What it basically tells us is: if you limit yourself to this slice of the Gaussian, that slice is going to have some kind of bell shape, and what's the expectation over there? Yes, question? Did you say it's a function of small y? So this is a function of small y, expectation of— But here you said it's a function of big Y? That's a typo, that should be small y, sorry. Yeah, this over here should be small y. Thank you. So now we have something called the law of total expectation, right? The law of total expectation tells us that the expectation of X can be written as the expectation of the expectation of X given Y. Now, this holds true for any X and any Y — Y could be completely independent of X, or it could be dependent on X. The expectation of X can always be decomposed into this nested form where you condition on Y, you get a new random variable, you take the expectation of that random variable, and you get back the expectation of X. This is called the law of total expectation. And while this holds true for any Y in general, you have the choice of choosing the Y, and you want to choose a Y that makes your problem easier to solve by breaking it down this way. The choice of which Y to use is more of an art: when you're asked to calculate the expectation of X and it's pretty complex, you want to use some creativity in choosing a Y to define this random variable, take the expectation of that, and by breaking the problem down into sub-problems you can solve some complex problems. So the expectation of X given Y, where Y is left unspecified, is a random variable — that's what shows up in the law of total expectation — and when you take the expectation of that, you get back the expectation of X itself. Yes, question? [inaudible] X given Y, that is a random variable or a function of big Y, right? This one? Yeah. You can think of it as a function of big Y — that's one way to think of it — or just think of it as a random variable, like some random variable Z. Here's a proof of the law of total expectation. I'm not going to go through it; you can go through it yourself, it's pretty straightforward — there's nothing complex going on. And there's one more version of Bayes' rule that's going to be helpful for you. So let's write the Bayes' rule that we've already seen: P of a given b is equal to P of b given a times P of a over P of b. Right? And Bayes' rule also allows you to have another variable that is conditioned on throughout. You can think of this as the conditional Bayes' rule: P of a given b comma c is equal to P of b given a comma c times P of a given c over P of b given c — you can have c conditioned on in all the terms, and this is also Bayes' rule, the expression that you see over here. All right? So this is the Bayes' rule where we are swapping a and b. Similarly, you could have just as well kept b conditioned on throughout and swapped a and c.
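Here is a small numeric sanity check of E[X] = E[E[X|Y]] on a made-up discrete joint distribution (the table values are arbitrary):

```python
import numpy as np

# Hypothetical joint P(X = x_i, Y = y_j); rows index x, columns index y.
joint = np.array([[0.10, 0.20],
                  [0.30, 0.40]])
x_vals = np.array([1.0, 5.0])

p_y = joint.sum(axis=0)              # marginal of Y
p_x_given_y = joint / p_y            # each column is P(X | Y = y_j)
e_x_given_y = x_vals @ p_x_given_y   # E[X | Y = y_j], one value per column

print(e_x_given_y @ p_y)             # E[E[X | Y]]
print(x_vals @ joint.sum(axis=1))    # E[X] directly — same number (3.8)
```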
And that's also Bayes' rule — you can think of it that way. All right, any more questions? Okay. So here's a proof for this. You can go over it, but you can just remember it this way and apply it; the proof is just to convince yourself, and we're not going to test you on these kinds of proofs. With that, we finish the review of probability. Any questions? No? I want to take a few moments to give you a slightly bigger-picture overview of the role statistics plays in machine learning. What we saw in the slides so far: we had parameters — for example, Mu and Sigma of the Gaussian — and we had observations; for example, in the multivariate Gaussian, you had x in R^n. Probability is the field where you're trying to make statements about the observations given your parameters. So given parameters — we assume the parameters are given, or fixed — you are trying to make statements about the data: what is the probability of this? What's the marginal probability? What's the conditional probability? What's the joint probability? Things like that. You can think of the observations as data. All right? And statistics is doing the opposite: you're given data, you start with data, and you're trying to make statements about your parameters. You make some assumptions in statistics about what the distribution is, and you take the data that's given to you — you're not making statements about what the probability of some other data is, or anything like that — and given the data, you try to make statements about the parameters. That's statistics. And generally, the statements that you make about the parameters in statistics are about things like: is this parameter value equal to 0, or is this parameter value much farther away from 0? You assign p-values to those observations; you define confidence intervals for what you believe the range of those parameters is. So in statistics you start from data and make statements about the parameters, but in probability you're going the other way. And these two are like yin and yang — they're coupled to each other. And there are many techniques to estimate parameters given your data — things like the method of moments or maximum likelihood estimation — and it is this latter approach that's relevant for us in this course and for most of machine learning. Using these techniques, you feed in x as the input, and the technique outputs some parameters for you. Now, where does machine learning come into the picture? With machine learning, you're given what's called training data. All right?
Your training data is going to be some collection of examples — you can think of them as (x, y) pairs for supervised learning — and you're given a collection of them, right? And given your training data, you want to learn the parameters of your model: you define some kind of model for this data and learn its parameters. So for machine learning, you feed in training data and you learn your model, and using that model, you want to make predictions about future data, right? In machine learning, the goal we are actually interested in is: given the training data, make predictions about future data. But incidentally, we take statistical approaches, where we build a model and use the model to make predictions on future data. And that's where statistics comes into the picture: given training data, we use tools from statistics, like maximum likelihood estimation and other things, to learn the model. Even though we're using the same tools as statistics, the goals are very different. In statistics, the goal is to make statements about the parameters themselves. In machine learning, we really don't care about the parameters at all; all we care about is how good this model is at making predictions about future data, and that's how we measure the quality of machine learning methods. Whereas in statistics, the game ends at the parameters: we measure how good the model is, or how good our method is, in terms of making proper statements about the parameters. All right? In machine learning, this is a black box for us: it does estimate parameters, but they are not of direct interest to us; we are only interested in the parameters because they help us make predictions about future data. And this part of the cycle is also called learning, or training, or fitting — fitting a model to data — and in classical statistics, you'd also call this statistical inference: you're inferring what the parameters are given the data. And once you have the model, the next part is called prediction: you're making predictions about future data, about unseen things, about data that you haven't encountered before. Which brings us to maximum likelihood estimation — I'm going to call it MLE. Now, let's start with a very simple example of MLE to give you the flavor — let's start with Gaussian data. Here we're going to talk about a simple situation where we are given data x in R^d, and you're given a collection of them; I'm going to write them as x^(1) through x^(n). In this notation, each x^(i) is the i-th example; it does not mean x raised to the power of i or x raised to the power of n. Whenever I write the superscript with parentheses, it means it's the i-th example, right? And there are n such examples. Each example is in R^d — each x^(i) is a d-dimensional vector — and let's assume we are given n such examples, right?
Now, we already saw the probability density of the Gaussian, which is p(x; Mu, Sigma) = 1 over ((2 pi)^(d/2) |Sigma|^(1/2)) times exp(-1/2 (x - Mu)^T Sigma^(-1) (x - Mu)). So this is the probability density. Okay. Now we are going to define something called the likelihood function. So let's define the likelihood function of the parameters, L(Mu, Sigma; X), given the data X — and for notation, let's call X the collection of your n examples. Over here, the semicolon means the things to its right are treated as parameters, as given values, and the function is being defined over the variables that are to the left of the semicolon, right? In order to define the likelihood function, we are going to make yet another assumption. The assumption here is that each of these examples is sampled independently and identically distributed — this is commonly called IID. This is going to be a very common assumption in all our machine learning methods: the training data you have is sampled in an IID fashion, which means x^(1) through x^(n) are all independent of each other; they were sampled independently. Right? Now, the density above was for one example; let's extend it to all the examples. The probability of x^(1) through x^(n) given Mu, Sigma — through independence, if x^(1) through x^(n) are independent of each other — can be written as the product over i = 1 through n of p(x^(i)). This symbol means product: you're multiplying the terms over here. Okay? Yes, question? Can you explain [inaudible]? Right — let me come back to this in a moment; I haven't finished this yet. Okay? So the joint probability of independent pieces of data is the product of the individual marginals. Right. Now, if you want to write the probability density of your full data set, p(x^(1), ..., x^(n); Mu, Sigma), you write it as the product over i = 1 through n of 1 over ((2 pi)^(d/2) |Sigma|^(1/2)) times exp(-1/2 (x^(i) - Mu)^T Sigma^(-1) (x^(i) - Mu)). Does that make sense? This was for the case of one example, but if you are given an entire training set, then because of the IID assumption we can break the density of the full training set down into the probabilities of the individual examples and multiply over them. Yes, question? [inaudible] [OVERLAPPING] X is a vector. D dimensions. It is a vector of d dimensions, yes. [inaudible] So in this case, x is also a vector. You have (x - Mu) transpose — this is still in a vector setting. This is for one example; the example is vector-valued, and each example is d-dimensional. Yes, question? What is the semicolon, sir? So the semicolon is a way of saying that the things that come to its right are treated as given. You don't think of them as variables; they are just constants. Okay.
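As a concrete sketch, here is how you might evaluate that IID joint density — in log space for numerical stability — for a hypothetical dataset; SciPy's logpdf is used so we don't reimplement the density:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d, n = 2, 500
mu = np.array([1.0, -0.5])
sigma = np.array([[1.0, 0.3],
                  [0.3, 0.8]])
X = rng.multivariate_normal(mu, sigma, size=n)   # n IID examples, each in R^d

# IID: log p(x^(1), ..., x^(n); mu, sigma) = sum_i log p(x^(i); mu, sigma)
log_joint = multivariate_normal(mu, sigma).logpdf(X).sum()
print(log_joint)
```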
Now, this is the probability density, and it is precisely this IID assumption that lets us break down a complex problem into smaller pieces, right? Representing this as one large Gaussian over the whole dataset would just make it a harder problem — we don't even know what the distribution might look like if the interactions are complex. So instead, when we make the IID assumption that each training example was generated independently, we break the problem down into smaller problems. Now, once we have this as the probability density, we can define the likelihood function. In the spirit of this: a probability density function takes your data x as the variable and assumes your parameters to be given. Going the other way, you take your data to be given, and your parameters are the thing you want to make a statement about. So over here, the likelihood function is going to be over Mu and Sigma, given x^(1) through x^(n). And, to no surprise, this is going to look exactly like the density. Yes, question? [inaudible] [OVERLAPPING] Hold that question for a moment. Yeah. Okay. What we see is that this expression — what we thought of as the density function in the case of the Gaussian — can also be repurposed as our likelihood function. The difference is this. When we think of it as a density function, the function is defined over the data space. But when we define it as the likelihood function, L(Mu, Sigma; x^(1), ..., x^(n)) = product from i = 1 to n of 1 over ((2 pi)^(d/2) |Sigma|^(1/2)) times exp(-1/2 (x^(i) - Mu)^T Sigma^(-1) (x^(i) - Mu)), it is basically the same expression, but we are interpreting it differently. When we see it as a likelihood function, we think of the parameters as the variables and the data as given; when we reinterpret it as the probability density function, we think of the data as the variables and the parameters as fixed. Yes, question? [inaudible] Right, so the question is about writing the probability of Mu, Sigma given x. A small correction there: we use the term probability for data; we never use the term probability for parameters. We use the term likelihood for parameters. The correct phrases to use are the probability of the data given parameters, and the likelihood of the parameters given data. [inaudible] That is the only difference mathematically, but semantically we are treating the data as the variable here and the parameters as the variables there; other than that, the expression is just the same. And another obvious way in which they are different is that if you integrate your probability density over its variables, you get 1, right? But if you integrate your likelihood over your parameter space, the integral may not even exist. Yes, question? [inaudible] Does the likelihood form one particular slice of the — that's a good question.
So let's go over that again. Assume this is your probability density function, right? Now, the shape of this curve, its position — everything about this curve — is fixed by Mu and Sigma, and the x-axis is data, or observations, right? Now, the likelihood for this is going to be a function of Mu and Sigma — so this one is the likelihood, and the other was the probability density. And the likelihood function is going to take some shape, and your training data is going to decide what that shape is, right? So you've got to think of it this way: this would probably also be some kind of bump, but the x-axis over here is data, while the x-axis over there is Mu and Sigma. So this is probability, and this is likelihood. So if you are given a different collection of training data, your likelihood function is going to take a different shape, be located somewhere else; it may look very different, and the area or volume under the likelihood function may not even integrate to 1. But for a probability density function, the parameters decide the shape, and the volume under the function will always integrate to 1. So the analogy of taking a slice doesn't quite hold — just think of them as two different functions. Yeah. [inaudible] Exactly. So the problem that we're going to attack with the likelihood function is: given the training data that we have, we want to choose the Mu and Sigma that maximize the likelihood the most, right? And that's basically the big picture of what we do in statistics. You are given data, that data implicitly defines some kind of likelihood function, and now we want to find the parameters that make the given data most probable. [inaudible] I'm intentionally not going to go into the details of whether a maximum will always exist. There are technical conditions that make a maximum exist, and things like that, but that's beyond our scope. For most of our problems, this is the intuition you want to have: the likelihood is a function defined over the parameter space, and the shape of the function is going to differ according to the data that we have. And the probability density is the other way around: it is defined over the observations, the data, and different parameters are going to give you different probability density functions. There was another question? Yes? [inaudible] Again, hold that thought — we're going to talk about how we extend this into machine learning. But this is classical statistics: what's likelihood, what's probability, right? Yes, question? [inaudible] Yes. So the question was: based on the training data that we're given, we can estimate a Mu and Sigma, and that Mu and Sigma are going to define some probability distribution.
And using this distribution is what we're going to use for making predictions on future data — that's the rough idea, right? So that's maximum likelihood. And the procedure that we're going to follow, in an abstract way: suppose we are given some data and we make an assumption that it belongs to some distribution, for example the Gaussian. If L(Theta; x) is the likelihood function, it's going to take the form of a product over i = 1 to n of L(Theta; x^(i)). I'm intentionally not expanding out what L is — it could be a Gaussian, or something like that. And the maximum likelihood procedure basically tells us that Theta hat — so, in general, the notation we're going to use is: we put a hat over things that are estimated; things that are given to us, like training data, we just use as variables, but things that we estimate as the output of some estimation process get a hat — so Theta hat, and I'm going to use the subscript MLE to say that we used the maximum likelihood procedure to estimate this, is the arg max over Theta of the product from i = 1 to n of the likelihood of Theta for each x^(i), right? So, for example, in the case of the Gaussian, you can think of that expression coming in over here. And further, I'm going to make this claim: this is always going to be the same as the arg max over Theta of the log of the product from i = 1 to n of L(Theta; x^(i)). What did I do here? We had the likelihood function defined over all your training examples, and I make the claim that instead of maximizing the product of these, I can maximize the log of the product of these, okay? And why is this true? Log is a monotonically increasing function, so the Theta that we obtain with one will always be the same as the Theta that we obtain with the other, okay? Yes, question? [inaudible] Okay, let's keep that question — I'm going to answer it later; just because I'm running short of time, I want to cover a few more things. But in general, we're going to assume some form for the probability density and just work with that, right? So we replaced the thing that we want to maximize with the log of the same thing. It's important to note that here we are calculating the arg max: if you were just calculating the max, then of course the two would be different, but since we are calculating the arg max, the Theta that maximizes this will also be the same Theta that maximizes the log of this. And this can now be written as the arg max over Theta of the sum from i = 1 to n of small l(Theta; x^(i)), where small l is the log of big L — the log-likelihood. The log of a product of terms is the sum of the logs of the individual terms, right? And we replace the function L with the log of itself. Now, what would happen if L were a Gaussian?
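To see the log trick in action, here is a toy grid search over Mu for one-dimensional Gaussian data with known variance; both the likelihood and the log-likelihood peak at the same Mu (the grid and data are made up for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=200)   # data with true mean 2.0

mu_grid = np.linspace(0, 4, 401)
# Log-likelihood: sum over examples (numerically stable); the raw
# likelihood (a product over examples) underflows easily, so rescale it.
loglik = np.array([norm(mu, 1.0).logpdf(x).sum() for mu in mu_grid])
lik = np.exp(loglik - loglik.max())

print(mu_grid[np.argmax(lik)], mu_grid[np.argmax(loglik)])  # same argmax
print(x.mean())                                             # close to both
```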
If we assume that our training data x is coming from a multivariate Gaussian, we would then plug that in. So far, it's a standard template: no matter what problem we have, this is the recipe that we're going to follow for maximum likelihood estimation. And to take this further, we now have to get into the functional form of what the likelihood function actually is for that problem, right? And in this case, for the Gaussian — so far we were talking about Theta in general, meaning the parameters of whatever distribution we're using in an abstract way, but now we talk about the parameters of the Gaussian — it's going to be the arg max over Mu, Sigma of the sum over i = 1 to n of the log of this whole thing: 1 over ((2 pi)^(d/2) |Sigma|^(1/2)) times exp(-1/2 (x^(i) - Mu)^T Sigma^(-1) (x^(i) - Mu)). This is going to look a little big, but it's going to simplify, I promise. So can we simplify this further? Yes, we can. There is an exponent and there's a log, so the exponent and log cancel out, and it's the log of a product of terms, which can be broken down into the sum of the logs of the individual terms, right? And here, writing the estimates as Mu hat, Sigma hat, this is the arg max over Mu, Sigma of the sum over i = 1 to n of three parts: the log of 1 over (2 pi)^(d/2), which is some constant with no variables in it, so I'm just going to write it as some constant k; minus half the log of the determinant of Sigma; and then the log and the exponent cancel, leaving minus half (x^(i) - Mu)^T Sigma^(-1) (x^(i) - Mu). And now this whole thing is some function of Mu and Sigma; we think of x as given. Oh, and this should be x^(i). Now, how do we do the arg max? Calculus, right? Take the derivative, set the derivative equal to zero, and solve for Mu and Sigma. And do we have time to do this today? Yeah, we have some time. And this is where the things we reviewed over the last couple of days come in handy: we saw how to take the derivative of quadratic forms, and we saw how to take the derivative of the log of the determinant of a matrix, and those are going to be used right away over here. [inaudible] What's the question again? [inaudible] So the summation applies to the whole thing — the summation is over all the terms in there. So I'm going to write it out here. We need to maximize this with respect to two variables, Mu and Sigma, jointly, so we're going to take partial derivatives. Let's solve them separately — this one for Mu and this one for Sigma, right? So, the derivative with respect to Mu of the sum over i = 1 to n of k minus half log |Sigma| minus half (x^(i) - Mu)^T Sigma^(-1) (x^(i) - Mu): when we take the derivative, the derivative of k with respect to Mu is 0, the derivative of the log determinant term with respect to Mu is 0, and what's left is the derivative of the quadratic form. And we saw that the derivative with respect to x of x^T A x is equal to 2Ax if A is symmetric — and Sigma is symmetric; it's a covariance matrix.
So covariance matrices are always symmetric. Right. And there's a half there, and the half is going to cancel with the 2, and this will give us minus Sigma^(-1)(x^(i) - Mu). Let me expand this out in case it caused any confusion. Expanding, with minus half common to everything: x^(i)T Sigma^(-1) x^(i) — this is just the distributive property of multiplication; you take every pair and write it out — minus x^(i)T Sigma^(-1) Mu, minus Mu^T Sigma^(-1) x^(i), plus Mu^T Sigma^(-1) Mu. Is it clear how we went from the previous step to this step? It's just the distributive property of multiplication. And now we still want to take the derivative of this with respect to Mu. This equals the sum over i = 1 to n of minus half times the following: the first term has no Mu in it, so its derivative is just 0; the second and third terms are the same thing, because each is a scalar and one is the transpose of the other, so together they give minus 2 x^(i)T Sigma^(-1) Mu; plus Mu^T Sigma^(-1) Mu. And now take the derivative: it's the sum over i = 1 to n of minus half times — the middle piece gives minus 2 Sigma^(-1) x^(i), and the quadratic form gives 2 Sigma^(-1) Mu; the 2s cancel with the half. Right. And then we set it equal to 0 and solve for Mu. This basically gives us that n times Sigma^(-1) Mu — because it's a summation over n terms — equals the sum over i = 1 to n of Sigma^(-1) x^(i), and you can take Sigma^(-1) out. Continuing, that gives Sigma^(-1) Mu equals Sigma^(-1) times 1 over n times the sum over i = 1 to n of x^(i), and multiplying by Sigma on both sides, we get Mu equals 1 over n times the sum over i = 1 to n of x^(i). Right? So Mu hat is just the average of the x's that are given to us. It looks pretty tedious; it's something you want to go back home and re-derive on your own, just to make sure you're comfortable with it. And this is probably the last time we're going to do such a detailed derivation on the board; going forward, I'm just going to defer to you to verify the derivations, but this is the essence. Now, what about Sigma? We want to estimate the parameter Sigma, but we see here that we have a Sigma inverse, which makes it a little complex, right? [BACKGROUND] Sorry, what's the question? Is Sigma inverse the same as 1 by Sigma? Well, Sigma inverse is not the same as 1 over Sigma, because Sigma is a matrix. What is 1 over a matrix? There is no such thing as 1 over a matrix; you have the matrix inverse, which is another matrix, right? [BACKGROUND] So, for doing this with respect to Sigma, we're going to use a change-of-variable trick, which means we consider a variable S equal to Sigma inverse, and make the observation that the gradient of the objective with respect to S is equal to 0 if and only if the gradient with respect to Sigma is equal to 0.
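You can check the result for Mu numerically: maximize the Gaussian log-likelihood over Mu (with Sigma fixed at the identity, for simplicity) and confirm the optimizer lands on the sample average. A minimal sketch using scipy.optimize, with made-up data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
X = rng.multivariate_normal([1.0, -2.0], np.eye(2), size=300)

# Negative log-likelihood as a function of mu, with Sigma fixed at identity.
neg_loglik = lambda mu: -multivariate_normal(mu, np.eye(2)).logpdf(X).sum()

result = minimize(neg_loglik, x0=np.zeros(2))
print(result.x)        # numerical MLE of mu
print(X.mean(axis=0))  # sample average — should match
```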
Does that make sense? Right? And then we're going to solve for S, and once we get S, we're going to invert it, and the inverse of that is going to be the Sigma that we want. Right? This is a pretty standard classical change-of-variable trick that you do with calculus. So now we're going to take the derivative with respect to S of the sum over i = 1 to n of: plus half log |S| — the minus turned into a plus, because log |Sigma| is log |S inverse|, and the log of 1 over something is the negative of the log of that something — minus half (x^(i) - Mu)^T S (x^(i) - Mu). And the derivative of a log determinant with respect to the matrix is? [BACKGROUND] S inverse — S inverse, not 1 by S, not 1 over S; it is S inverse. And we're summing over n such terms, so that's going to be n times S inverse, with the half outside, minus — what's the derivative with respect to A of x^T A x? We saw what the derivative of the quadratic form is with respect to x; what is it with respect to A? I'm going to leave it as an exercise, and you will have to verify that it is just x x^T. Right? [BACKGROUND] Yes, question? [BACKGROUND] So I took the half outside — is there another negative? There's probably another negative. [BACKGROUND] Oh, yeah. So that's where the negative comes in: the log of 1 over something becomes the negative of the log of that something. [BACKGROUND] Right? So that's where the negative got canceled out. [BACKGROUND] I'm sorry. [BACKGROUND] Right: minus the sum over i = 1 to n of (x^(i) - Mu)(x^(i) - Mu)^T. You set this equal to 0, the half cancels, and you get S inverse equals 1 over n times the sum over i = 1 to n of (x^(i) - Mu)(x^(i) - Mu)^T. And S inverse is equal to Sigma. Right? So Sigma hat is 1 over n times the sum of (x^(i) - Mu)(x^(i) - Mu)^T. And this is analogous to the scalar variance, the expectation of (x minus the expectation of x) squared — you can see the similarity, exactly. That's your covariance matrix. Yes, question? [BACKGROUND] There is a minus that got absorbed from the 1 over S in the log. [BACKGROUND] Yeah, right? So anyway, if you have questions about this, come up to the stage after the lecture. But this gives you a flavor of how to do maximum likelihood estimation, right? And you're expected to be comfortable doing this kind of matrix calculus to derive maximum likelihood estimates. We're going to be doing a whole lot of such matrix calculus throughout the course for different kinds of models, and this is just a simple model, a multivariate Gaussian, where it gets used. Now, what are we going to do for linear regression? I don't think we have time for linear regression, but maybe I'll just give you a flavor of what linear regression is about. Linear regression is a supervised learning problem, and a supervised learning problem is one in which we want to learn relations between x and y. So we are going to be given training sets in the form of (x^(i), y^(i)) pairs, where x^(i) is some d-dimensional vector, and we're going to be given n such pairs, right?
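And here is a quick numeric check that the closed-form MLE, Sigma hat = (1/n) times the sum of (x^(i) - Mu hat)(x^(i) - Mu hat)^T, matches NumPy's built-in biased covariance estimate, on made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0.0, 1.0],
                            [[2.0, 0.6], [0.6, 1.0]], size=1000)

mu_hat = X.mean(axis=0)                    # MLE of the mean
diff = X - mu_hat
sigma_hat = diff.T @ diff / len(X)         # (1/n) sum of outer products

print(sigma_hat)
print(np.cov(X, rowvar=False, bias=True))  # same thing (1/n normalization)
```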
So this notation means it's a pair, (x^(i), y^(i)), with i running from 1 to n — you have n such pairs. That's the terminology we use over here. And y^(i) is in R, and you want to learn a function, right? We're going to call that function h of x, and we want h of x to output y. This h is going to be parameterized by some Theta — over here, for the Gaussian, Theta was Mu and Sigma; in this case, Theta is going to be something else that we're going to see. And from the given training data set, we want to learn the function h_Theta of x. Which means: you are given a training set, and from this training set, you're going to construct h_Theta. This is called the learning process, the learning algorithm, right? The input was (x, y) pairs, and the output of the learning process is h_Theta itself. And into h_Theta we're going to feed new x's — let's call them x star, unseen x's — and it's supposed to output y's for those unseen x's. This is the macro setting of regression problems, or supervised learning problems. And in the next lecture, we're going to see a couple of algorithms for learning linear relationships between x and y, and that's going to be linear regression.
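As a preview of that macro setting, here is the learn-then-predict shape of a supervised learner in code, using a plain least-squares fit as a stand-in for the learning algorithm derived in the next lecture (the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))                 # n = 100 examples, d = 3
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# "Learning": estimate theta from the training set (least squares here).
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Prediction": h_theta applied to an unseen x_star.
x_star = np.array([0.2, -0.1, 1.0])
print(x_star @ theta)
```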
Okay. Welcome back, everyone, to Lecture 12 of CS229. The topics for today are the bias-variance trade-off, model selection and cross-validation, and regularization. The three are somewhat related to each other, and I would say the bias-variance trade-off is probably one of the most important topics you need to take away from this course. It's at the heart of machine learning — a concept that is unique to machine learning and distinguishes it from other fields. All right. So, a quick recap of what we covered over the last two classes. The last two classes covered neural networks and deep learning. The main takeaways were that neural networks are basically a composition of simple building blocks, where the building blocks are non-linear models that we've seen in the past: we take the output of one simple model and feed it as input to another simple model, and so on. And the crucial thing while composing them is having non-linearities. Non-linearities are crucial because if there were no non-linearities, the entire network could be represented as just a single linear layer with one matrix. Also, neural networks and deep learning are non-convex, and it's important that we initialize the parameters randomly. In the previous, simpler models we saw, the objectives were mostly convex and initialization did not matter, but for neural networks, for example, initializing all your parameters to zero will not work, because of the symmetry properties that we discussed. And then the approach for training neural networks was backpropagation. Backpropagation is just a fancy name for the chain rule of multivariate calculus. What we saw was: we calculate the gradients of the final loss with respect to every parameter at all layers, and in order to calculate those gradients, we use the multivariate chain rule. So the last layer was a linear model, very similar to GLMs. And beyond the last layer, as we compute the gradients of parameters from earlier layers, we encounter a chain of very simple Jacobians: either diagonal Jacobians where each diagonal entry is the derivative of the non-linearity, or the weight matrix itself will be the Jacobian. Right? And we construct this chain of Jacobians until we arrive at the layer whose parameters we're differentiating with respect to. And then, from the branch specific to that layer, it's pretty straightforward calculus — the notation was a little heavy, but there's nothing fancy going on there; it's just simple gradients that we need to take.
And toward the end of the last lecture, we saw this theorem — it's not going to be in your syllabus, and we're not going to test you on it — but it's a good segue into bias-variance, because the universal approximation theorem suggests that neural networks can be very expressive: they can represent essentially any smooth, differentiable function on a given input range. And that brings us to bias-variance, asking the question: is that a good thing or a bad thing? Having expressive models — is that good or is that bad? So that brings us to bias-variance. Any questions on neural networks or deep learning before we jump into bias-variance? Any remaining questions? Okay. So, bias-variance. In your homework 1, in the last question, we saw feature maps, right? So, with feature maps, suppose we had a dataset that looked like this — I'm plotting the same dataset again here and here; it's the same dataset. Now, if we fit a linear model where the feature map is just a polynomial of degree 1, so k equals 1, we get a straight-line fit through the data. Right? And if we try k equals 2, where the polynomial degree is 2, we get, say, a quadratic fit. Similarly, if we try, say, k equals 20, we would get a fit that might look something like this. Right? Now, we could have done something very similar with a classification problem. Suppose the x's here mark one class and the o's mark another — let's say there's one x here and one o here. Suppose we had a dataset like this — and this is the same dataset copied in three different places — where the x's mark one class and the o's mark the other class. Now, if we try to fit, say, a logistic regression with polynomial features, with k equals 1, you would again get a straight-line decision boundary that tries to separate the two classes. If we try logistic regression with k equals 2, you would get a quadratic boundary. And let's say we tried it with k equals, you know, maybe 20; we would probably get something that would potentially look like this. Right? So, loosely speaking, we can call the first models underfit, and the last models overfit. In machine learning, we are given a training set, but the error that we actually care about is not how well the model performs on the given dataset, but how well the model performs in general on data that it has not seen before. You can, loosely speaking, call it the test set, but in general you call it the generalization error. Generalization error is the error — the loss, the cost — that a model incurs when it is tested against, in effect, all the infinite data out there in the world coming from the distribution that produced our training set, right? In machine learning, what we care about is that our model has low generalization error. But the fundamental problem is that what we have access to is just a finite sample from the infinitely many possible examples of the data distribution. We have a finite sample from the data-generating distribution.
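A minimal sketch of those three regimes, using numpy.polyfit on synthetic quadratic data — the degrees 1, 2, and 20 mirror the k values on the slides (very high degrees may trigger a conditioning warning but still fit):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-1, 1, size=25))
y = x ** 2 + 0.05 * rng.normal(size=x.size)   # quadratic trend plus noise

grid = np.linspace(-1, 1, 200)
for k in [1, 2, 20]:                          # underfit, good fit, overfit
    coeffs = np.polyfit(x, y, deg=k)
    plt.plot(grid, np.polyval(coeffs, grid), label=f"k = {k}")
plt.scatter(x, y, color="black")
plt.legend()
plt.show()
```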
Using this finite sample, we need to build a model that has low error when it is tested against all the possible, infinitely many examples in our data distribution. You can think of underfitting and overfitting as corresponding to different components of the generalization error. Loosely speaking, the generalization error, meaning the error we incur when we take the model we obtained and test it on an example we have not seen before, can be broken down into two parts: a component called the bias and a component called the variance. Loosely, bias is the component of generalization error due to an expressivity handicap, and variance is the component of generalization error due to having only a finite training sample. The generalization error incurred by the underfit model is expected to be pretty high, and the generalization error incurred by the overfit model is also expected to be pretty high, but they are high for fundamentally different reasons. The k = 1 models are expected to have high generalization error because they are linear models: they are handicapped and cannot capture the quadratic relation between the x's and the y's. The k = 20 model, by contrast, is extremely expressive; it can twist and turn in wild ways. But it does not generalize well, because it simply does not have enough data to be fit well. It does extremely well on the given training set, and in doing overly well on the training set, it loses the big picture of generalizing to unseen data. That failure is due to not having enough training data: if we had far more training data, then even a highly expressive model would not have overfit like this. We saw some of this in your homework question as well. If you start with a dataset where the number of examples is very small, the higher-order polynomials can overfit it by producing highly irregular hypothesis functions, whereas if you have a larger dataset, even polynomials of high degree do a pretty good job of fitting the data. So, loosely speaking, bias is the generalization error of a model due to its being handicapped in expressivity, so that it cannot even fit the training set itself well; variance is due to the model being highly expressive but not having enough data to generalize well. These definitions are loose; they are not standard definitions, but they are good intuition for what bias and variance are. In fact, when we use the squared error as the loss function, we can make this precise. In that setting, assume our data is generated as follows.
Suppose y = f(x) + ε, where f is some true function, the expectation of ε is 0, and the variance of ε is σ². In this setting, the generalization error, or test error, is E[(y − f̂(x))²]. Let's analyze this for a moment. Let me add a subscript: f̂_n is the model we get by fitting on n training examples, and those n training examples were obtained from this same data-generating process, meaning y⁽ⁱ⁾ = f(x⁽ⁱ⁾) + ε⁽ⁱ⁾. The n training examples from which f̂_n was obtained had noise embedded in them, and therefore f̂_n is a random variable; it is a random function, because it is the output of a procedure whose inputs were random. f̂_n accepts an input x, and x itself is not random, but the way f̂_n was constructed used n noisy training examples, so f̂_n is random. And what is this expression calculating? Here, (x, y) is a pair from the test set, an example the model has not seen before. On an example the model has not seen before, what is the prediction of our learned model on the new input, and what is its error against the new output? That is the generalization error, and the expectation is over all the noise variables, both the noise in the training set and the noise in the test example. In the case of the squared error, with some pretty simple algebra you can show that this equals

E[(y − f̂_n(x))²] = σ² + (E[f(x) − f̂_n(x)])² + Var(f̂_n(x)).

Here f is the true function that we don't have access to, and f̂_n is the model we obtained by training on the finite training data, so the error on a new test example decomposes into three components. The first component is called the irreducible error, the second is the bias squared, and the third is the variance. So what is this telling us? The test error of any model can be broken down into these three components, and all three are non-negative: σ² is the variance of the noise, the second term is a square, and the third is the variance of something. And each of these three errors is a fundamentally different kind of error from the other two. The irreducible error tells us that no matter what kind of model we choose, and no matter how big a training set we start with, we can never do better than the irreducible error.
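Here is a rough Monte Carlo check of this decomposition, a sketch under assumed settings (a sine signal, degree-5 polynomial fits, and a single test point; none of these specifics come from the lecture).

```python
import numpy as np

# Sketch: numerically check E[(y - f_hat(x*))^2] = sigma^2 + bias^2 + variance
# at a fixed test point x*, with each trial drawing a fresh training set.
rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * x)                  # true (normally unknown) signal
sigma, n, deg, x_star = 0.3, 30, 5, 0.5
trials = 20000

preds = np.empty(trials)
for t in range(trials):                      # each trial = fresh training set
    x = rng.uniform(-1, 1, size=n)
    y = f(x) + rng.normal(scale=sigma, size=n)
    coeffs = np.polyfit(x, y, deg=deg)
    preds[t] = np.polyval(coeffs, x_star)    # f_hat_n(x*) for this trial

bias2 = (f(x_star) - preds.mean()) ** 2
var = preds.var()
# Direct estimate of the test error at x*: a fresh noisy y* each trial.
y_star = f(x_star) + rng.normal(scale=sigma, size=trials)
test_mse = np.mean((y_star - preds) ** 2)

print(f"sigma^2 + bias^2 + var = {sigma**2 + bias2 + var:.4f}")
print(f"direct test MSE        = {test_mse:.4f}")   # should roughly agree
```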
Why is that? Because, fundamentally, the data is noisy: for the same given value of x, different examples can have different y's because of the noise term. At test time, when we're given a value of x on which to make a prediction, the observed y could be any of multiple possible values, and there is just no way to get the right prediction all the time, because our learned function is deterministic. That noise is essentially what the irreducible error captures: the data is noisy, and for a given value of x there is no single right answer. It is irreducible no matter what kind of model you choose. Choose a neural network, choose linear regression, choose any fancy model you want; you can never do better than the irreducible error. Any questions on this? [inaudible]. So the question was: why is this irreducible, given that by increasing k we actually made the fit go through all the points? [inaudible]. And the follow-up question: isn't ε zero in that case? The answer is that here we are talking about test error, which means we are measuring how well the model performs on a test point that was not included in the training set. This is generalization error: we are measuring how well the model performs on unseen data. Doing well on the given training set is an easy thing; you don't need much theory to do well on just the training set you have. In fact, if you are given a training set and you want to do well on it, just memorize it. The whole point of machine learning is to start with a limited set of training data and still do well on generalization error.

So, when you are given a new example that you have never seen before, the expected error decomposes into an irreducible error component, which is due to the data being noisy, plus two more components. What are those two components? The first is (E[f(x) − f̂_n(x)])². What does this mean? There is a true underlying signal that we don't have access to, and the true underlying signal will, in general, differ from the prediction made by our model. If we take the expectation of that difference across all the possible training sets we might draw and across the noise in the test example, it tells us how systematically wrong we are: the expected difference between the true signal and the predicted value. Its square is the square of the bias. Finally, we have a third term, Var(f̂_n(x)). What is the meaning of the variance of f̂_n(x)? It means: suppose we were to repeat this experiment with a new set of n examples. I don't mean refitting on the same dataset; I mean repeating the experiment by collecting a new set of n examples from the same data-generating distribution.
If we did that, f̂_n(x), the prediction at the test point, would come out a different value. And if we keep repeating this process over and over, collecting a new set of n examples, fitting the model, and making a prediction at the test point x, all these different predictions are going to have some variance in them. That is the variance of the prediction made on a new test example when the training set is resampled each time. And notice that this variance is completely unrelated to what the right answer at the new test example actually is; it is just a measure of how sensitive we are to the noise in the training set, and that is what variance captures. Yes, question. [inaudible] So the question is about variance being due to the limitation of a finite sample. Imagine your population theoretically has an infinite number of examples. If you were to take the entire infinite training population and fit your model, and then repeat by taking the entire infinite set again and fitting your model, you would always get the same model. The variance we get from experiment to experiment, where an experiment includes collecting your n training examples, is due to collecting only a small, finite number of examples. If you were to collect more examples in each experiment, the variance would come down, and we'll go a little deeper into this shortly. Coming back to your question about the polynomial case: in homework question 5, what we saw was that with k = 20 we got a pretty wiggly function. Now, if we were to repeat the k = 20 case with a dataset of, say, 10,000 examples, what you would see is that it fits fairly well. Resample a new set of 10,000 examples, repeat, and the fit will be slightly different; repeat again, and you get a new hypothesis that is, again, only very slightly different. And if you were to repeat the experiment with, say, 10 million examples, the variance from experiment to experiment would come down even further. The more examples you have, even though each of them is noisy, the more the variance in the fitted model from experiment to experiment comes down. [inaudible]. We will come to that. So the question is whether the relation is linear or quadratic; we'll come to that shortly. All right. So this is bias-variance in the context of machine learning. However, I think understanding bias and variance in the more classical statistics setting is a little more intuitive than understanding it here.
So let's have a look at bias-variance in a more classical statistics setting, because so far we have focused on prediction: what the model does when presented with an x it has not seen before. So, bias-variance in a more classical setting. Let's assume there is a data-generating distribution: data, meaning (x, y) pairs, comes from some distribution parameterized by θ. What we do is collect n examples from this data-generating distribution, pairs (x⁽¹⁾, y⁽¹⁾) through (x⁽ⁿ⁾, y⁽ⁿ⁾), and run them through a statistical model. By a statistical model, I mean, say, the MLE estimator. And what we get out is a θ̂. This is what we did, for example, in linear regression: if we assume a linear relation between x and y, where θ captures the linear coefficients, then the estimator is just the normal equation. But in general it is some statistical model; if x is a set of features and y is 0 or 1, think of this as logistic regression. And once you fit the model, you get the estimated parameters. Now, the sample is random, because it consists of random draws from our data-generating distribution, and the model itself is deterministic. What happens when you feed a random variable into a deterministic function? The output will be random. So θ̂ is also random.

We can visualize the way it is random. Let's assume we have four different statistical models. First, a switch of view: over there we were looking at the data space, where the axes were x and y (or x₁ and x_d in the classification picture); over here we are in the parameter space, with axes θ₁ through θ_d. Now, the data we sample came from some true distribution; call its parameter θ*. θ* is some unknown constant that we don't have access to. Let's say θ* lives here, and I'll draw it at the same position in all four plots. And let's assume we have four different statistical models, four different estimators of our parameter; call them A, B, C, and D, one per plot. We run an experiment: sample n examples from this distribution, where n is some fixed number (here we are assuming sampling is cheap and we could, in principle, sample as many points as we like), and run the sample through each of the four models. For a given sample of n examples, each model produces some estimate θ̂.
So maybe this was the estimate from A, and this was the estimate from B, and maybe this from C, and this from D. Unfortunately I do not have colored pens here, so I'll mark the true parameter with a bigger block so you can tell it apart. Now suppose we repeat the experiment: get a new set of n examples and fit the four different models again. Because the new examples have noise in them, the estimates are going to be different again; say the new estimates land here, here, here, and here. And we keep repeating: sample a set of n, fit the four models, plot the estimated values. Each dot in these plots corresponds to one experiment; it is the output of one experiment, so the number of dots is not the number of examples. What we might see is plots like these. In two of them, the estimates are pretty tightly concentrated in one region, so the variance of the estimator is pretty small; in the other two, the variance is pretty large. So you would call the first two low-variance estimators and the other two high-variance estimators. Similarly, you can think of the dots obtained by repeating this process as samples from the distribution of θ̂, which is called the sampling distribution; each of these dots is a sample from the corresponding sampling distribution. In two of the plots, the sampling distribution is centered on the true value, meaning its center coincides with θ*, so those are called low-bias estimators; in the other two, the center of the sampling distribution has moved away from the true parameter, so those are called high-bias estimators. What we see is that bias and variance are fundamentally different things: having low variance or high variance says nothing about whether the model has high bias or low bias. The way to think about high bias is that the estimator has a fundamental, systematic error in estimating the parameters: no matter how many times you sample different datasets and fit, there is a systematic pull, say toward zero in this picture. And that has nothing to do with whether the variance is low or high.
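A small simulation makes the picture concrete. This is my own toy example (not from the lecture): a sample-mean estimator (unbiased, higher variance) and a shrunk version of it (biased toward zero, lower variance) stand in for two of the four models.

```python
import numpy as np

# Sketch: sampling distributions of two estimators of a Gaussian mean, theta* = 2.
#   A: sample mean          -> unbiased, higher variance
#   B: shrunk mean 0.5*mean -> biased toward 0, lower variance
rng = np.random.default_rng(0)
theta_star, n, experiments = 2.0, 20, 10000

est_A = np.empty(experiments)
est_B = np.empty(experiments)
for t in range(experiments):            # one dot per experiment
    x = rng.normal(loc=theta_star, scale=1.0, size=n)
    est_A[t] = x.mean()
    est_B[t] = 0.5 * x.mean()           # systematic pull toward 0

for name, est in [("A (sample mean)", est_A), ("B (shrunk mean)", est_B)]:
    print(f"{name}: bias={est.mean() - theta_star:+.3f}  variance={est.var():.4f}")
# Expect: A has bias ~ 0 and variance ~ 1/n = 0.05;
#         B has bias ~ -1.0 and variance ~ 0.0125.
```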
The low-variance models, by contrast, are less swayed by the noise in the training set: no matter what noise you get from different training sets, the estimated values stay tightly grouped in some region. One of them may have a systematic error and the other may not, but both are relatively insensitive to training-set noise. The other two models are sensitive to the noise in the training set: depending on the specific set of n examples, the variance in the estimated values is pretty large. Any questions on this? Yes. [inaudible]. Yes, I'll come to the trade-off, and why we call it a trade-off. What we notice next is this: instead of n examples, suppose we collect a much larger training set of N examples, say a thousand times the size, (x⁽¹⁾, y⁽¹⁾) through (x⁽ᴺ⁾, y⁽ᴺ⁾). If you increase your dataset by a large amount, what we observe is that in all four cases the variance will come down. So with the larger dataset, again plots A, B, C, D with the same true values: one estimator still has some bias left, but its variance has reduced a great deal; in all four cases the variance has come down; the two unbiased estimators remain unbiased; and the two biased ones remain biased, though as the sample size grows, we often see the bias shrinking as well. That is a pretty common phenomenon: as you increase n, your variance will almost always come down, and the bias also generally comes down.

So, in the classical statistical setting, the expectation of θ̂ is the center of the sampling distribution, and E[θ̂] − θ* is called the bias. If the mean of the sampling distribution is centered exactly at θ*, then the bias is 0. And Var(θ̂) is called the variance. If the bias tends to 0 as n tends to infinity, then the estimator is called a consistent estimator. What does that mean? As the number of examples goes to infinity, the variance will generally come down; if, in addition, the bias also goes to 0 as n tends to infinity, then it is a consistent estimator. For example, in the case of Gaussians, the maximum likelihood estimator for the variance is

σ̂² = (1/n) Σᵢ₌₁ⁿ (x⁽ⁱ⁾ − μ̂)²,

where μ̂ is the estimated mean. If you recall, this is what maximum likelihood gives for the variance parameter of a Gaussian. This is a biased estimator.
It will always systematically underestimate the variance. But it is also consistent, which means that as n tends to infinity, the bias vanishes. Question? Yes. Do you mean x hat or μ̂? It's x⁽ⁱ⁾ minus μ̂; and sorry, there is no i on the μ̂. So can you say why it is underestimating? I won't go into the details of why, but the general idea is this: if the estimator were σ̂² = (1/n) Σᵢ₌₁ⁿ (x⁽ⁱ⁾ − μ)², using the true value μ, it would be unbiased; using the estimated μ̂ makes it biased. And there is a correction you can apply to make it unbiased: use n − 1 in place of n. So the version with the true μ is unbiased, the version with μ̂ and 1/n is biased, and the version with μ̂ and 1/(n − 1) is again unbiased. MLE uses the biased one: if you perform maximum likelihood on a Gaussian, you get the 1/n version. But it so happens that, even though it is biased, the bias vanishes as n tends to infinity, so it is still a consistent estimator. If your bias vanishes as the number of examples goes to infinity, then it's consistent. Yes, question? You called the second one MLE; is there a name for the last one? It is generally just called the unbiased estimator: you apply the correction of putting n − 1 in place of n, and it becomes unbiased. So, again, a consistent estimator is one whose bias vanishes as the number of examples goes to infinity. You can also look at the limit, as n tends to infinity, of Var(θ̂): the rate at which this goes to zero is called the statistical efficiency. In other words, how many examples does your estimator need before you can be really confident of the estimated parameter? Yes, question? In the context of this bias-variance idea, what exactly is it that MLE does? The MLE is not always an unbiased estimator. What does MLE do? It performs maximum likelihood: write out the likelihood, take the gradient and set it equal to zero, or do gradient descent, and you end up with some parameter estimate. So bias means that the answer is consistently wrong, centered around the wrong estimate? Yes, that's correct. Then the 1/(n − 1) version would be centered around the true σ²; shouldn't that be the maximum likelihood estimate? So the question is: why isn't the corrected version the maximum likelihood estimate? Maximum likelihood is a recipe, a procedure you follow to construct an estimator. In the case of the Gaussian, the maximum likelihood estimator for μ is unbiased; but if you follow the same maximum likelihood recipe and construct an estimator for σ², it comes out biased. And there is no fundamental rule telling you when MLE is biased or unbiased; sometimes it is biased, sometimes it is unbiased. And how did you come up with that correction? I can give you more details about this if you post it on Piazza; it's not essential to the discussion right now, but I'm happy to give you links on why the corrected version is unbiased.
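A quick simulation (my own sketch, with an assumed σ² = 4 and n = 5) shows the systematic underestimate and the effect of the n − 1 correction.

```python
import numpy as np

# Sketch: simulate the bias of the MLE variance estimator (1/n) versus the
# corrected estimator (1/(n-1)) for a Gaussian with true sigma^2 = 4.
rng = np.random.default_rng(0)
sigma2, n, experiments = 4.0, 5, 100000

mle, corrected = np.empty(experiments), np.empty(experiments)
for t in range(experiments):
    x = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=n)
    ss = np.sum((x - x.mean()) ** 2)
    mle[t] = ss / n              # MLE: systematically low
    corrected[t] = ss / (n - 1)  # Bessel-corrected: unbiased

print(f"true sigma^2         = {sigma2}")
print(f"mean of MLE estimate = {mle.mean():.3f}  (expect (n-1)/n * 4 = {4*(n-1)/n:.1f})")
print(f"mean of corrected    = {corrected.mean():.3f}  (expect 4.0)")
```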
Anyway, the whole point of this was to give you an example of a consistent estimator: as n tends to infinity, the bias vanishes. And the rate at which the variance reduces as you increase the number of examples is called the statistical efficiency. That could be something like a rate of 1/n or 1/√n; in different scenarios you get different rates. So, in the context of classical statistics, this is bias and variance, because in statistics we are interested in estimating parameters: the goal is to estimate a parameter and call it a day. Whereas in machine learning, our goal is to make predictions on unseen examples. In machine learning, our goal is generalization error: we are interested in how low the error of our model is on unseen examples, which here means the squared error on unseen examples. The parameter-space picture was a good visualization for the statistical setting, where you have a true unknown parameter and the estimator gives you a sampling distribution either centered on it or away from it, with either low or high variance. Similarly, we can try to visualize what bias and variance mean in the case of the squared error for a prediction problem. I'm going to use the space at the top here; maybe I'll push the board up in case you want to refer to it in the meantime.

To understand this, assume we have a function: axes x and y, and a curve f(x). The training data we get are samples: pick some x, compute f(x), and add some noise; similarly, sample some other x, compute f(x), and add some noise. So our training set might look like a scatter around the curve, because that is our data-generating distribution. Pick some x⁽ⁱ⁾, evaluate f(x⁽ⁱ⁾), and add some noise; the noise has mean 0, so it could be positive or negative, and that is the observed y⁽ⁱ⁾. Now let me make the true curve a dotted line, because in general we do not have access to f; it is invisible to us. We don't know what f(x) truly is. We take this training set and construct an estimator, say linear regression with a polynomial set of features fit by maximum likelihood, and let's say the resulting model, which is specific to this training set, is some other curve. So the dotted line is the true f(x), and the solid line is f̂_n(x). Now, for a given test example, call it x*, consider the vertical slice at x*. The true distribution of y is centered around the dotted line: it is the distribution of y = f(x*) + ε. If this x* had been in our training set, the corresponding y would have been a sample from this distribution. Is that clear? This is the distribution of y's for this x*, the distribution from which the corresponding observed y-value would come.
And that distribution has variance σ², the variance of ε. Now let me draw this part a little bigger over here: the curve f(x), the distribution over y, and the slice at the x* we are interested in. If we take n examples from our training set and construct an estimator, the estimator from experiment 1 might be a curve that goes like this. Collect another set of n examples, fit our model on those, and plot it; that might look slightly different, and that is experiment 2. In the parameter-space picture, each experiment resulted in a dot; in this setting, each experiment results in a whole hypothesis curve. So as we keep repeating, we get different hypotheses. And what we are measuring is the expected difference between the point where the dotted line crosses this slice, which is f(x*), and the points where the different hypotheses cross it, each of which is a sample from f̂_n(x*). This looks a little messy because, unfortunately, I don't have different colors, so let me repeat what's happening here. The vertical line is the slice of interest, because we are asking what happens when we predict at x*. The dotted line passing through it is the true function, which we do not have access to; that is why it is dotted. The little distribution on the slice represents the y-values from which observations are made at this specific x*; for example, the y in a test example would be a sample from the corresponding y-given-x distribution. And because we are using different sets of training examples to fit our model, we get different hypotheses that pass near the true dotted line, and the points where the different hypotheses cross the vertical line are the different values of f̂_n(x*). Does that make sense? Question. Yes. You're saying we don't have access to the dotted line? Yes. Then do you use the average prediction you estimated? Because how else would you get it? We don't know it. So the question is: how do we calculate the dotted line? We don't. And that's a very good question. The expected test error can be decomposed like this, but this is a mathematical expression: we cannot actually compute these components. Mentally, you break the error down into these subcomponents, but it is very hard to compute the different components, because we don't know what f(x) is exactly. If we knew f(x), we could compute them; but then why even do machine learning, since we would already have access to f? Right, but in experiments where we measure equipment, or measure some effect under different circumstances, we often do know the truth and use it to measure the effect directly; I'm just saying this situation is somewhat unusual. Generally speaking, though, we don't have access to that.
We don't, yes; we don't know what f(x) is, and that's the fundamental assumption. Next question? Based on the same question: if you want to systematically know, after repeating many experiments, whether you are underfitting or overfitting, could you do, say, 1,000 different runs of fitting, take the average prediction, and then use that average as the true estimate? So the question is: can we repeat the experiment multiple times to tell whether we are systematically underestimating or overestimating? Experimentally, it is very hard to do, because we don't have access to the true y distribution. If we had access to it, if we could take an infinite number of samples from the true y given x, then we could make such a comparison; but in general it is hard to do. Anyway, moving on. The way to think of this test error is: say we end up with this hypothesis and our prediction is over here, and say the test example was also sampled from this y distribution, maybe this value here. The expectation of the square of that distance can be broken down into: the variance of the y-given-x distribution itself; plus the square of the systematic bias, that is, whether the distribution of f̂_n is going to sit systematically below the dotted line or above it; plus the variance of f̂_n(x*), which asks, in general, how spread apart the values of the different hypotheses are along that line. It is a little hard to visualize because I am using black ink everywhere, but maybe you can sit down, write it out in your own notebook, and understand it. One more note: the squared error is the only loss that allows such a clean decomposition. For other losses, for example the logistic regression loss, there is no well-accepted decomposition of this form, though there are a few papers that unify this idea for other kinds of losses as well, and I can share those links if anybody is interested. The idea here is that the squared error can be decomposed into these parts, and you can think of the decomposition visually like this. Unfortunately, this figure is a mess; but you can also see the similarities with the more classical statistical setting, where the visualization is much easier. What is happening over here, in the case of prediction, is actually very similar to what is happening over there, if you were to put them all on one single axis; maybe you can do that on your own as an exercise. So that is the bias-variance decomposition. The general takeaway is that underfitting roughly means you have high bias, and overfitting approximately means you have high variance, and these are approximate terms. You can have models that have both high bias and high variance, which means your model can simultaneously be underfitting and overfitting: it could mean that your model has fit to the noise of your data but hasn't fit to the signal at all.
So your model could theoretically be underfitting and overfitting at the same time. This is just a heuristic, a rule of thumb for talking about how a model is performing. Yes, question? You said a model can be overfitting and underfitting at the same time; don't we just call that overfitting? No. If it's underfitting and overfitting at the same time, we don't call it just overfitting. Is that the k = 20 case? In the k = 20 case it is overfitting and not underfitting. But if you have data that is pretty noisy, you could, theoretically, have a model that gives you one hypothesis on one sample and a very different hypothesis on a different sample, and yet still does not fit your data at all: it captures the noise but misses the signal. It sways a lot depending on the sample you get, but it still doesn't go through the data, so it is overfitting and underfitting at the same time. But again, these are just heuristics.

What we have discussed so far is mostly for you to build a mental model of what's going on under the covers. What you actually do in practice, the actions you take, is what we discuss next, and that's called cross-validation. Remember, our goal is to do well on generalization error. In cases where the generalization error is measured as the squared loss, we can decompose it into bias and variance, but in general what we care about is simply having low generalization error; the irreducible error is a lower bound on how low the generalization error can be, but we still want to do as well as we can, and we need a way of measuring how well we are doing. The idea behind how we do it is something very simple, and we call it cross-validation. When you are given a dataset, you don't use your complete dataset to fit your model. Instead, you split it into a training set, a dev set, and a test set; you generally split it into three parts: train; validation, or dev, the terms are synonymous; and test. Historically there have been rules of thumb for the split: 70%, 20%, 10%, or 60%, 20%, 20%, or 80%, 10%, 10%. There is no single right answer. You split the data into those three parts and fit your model on the training set only: when you do maximum likelihood, you do it only on the training set, which means only on some fraction of the data. Then you need to make decisions: do you increase your feature map size or reduce it? Should you use logistic regression or SVMs? Do you want to use neural networks? Broadly speaking, we are going to call all of these decisions hyperparameters. Do you want to increase the number of layers? Do you want to add regularization?
We're going to talk about regularization next. All of these decisions, we call hyperparameters. And the way you decide on your hyperparameters is not by measuring how well your model does on the training set; instead, you take the fitted model, make predictions on the validation or dev set, and see how well your model performs on those predictions. You should expect your training error to be over-optimistic, because the objective is to minimize the training error; there is a systematic bias for your training error to be lower than the generalization error. So, to get a better estimate of how well we are doing on our goal, which again is to do well on generalization error, we hold out a set, pretend it consists of the unseen examples you would encounter in production, and see how well the model does on those examples. And it is common practice to repeat this cycle a number of times. You take the training data you have, start with some model, say logistic regression, and see how it did on your validation set. Maybe you're happy; very likely you will not be happy the first time. Say you increase your feature map size: you started with k = 1 and you set it to k = 10. You fit your model again and measure the performance on the validation set with the new model and its larger feature map. If it has overfit, your validation error is going to go up, even though your training error came down. You realize that you overfit, walk back your decision of going with k = 10, and maybe try k = 2, or k = 5, and so on. The decision could be the size of your feature map; it could be neural networks versus logistic regression; it could be the number of layers in the neural network. No matter what your hyperparameter decision is, the process of cycling between the training set and the validation set is essential. This is one of those things that is so pervasive and so universal across all of machine learning that you should spend some time thinking about it, and it is always good practice to follow. The process of having a held-out cross-validation set against which you measure the quality of your hyperparameter tuning is so universal that it works for any kind of hyperparameter: it could be something as simple as the regularization parameter, or the choice among different algorithms; all of those choices come under the same umbrella of being evaluated with this cross-validation approach.

Now, why do we have the test set? We already had the holdout validation set; why keep a separate test set? The reason is that, just as the training set gave us an over-optimistic estimate of our model's performance, after repeatedly performing hyperparameter tuning to do better on the validation set, we have also, in a sense, overfit on our validation set.
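Before continuing, here is a minimal sketch of the hold-out workflow just described, again on toy polynomial data; the 70/20/10 split and the degree sweep are illustrative assumptions.

```python
import numpy as np

# Sketch: hold-out cross-validation for one hyperparameter (polynomial degree k).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.3, size=200)

idx = rng.permutation(200)                   # shuffle, then 70/20/10 split
tr, dev, te = idx[:140], idx[140:180], idx[180:]

def mse(coeffs, i):
    return np.mean((np.polyval(coeffs, x[i]) - y[i]) ** 2)

best_k, best_err, best_coeffs = None, np.inf, None
for k in range(1, 11):                       # fit on the training set only
    coeffs = np.polyfit(x[tr], y[tr], deg=k)
    err = mse(coeffs, dev)                   # select k on the dev set
    if err < best_err:
        best_k, best_err, best_coeffs = k, err, coeffs

# Touch the test set exactly once, at the very end.
print(f"chosen k={best_k}, dev MSE={best_err:.3f}, test MSE={mse(best_coeffs, te):.3f}")
```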
By repeating this cycle, we overfit the validation set more and more, even though we never explicitly minimize our training loss on it. One way to think about it: when you obtain a dataset for the first time, before you evaluate or fit anything on it, think of it as fresh. Every time you fit a model and look at its outcome, or use the data as a validation set and look at the outcome, mentally think of that dataset as rotting. Every time you look at it, it rots a little bit; and the more times you run your models against that dataset and look at the outcomes, the more it rots. As a way to counter that rot effect, you have a test set that you keep away until the very end. You may go through this cycle many times, but all the while you need to keep your test set aside and not even look at it. In the end, once you are satisfied with your performance, and only in order to get a realistic expectation of your generalization error, you measure the model's performance on the test set, at the very end. That gives you a rough approximation of how well it is going to do in terms of generalization error. So: the purpose of the training set is to do well on the training set, that is, to minimize the loss; the purpose of the cross-validation set is to help you do well on generalization error; and the purpose of the test set, which, strictly speaking, you evaluate only once, at the very end, and never again, is to give you an estimate of the generalization error. In practice, because going through the cycle rots your validation set, and if you repeat the cycle too many times your validation set is no longer a good proxy for generalization performance, it is sometimes quite common to have multiple levels of validation sets. You can have several: once you have repeated the cycle with one validation set and feel you have overfit your hyperparameters to it, it is quite common to discard that validation set and move on to the next one. That's common too. And you want to keep your test set all the way to the very end; its purpose is to give you an estimate of what your generalization error will be at the end of the cycle. Yes, question. [inaudible] So the question is: how do we know we have overfit our validation set? There is no clear answer. One check is to have yet another validation set: you keep making your decisions on one validation set, but you measure the gap between the model's performance on the validation set you use for decisions and its performance on a second validation set used only for the purpose of detecting whether you have overfit the first. [BACKGROUND] Yes, and in general, when does the cycle end?
There is no well-defined answer for that; you use your judgment to say when you have done your best. Yes, question. [inaudible] That's a very good question: you go through all of this, finally measure on your test set, and the performance is really bad; what do you do? Strictly speaking, you should use your test set only for the purpose of getting an estimate of how well you are going to do. But it's a good question, because in real life, if you are trying to take something into production, you still want to do something about it. For those purposes, it is always good to have yet another test set that you haven't looked at; otherwise, you can always go through the cycle again and check your test set performance again, but in a way you have now started to rot your test set too. That is a fundamental paradox for which there is no right answer. Next question. [BACKGROUND] So the question is: can we fit the model on the 80%, or some fraction, go through the cycle, measure on the test set, and then, happy with everything, stick with the same hyperparameters and refit the model on the full training set? You can do that. If your dataset is reasonably large, that might work well. But if your model is pretty sensitive to small variations in your training data, you may want to hold onto the model you have, which happens to be working reasonably well, rather than risk refitting on the full dataset, especially if your model is sensitive to noise in the data. You can certainly do it in most cases, though; it depends on how big your dataset is. If your dataset is reasonably large, say a million examples, and you kept away 1,000 examples as your test set, then there is little value in adding those 1,000 back to refit when you already have 999,000. But if you are in a smaller regime, say 100 examples with 20 kept away, then it might be worth doing. Good question.

All right. This kind of cross-validation, where you take a fixed validation set and test set, keep them away, and fit your model only on the training set, is called hold-out cross-validation. There is another cross-validation technique called k-fold cross-validation, and this is more common when you have small datasets. The idea behind k-fold cross-validation is to take your training set, (x⁽¹⁾, y⁽¹⁾) through (x⁽ⁿ⁾, y⁽ⁿ⁾), and split it into k folds; in this picture, k = 4, folds 1 through 4. Once you have split it into k folds, you fit k different models, where for each model you take one of the folds as the validation set and aggregate the remaining folds as your training set. So for model 1, fold 1 is the validation set and folds 2 through k are the training set.
You can also keep a separate test set that you don't touch at all, but it is quite common to use the full training set to perform k-fold cross-validation. For model 2, fold 2 is the validation set, and folds 1 and 3 through k are the training set. So, one model at a time: pick a fold, make it the validation set, and use the remaining folds as the training set. Then, using the corresponding model, make predictions on the corresponding validation fold. From model 1, trained on the other folds, you get predictions ŷ for fold 1; from model 2, trained on its training folds, you get predictions ŷ for fold 2; and similarly, each model makes predictions on its corresponding fold. In the end you have your full set of predictions ŷ alongside your full set of true labels y, and using these two you can compute whatever evaluation metric you care about, such as accuracy, squared error, or area under the ROC curve, and see how well your model is doing. Yes, question. [BACKGROUND] The training set, yes. For model 2, you make predictions on fold 2, but model 2's training set is fold 1, fold 3, and so on: everything except the corresponding fold is part of the training set. This is called k-fold cross-validation, and it is more commonly done when your model is not very computationally expensive and your data is not very big. In fact, you can even take this to the limit where k equals the number of examples: you have one model per example, for which that example becomes the single validation example and all the rest of the examples are part of the training set for that model. That is called leave-one-out cross-validation; it is k-fold cross-validation with k = n. Yes, question. In this case there is no distinction between the validation set and the test set, right? So the question is whether there is a difference between the validation set and the test set here. You can still have a separate test set that is not part of this k-fold process. But most of the time, when you do k-fold or leave-one-out cross-validation, you are doing it because your dataset is really small; let's say you have just 20 examples. Say you have a dataset for some very rare disease, where the number of patients is just a very small fraction, and you want to build a model on it. In those cases, a fixed hold-out split would be a bad idea: you have only 20 examples, and you don't want to lose, say, 4 of them just for the purpose of cross-validation. In those cases, you would do leave-one-out or k-fold cross-validation because you want to use all the data that you have to build the model in some way, and not waste precious examples; and you generally would not have a separate test set.
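A compact sketch of k-fold cross-validation in plain NumPy (toy data; k = 4 and the degree sweep are my assumptions, and leave-one-out is simply the k = n special case):

```python
import numpy as np

# Sketch: k-fold cross-validation, reusing the polynomial-degree hyperparameter.
rng = np.random.default_rng(0)
n, k = 40, 4
x = rng.uniform(-1, 1, size=n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.3, size=n)

folds = np.array_split(rng.permutation(n), k)    # k disjoint index folds

for deg in [1, 2, 5]:
    preds = np.empty(n)
    for i in range(k):
        val = folds[i]                           # fold i is the validation set
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[tr], y[tr], deg=deg)
        preds[val] = np.polyval(coeffs, x[val])  # model i predicts its own fold
    print(f"deg={deg}  k-fold MSE = {np.mean((preds - y) ** 2):.3f}")
# Leave-one-out is the special case k = n: each fold is a single example.
```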
With the k models in hand, you can either use the ensemble of all k models and average their predictions at test time, or use this procedure only for the purpose of deciding the hyperparameters, and then take those hyperparameters and refit on the entire dataset. Both of those are commonly done. So what is the validation error versus the test error in that case? In that case, you can think of your validation error and your test error as being the same; admittedly, the train-tune-test cycle does not map cleanly onto this procedure. Any questions on this before we move on to regularization? All right. Yes, question. [inaudible] So the question is: is there a good reason not to sample from your training set itself? There is a process where you take your dataset, sample from it, fit a model, and repeat, and that is called the bootstrap. However, the bootstrap is generally used to obtain what we saw previously as the sampling distribution, that is, to get a distribution over your parameters. It is not commonly used in machine learning, where the goal is prediction on unseen examples; it is more a technique for getting uncertainty estimates or confidence intervals on your model parameters, and it is more commonly used in classical statistics settings where you want confidence intervals on parameters. It does not help us a lot in terms of getting generalization error estimates.

All right: regularization. First, a quick motivation for regularization; and in this lecture we will cover regularization from a Bayesian perspective. If you remember the case of k = 20 in linear regression, with x and y and k the degree of the polynomial, you get a hypothesis function that is very wiggly. If you look at the coefficients of the resulting θ vector that corresponds to this very wiggly hypothesis, you will notice that, in order to get those very sharp wiggles, the magnitudes of the θᵢ must be large. If all your θᵢ values are small, then the resulting hypothesis, even for k = 20, will be something much smoother. It is these large values of θ that make for the highly wiggly turns in the hypothesis function. (A quick numerical check of this appears below.) And the general intuition is that having something so wiggly is bad: it is very unlikely that x's that are so nearby would have y-values that differ so much; it is more likely that the true hypothesis is much smoother, where nearby values of x have nearby values of y. Using this intuition, we want to come up with ways in which our estimator will output small values of θ. We still want to have the flexibility of k = 20, the ability to fit the data when necessary, but at the same time we also want to keep the magnitude of each θᵢ as small as possible, because that encourages smoother hypotheses. The technique of encouraging small values of θ is what you should think of as regularization.
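Here is a quick illustration of that claim on toy data (the dataset and degrees are made up): the maximum coefficient magnitude of an unregularized polynomial fit tends to grow rapidly with the degree.

```python
import numpy as np

# Sketch: high-degree unregularized fits tend to have large coefficient
# magnitudes, which is exactly what regularization will penalize.
# (NumPy may warn the degree-20 fit is poorly conditioned; that is the point.)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=30)
y = np.sin(2 * x) + rng.normal(scale=0.2, size=30)

for k in [2, 5, 20]:
    coeffs = np.polyfit(x, y, deg=k)
    print(f"k={k:2d}  max |theta_i| = {np.max(np.abs(coeffs)):.1f}")
# Typically the max coefficient magnitude grows by orders of magnitude with k:
# the wiggly fits are driven by huge, delicately cancelling thetas.
```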
How can we keep the noise in the data from swaying our fitted hypothesis to this extent, yet still be flexible to some degree, while encouraging smoothness in the resulting hypothesis? That's the goal of regularization, and the goal is almost stated in the statement itself: we want small values of θ. Yes, question? "Are you saying we reduce the norm of θ, or the number of dimensions of θ?" In regularization, we reduce the values taken on by each component; we want to make the values of θ small. [inaudible] "Couldn't that be solved by making the dimension of θ itself smaller?" So the question is: couldn't we solve this by reducing the number of θ's we have, rather than reducing the magnitude of θ? That's a valid approach too, and we'll see that one of the regularization techniques achieves exactly that goal. The intuition there is that if θ_i = 0 for some i, you have implicitly reduced the number of θ's; we'll cover all of this shortly. So in general, the intuition is that large values of θ_i are bad, and we want the θ_i to be small. One way to do that is to penalize large values of θ. In the case of linear regression, our cost function was J(θ) = Σ_{i=1}^n (y^(i) − h_θ(x^(i)))², where h_θ(x) = θᵀx; this is standard linear regression. We take this standard loss and add on an extra term that penalizes the norm of the θ vector: J(θ) = Σ_{i=1}^n (y^(i) − h_θ(x^(i)))² + λ‖θ‖². The idea is that this extra term in the loss function penalizes the model for selecting θ_i's that are very large and would therefore give us very wiggly hypotheses. So what is the role of λ here? λ is a coefficient, a weighting factor, for how much we care about the original objective versus how much we care about having small θ's. If λ is very big, the model will focus mostly on making the values of θ small and might barely fit the data at all. If λ is 0, the model will just fit the data under the original objective, without any regularization. So λ acts as the relative weighting factor between the data-fitting objective and the regularization objective. And how do we go about tuning the value of λ? Cross-validation, exactly. Use cross-validation to figure out which λ value works well on your validation set, and then refit the model. So, loosely speaking, this is regularization. A few things to observe: the norm we used here was the 2-norm. You can also use the 1-norm, which gives the penalty λ Σ_{i=1}^d |θ_i|, just the sum of the absolute values.
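Here is a small sketch of the effect of λ on the coefficient magnitudes, using scikit-learn's Ridge (which minimizes exactly the 2-norm-penalized objective above). The toy sine data and the λ values are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

# Toy 1-D data (assumption: any small noisy dataset shows the effect).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 15))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=15)

# Degree-20 polynomial features, like the K = 20 example in lecture.
X = PolynomialFeatures(degree=20, include_bias=False).fit_transform(x[:, None])

for lam in [1e-8, 1e-3, 1.0]:  # lam ~ 0 approximates the unregularized fit
    # Ridge minimizes ||y - X theta||^2 + lam * ||theta||^2.
    model = Ridge(alpha=lam).fit(X, y)
    print(f"lambda={lam:g}  max|theta_i|={np.abs(model.coef_).max():.2f}")
# Larger lambda -> smaller coefficients -> smoother fitted curve.
```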
Yes, question? "Why not learn λ using MLE?" So the question is: why not learn λ using MLE? By construction, MLE will give you λ = 0. Our goal is to minimize this loss, and the penalty term is always non-negative, so the way to minimize the objective over λ is simply to set λ to 0. MLE will give you λ = 0. Yes, question? "It's still not clear to me why minimizing the norm of θ is the right thing to do. Am I correct to say that we want to minimize the norm of θ because we want to not overfit, that is, reducing θ reduces overfitting?" Yeah. So the question is: why does reducing the norm of θ help fight overfitting? The norm of θ, remember, satisfies ‖θ‖² = Σ_{i=1}^d θ_i². So if you want all your values of θ to be small, you penalize the sum of the squared values. That's the general idea: if all your θ's are reasonably small, you get a smoother function, so penalize the large values. Okay?

Now, this seems a little arbitrary, even a little hacky. We used our intuition that large values of θ cause overfitting, so we penalized them directly. However, regularization has a very nice Bayesian interpretation. If you remember, in the Bayesian setting we had θ come from some prior distribution p(θ). And your data, which I'm going to call S for both the x's and the y's, gives us p(S | θ), which is called the likelihood. From these we construct the posterior: p(θ | S) = p(S | θ) p(θ) / p(S). In the Bayesian setting, all we did was construct this posterior distribution. It gives us a full distribution over θ, and at test time we used it to construct the posterior predictive distribution: p(y* | x*, S) = E_{θ ~ p(θ|S)} [ p(y* | x*, θ) ], an expectation, in other words an integral, over the posterior. However, only in simple models is this integral tractable. In most cases it's very hard to perform this integration to construct the posterior predictive distribution; indeed, it's often hard to construct even the posterior distribution over the parameters itself. In simple models it's possible, and we saw that it's possible in Bayesian linear regression, but in a lot of other models constructing this posterior is extremely hard. So what's done in practice, and this is something you have on PS2 as well, is something called MAP, or maximum a posteriori, parameter estimation. What we do instead is take our posterior distribution, which is a distribution over your parameters.
It is not a single point estimate; it's a full distribution over all the infinitely many possible parameter settings. Out of this, we extract just one parameter setting, called the MAP estimate: θ̂_MAP = argmax_θ p(θ | S). What is this argmax? If you have the posterior density p(θ | S) as a curve over θ, we want the point that maximizes it. So the MAP estimate is the mode of the posterior distribution: the parameter value with the highest density under the posterior. Once we obtain it, we switch back to the frequentist setting and make predictions using h_{θ̂_MAP} on unseen examples x*. This is called MAP estimation. What we're going to see now is that regularization is essentially doing MAP estimation with particular choices of the prior distribution. Let's see exactly what we mean by that. [inaudible] Right: what we were doing before was not taking a point from this distribution; we were holding onto the full distribution to construct the posterior predictive distribution. Now we're not holding onto the full distribution. We just pick one value of θ and switch back to the frequentist setting, using that as the estimated value. Yes, question? "Is that similar to MLE?" It's very similar to MLE, and that's what we're going to see, and what you're going to show in your homework as well. I'm actually probably doing some of your homework here for you, so pay attention. We want θ̂ = argmax_θ p(θ | S) = argmax_θ p(S | θ) p(θ) / p(S). But p(S) has no θ term in it, so this is just argmax_θ p(S | θ) p(θ). And you can apply the log, because log is monotonically increasing, to get θ̂_MAP = argmax_θ [ log p(S | θ) + log p(θ) ]. That's the MAP estimate. What we're seeing here is that we're trying to find the value of θ that balances two competing terms. The first term, log p(S | θ), is your original likelihood term. And if the prior p(θ) is Gaussian with mean zero and covariance σ²I, then the second term works out to a constant minus something proportional to ‖θ‖². Remember the Gaussian density: it's proportional to exp(−‖θ‖²/(2σ²)) when the mean is zero, and taking the log cancels everything except the squared term. So having a Gaussian prior on your parameters and performing MAP estimation is effectively the same as adding a squared, L2, regularization term to your MLE objective. In the case of regression, the likelihood term is just the squared loss.
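As a numeric sanity check of this equivalence, here is a sketch with an assumed Gaussian noise variance σ² and prior variance τ², so that the induced penalty weight is λ = σ²/τ²; the data is made up.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.5 * rng.normal(size=50)

sigma2, tau2 = 0.25, 1.0            # assumed noise and prior variances
lam = sigma2 / tau2                 # the lambda this prior induces

# L2-regularized least squares (ridge): argmin ||y - X theta||^2 + lam ||theta||^2
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# MAP with Gaussian likelihood and N(0, tau2 I) prior: argmax of
#   -||y - X theta||^2 / (2 sigma2) - ||theta||^2 / (2 tau2),
# whose stationarity condition is the same linear system:
theta_map = np.linalg.solve(X.T @ X / sigma2 + np.eye(3) / tau2, X.T @ y / sigma2)

theta_mle = np.linalg.solve(X.T @ X, X.T @ y)   # unregularized fit
print(np.allclose(theta_ridge, theta_map))      # True: MAP == ridge
print(np.linalg.norm(theta_map) < np.linalg.norm(theta_mle))  # prior shrinks theta
```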
And you can do a quick visualization of this. If this is the θ axis and this is 0, your prior is some distribution peaked at 0: this is the prior. And over here, say, is the likelihood, and its peak is the MLE estimate, since MLE maximizes the likelihood; assume it's peaked over here. What we're trying to maximize instead is the product of these two: the prior times the likelihood. How does the product of two functions look? If either one of them is very close to 0, the product is close to 0. So over here, where the prior is close to 0, the product is near 0; over here, where the likelihood is near 0, the product is near 0. Only in the region where both of them are non-negligible do we see the maximum, and MAP estimation gives us that point: θ̂_MAP. So what we see is that there are these conflicting forces: the prior trying to pull the parameter estimate toward 0, and the likelihood trying to pull it toward what the data tells us the estimate should be. MAP finds the right balance between the prior and the likelihood, and it will therefore shrink your MLE toward 0. Yes, question? "When we do Bayesian statistics and decide what our priors look like, do we just choose the kind of distribution [inaudible], or do we also choose the mean of that distribution?" In Bayesian statistics, you choose a prior for your parameters: you choose both the family and its parameters. [inaudible] Your initial belief is captured in the parameters of the prior over θ. If you have a strong belief that your parameters are close to 0, you assign a very small variance, which means your prior is sharply peaked. And if your prior is peaked at 0, this pull is even stronger, and your MAP estimate will be even closer to 0. Sometimes it's common to use what's called a flat prior, which means your prior is flat, so the likelihood multiplied by the prior keeps the same shape as the likelihood. You use a flat prior when you don't want the prior to do anything at all; in that case, though, you don't get the regularization interpretation. For the regularization interpretation, your prior needs to be Gaussian, and λ will involve σ² and a few other terms, which you'll work out in your homework. But the general idea is this: the prior acts as a regularization term because of the squared term that comes out of the Gaussian log-density. "So how confident we are [inaudible]?" Yeah: how confident we are corresponds to the variance we assign to the prior. All right. If there are any other questions, feel free to walk up here and I can answer them. That's all for today. Thanks.
Welcome, everyone. Today is Lecture 21 of CS229, and the topic is evaluation metrics, along with some general advice on applying machine learning in practice, in deployment or in production. We're going to look at a few evaluation metrics, and once we're through with the presentation, we'll quickly go through some ways these evaluation metrics can actually be useful in production. Let's get started. Here's a quick overview of what we'll cover in this presentation: first, why evaluation metrics; then binary classifiers, which is the problem setting we'll spend the most time on; then different kinds of metrics, some general tips on how to choose evaluation metrics, and what to do in certain scenarios, for instance when there's class imbalance and certain metrics break down.

First of all, why are evaluation metrics important? In all the algorithms we've seen in this class, we defined some kind of loss function and minimized it with respect to the parameters, to minimize the loss on the training data. However, it's not always clear whether these loss functions are meaningful in terms of real-world use. For example, the loss function may or may not capture a business goal you have: if your business goal is to improve revenue or profit, your loss function may or may not reflect that. That's why evaluation metrics, this secondary measure of your model's performance, become extremely important. Evaluation metrics also help organize your team's effort toward a business target. It's common practice to define a dev set, and the team working on improving the model strives to improve the evaluation metric on that dev set. You want to start thinking about the evaluation metric as something distinct from the loss itself. It's also useful for quantifying the gap between the desired performance and a baseline. It's easy for, say, a product manager to specify what the evaluation metric should evaluate to on your dev set, but it's hard for anyone to say what the loss value should be on a dev set. So evaluation metrics are useful for measuring how well your model is doing relative to the desired performance level, and you can use them to see the gap between the desired performance and the baseline. The baseline is generally your first attempt, some simple model that gives you a sense of the difficulty of the overall project you're starting. If the gap between the desired performance and the baseline is large, it's probably a challenging task.
Similarly, you can measure the gap between the desired performance and the current performance, which gives you a sense of how much progress is left to be made. It's also useful for keeping track of how performance improves over time: when we started we were at 60% accuracy, three months in we're at 75%, and our target is 85%; that gives you a sense of how much progress you've made toward the end goal. It's also useful for lower-level tasks such as debugging: you want to do bias-variance analysis on your model to improve its performance, and evaluation metrics are useful there as well. Ideally, your training objective, your loss function, should reflect your business goal, but that's not always possible. For example, if you care about accuracy, it's very hard to train a model using accuracy as the loss function: accuracy is not even differentiable, it becomes a very hard combinatorial problem, and you can't get gradients out of it, so using accuracy as your loss function generally just does not work. So ideally we would want the metric itself to be the loss function, but that's not always feasible, which makes it necessary to measure these evaluation metrics in parallel as we develop the model, beyond just the loss function.

For most of this presentation, we'll stick to binary classification in a supervised setting. Think of x as an input; x could be an image, it could be an email. y is a binary output: if x is an image, whether there's a pedestrian in that picture, say in the context of a self-driving car; or whether a given email is spam or not. So y is binary, and the model output is ŷ. This ŷ can be of different types. For certain kinds of algorithms (we didn't cover these in the course itself, but algorithms like k-nearest neighbors or decision trees), the output of the algorithm is directly the class prediction. And there are other kinds of algorithms whose output is some kind of real-valued score, like support vector machines or logistic regression. In logistic regression, the output is the probability that y is 1: a real value, not a class. With support vector machines, the output is the margin: how far from the separating hyperplane the example is, and on which side. Those are real values. With these score-based algorithms, where the model outputs a real value, it becomes necessary to choose a threshold; once you pick that threshold, you can convert your model into a classifier. These score-based models are the ones we'll be looking at in this presentation.
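A quick sketch of that score-to-classifier step; the scores below are made-up predicted probabilities, for illustration only.

```python
import numpy as np

# A score-based model only becomes a classifier once we fix a threshold.
scores = np.array([0.92, 0.75, 0.58, 0.40, 0.33, 0.10])

threshold = 0.5
y_hat = (scores >= threshold).astype(int)
print(y_hat)  # [1 1 1 0 0 0]; different thresholds give different classifiers
```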
So here's the mental picture you want to have. The line here from 0 to 1 represents the range of probabilities the model can output: 0 indicates a predicted probability of 0, and 1 a predicted probability of 1. The green dots represent positive examples, and the position of each dot is the probability assigned by the model to that example. The gray dots are the negative examples. By placing them on a line like this, we're defining a rank order on the predictions. It's always good to first have a look at this kind of rank ordering of the examples in your dev set before you start debugging your model; it gives you a good big-picture sense of what your model is doing. Some more useful terminology: prevalence is a term that refers to the fraction of examples that are positive. If you're in a situation where you have an equal number of positive and negative examples in your data, the prevalence is 50%. If you have, say, 10 positive examples and 90 negative examples, your prevalence is 10%: 10 over (10 plus 90). "Is this prevalence [inaudible]?" Good question: is prevalence a property of the data, or a property of the model? Prevalence is entirely a property of the data that you have. And a good point here: when we count positives and negatives, we count the true ground-truth labels, not the predicted probabilities. Any questions on this? It's this prevalence that lets us decide whether we have something like a class imbalance problem. Class imbalance is when one of the classes is far more or far less frequent relative to the other. If, out of 100 examples in your dataset, 2 are positive and 98 are negative, you'd generally call that a class imbalance problem. There's no well-agreed-upon threshold for deciding whether you have class imbalance, but as a rule of thumb, if the prevalence of one of the classes is less than 5% or 10%, or more than 90% or 95%, it's reasonable to think of the problem as having low (or high) prevalence. And sometimes, for example in detecting credit card fraud, the volume of non-fraud transactions is so high that the prevalence is much smaller than 1%. Okay. So we started with the score-based view; let me go back to the slide to compare again. We started with this line where we've placed all the examples, ranking them by the model's predicted probability. The green versus gray is a property of the data.
Their positions are a property of the model: the probabilities it assigned to each example. This by itself is not a classifier, but once we decide on a threshold, say 0.5, now we have a classifier. Every example above the threshold line is classified as positive, whether it was actually positive or not, and every example below the threshold is classified as negative. So the vertical axis is our prediction: anything above the 0.5 threshold is predicted positive, anything below is predicted negative. The true label is separated horizontally: to the left are the examples that truly are positive, to the right the examples that truly are negative, and here we've arbitrarily chosen a threshold of 0.5. Only once we choose a threshold does a score-based model become a classifier; until we decide on a threshold, it's just assigning scores or probabilities. After we choose a threshold, we get a classifier. So what did we do here? We just summed up the number of examples in each block. In the top-left are the examples that were actually positive and that we also predicted positive. Similarly, in the top-right are examples that were predicted positive but were actually negative. This kind of matrix is called a confusion matrix; that's standard terminology. A few properties of the confusion matrix. First, the total sum of all four squares is fixed, because the number of examples we have is fixed. The column sums are also fixed (9 plus 1, and 2 plus 8 in this case), because they are the number of positive examples and the number of negative examples we have. What can change in a confusion matrix, depending on the threshold we choose and the way the examples have been ordered by our model, is the fraction of the left column that goes above versus below, and the fraction of the right column that splits above versus below. So the total sum is fixed, and the column sums are also fixed. By looking at a confusion matrix, you judge the quality of your model by how heavy the diagonal is and how light the off-diagonals are. You want the diagonal to be heavy, meaning examples that were positive are predicted positive and those that were negative are predicted negative, and you want the errors to be low: examples that were positive but predicted negative, and examples that were negative but predicted positive. Another observation to make here is that a confusion matrix does not give you a scalar value; it's a set of four numbers. Given two different confusion matrices for the same dataset, it's hard to compare them and say which is better: they're not scalars, and you can't compare sets of four numbers unambiguously. So we'll start extracting comparable metrics out of it.
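Here's a sketch of building the confusion matrix from scores plus a threshold; the scores and labels are made up, not the lecture's exact 9/1/2/8 example.

```python
import numpy as np

def confusion_matrix(y_true, scores, threshold):
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    #                 actual +   actual -
    return np.array([[tp,        fp],      # predicted +
                     [fn,        tn]])     # predicted -

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])
print(confusion_matrix(y_true, scores, threshold=0.5))
# Column sums (actual positives, actual negatives) stay fixed as the
# threshold moves; only the split between the rows changes.
```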
The number in the top-left quadrant is called the true positives. "True positive" is a confusing term: it sounds like it describes the ground truth, but unfortunately it refers to examples that were actually positive and were also predicted to be positive. In this case, the true positives count is 9. On the right-hand side, we'll start extracting these metrics, or statistics, out of the confusion matrix. Next, true negatives: the number of examples that were actually negative and also predicted to be negative. Here, "trueness" refers to our prediction being correct; that's what it basically means. True positive means it was predicted positive and we got it right; true negative means it was predicted negative and we got it right. So trueness lies on the diagonal, and we want the true positives and true negatives to be high. A false positive is an example that was predicted to be positive, but our prediction was false: false positives are examples that were actually negative but that we predicted as positive. Similarly, false negatives are examples that were actually positive but that we predicted as negative. False positives and false negatives are two very different kinds of errors, and depending on the actual application you're working on, they can have very different kinds of impacts. It's very common for the weights you want to assign to false positives and false negatives to be asymmetric. So we've extracted four metrics from the confusion matrix so far. False positives and false negatives also go by other names: they're called type 1 and type 2 errors. A type 1 error is a false positive: the true label is y = 0, but you predict y = 1. Similarly, a type 2 error is another name for a false negative: the true condition is y = 1, but you predict y = 0. (There's a funny illustration of this that I found on someone's Twitter page, though I couldn't find the original source to cite it.) Back to the slide: the larger takeaway is that, depending on the kind of errors you make, the impact of false positives and false negatives can be very, very different.
For example, suppose you were to prescribe some kind of medication. False positives and false negatives can have very different effects. If you want to prescribe a medication based on whether a person has a disease, and that medication has no adverse side effects when given to a healthy person, then you're much more likely to worry about false negatives, because you don't want people to miss out on the medication: you want all the patients who have the condition to be flagged so that you can treat them. Another metric is accuracy. Accuracy is the fraction of all the examples, no matter what class they were, that we predicted correctly. Accuracy is essentially the trace of the matrix divided by the sum of all the elements. If you want to optimize your model for accuracy, that corresponds to using what's called the 0-1 loss: you incur a loss of 1 if you got the answer wrong and a loss of 0 if you got it right. Optimizing the 0-1 loss is very hard to do in practice because it is not a differentiable loss. Now, there is another very important metric called precision. In our day-to-day language we might use "accurate" and "precise" interchangeably, but in a statistical or machine learning setting the two have very different meanings. Accuracy is the fraction of all the examples that we got right, positives and negatives alike, regardless of what we predicted for each. Precision, however, asks: among the examples that we predicted to be positive, what fraction of them were actually positive? So with precision we're restricted to the top half of the confusion matrix, which contains all the examples that were predicted to be positive, and precision measures what fraction of those predicted examples were actually positive. There's another name for precision: PPV, which stands for positive predictive value. Yes, question? [inaudible] So the question is: if you worry a lot about false positives, how do you quantify that in your loss function? We'll touch on that in some of the later slides, but that's a good question, exactly the kind of question you should be thinking about: is our loss function treating false positives and false negatives asymmetrically, and if not, how do we make it do so? So that's precision: among the examples we predicted to be positive, what fraction of them were actually positive?
And you can see that all the metrics we've looked at so far (true positives, true negatives, false positives, false negatives, accuracy, precision) depend on the threshold that we choose. These are all threshold-sensitive metrics. Similarly, there is something called sensitivity, which is also called recall. What recall measures is: if we were to use this classifier in deployment, what fraction of all the actual positives are we going to recover? Say there are 100 patients out there and some fraction of them, say half, have a certain condition. We run all the patients through our model, and the model predicts positive or negative for each patient. If we then look at all the patients who actually had the condition, what fraction of them did our model predict as positive? That's sensitivity. The term sensitivity comes, I think, from epidemiology: how sensitive is this test at detecting a certain condition? That's exactly what it's measuring. In this case, say there were a total of ten positive examples; I'm counting the examples in the left half of the confusion matrix, and sensitivity is the top number, 9, divided by 9 plus 1. So the sensitivity here is 90%, because we were able to correctly recover nine out of the ten positive examples. One thing to note is that sensitivity stays agnostic to everything on the right half: we condition on just the positive examples and check what fraction of them we recovered, completely ignoring what happened to the examples whose ground truth was negative. Another way to think of this: if you want perfect recall, 100% recall, then no matter how good or bad your model is, just choose a threshold of 0. If you choose a threshold of 0, you will always get 100% recall. And similarly, going back to the earlier example about precision: if you want a model that is 100% precise, all you have to do is choose the threshold so that it classifies only the single top-ranked example as positive (assuming that example is truly positive), and you'll get 100% precision. So viewing these metrics in isolation generally does not capture the true performance of a model.
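The metrics defined so far, computed from the four confusion-matrix counts; this is a sketch, with the lecture's running 9/2/1/8 example plugged in.

```python
def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp):   # PPV: of the predicted positives, how many were right
    return tp / (tp + fp)

def recall(tp, fn):      # sensitivity: how many actual positives we recovered
    return tp / (tp + fn)

tp, fp, fn, tn = 9, 2, 1, 8
print(accuracy(tp, fp, fn, tn))   # 0.85
print(precision(tp, fp))          # 0.818...
print(recall(tp, fn))             # 0.9
```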
So what we'll see shortly is that most of the time we're trying to balance precision and recall in some way: we don't want our threshold blindly placed all the way at the bottom to maximize recall, and similarly we don't want our threshold blindly placed all the way at the top just to get good precision numbers. Yes, question? [inaudible] So the question is: why are we looking at the top half when we're interested in the left half? The left half of this matrix contains the examples whose ground truth is positive. But when the model makes a prediction, it does not know what the ground truth is; it assigns some kind of probability, and depending on the threshold we choose (0.5 in this case), we predict everything the model scored above 0.5 as positive. We don't know the ground truth. Yes, we're interested in the left half, but the only split the model can make is into top half and bottom half. Good question. So that's recall. Similarly, there's something called negative recall. Negative recall does the same exercise as recall, but on the right-hand side of the confusion matrix: of all the examples whose actual label is negative, what fraction of that sub-population did the model correctly classify as negative? And in negative recall, just like positive recall, we stay agnostic to the left half of this picture: we don't care how correctly or incorrectly the model performed on the examples whose ground truth is positive. We consider only the examples on the right-hand side and see what fraction of the actual negative examples the model correctly classified as negative. Negative recall has another name: specificity. So negative recall is specificity. Now you can start combining these individual scores in certain ways. For example, there's the F-score, or F1 score. The F1 score is the harmonic mean of precision and recall: 2/F1 = 1/Precision + 1/Recall, which works out to F1 = 2 × Precision × Recall / (Precision + Recall). So you can combine your precision and recall into a single number called the F1 score. Similarly, just like the F-score, there's something called the G-score, which is the geometric mean of precision and recall: log G = (log Precision + log Recall) / 2, so G = √(Precision × Recall).
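In code, using the precision and recall values from the running example above (a sketch):

```python
import math

precision, recall = 9 / 11, 9 / 10

f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
g  = math.sqrt(precision * recall)                   # geometric mean

print(f"F1 = {f1:.3f}, G = {g:.3f}")
# Both means collapse toward 0 if either precision or recall is tiny,
# unlike the arithmetic mean; that is why they are used here.
```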
Next question? [inaudible] So what do these F-scores and G-scores represent? As we saw before, looking only at precision or only at recall is insufficient, because you can game recall to 100% by setting the threshold all the way to 0; but if you were to do that, you'd take a hit on precision. Similarly, if you push precision to 100% by setting your threshold all the way at the top, so that you classify just the one top-ranked correct example as positive and ignore everything else, your precision will be 100% but your recall will be really low, because you've recovered only one out of all the positive examples you have. So we need some way to balance the two. The geometric mean and the harmonic mean are kinds of means where, if either input is low, the entire mean is low, which is not the case with the arithmetic mean. The arithmetic mean is pretty robust and stays high, but the geometric mean will drag the value down if either input is low. So these are just two different ways of combining the two metrics; it's another way of asking, roughly, what is the min of my precision and recall? That's what the geometric mean is kind of doing. Next question? "So why can't we just [inaudible]?" So the question is: why not just focus on accuracy, why are we doing all these other things? We'll come to that. [inaudible] And does accuracy capture precision and recall in some way? Maybe, maybe not; we'll see why accuracy is not always the right thing to look at. In situations where your data is well balanced, with an equal number of positives and negatives, accuracy, precision, and recall behave more or less similarly. But if you have class imbalance, say just 1% of all your examples are positive and 99% are negative, then a model that just says no to everything will get 99% accuracy. So in class-imbalance situations, precision- and recall-style metrics are much more robust to being fooled, whereas accuracy can be easily fooled in low-prevalence scenarios.
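To make that class-imbalance point concrete, a small sketch; all the numbers are illustrative.

```python
import numpy as np

# With 1% prevalence, a classifier that predicts "negative" for everything
# looks great on accuracy but has zero recall.
y_true = np.array([1] * 10 + [0] * 990)   # prevalence = 1%
y_pred = np.zeros_like(y_true)            # model that always says "no"

accuracy = np.mean(y_pred == y_true)
recall = np.sum((y_pred == 1) & (y_true == 1)) / np.sum(y_true == 1)

print(f"accuracy = {accuracy:.0%}, recall = {recall:.0%}")
# accuracy = 99%, recall = 0%: accuracy alone is easily fooled here.
```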
So that's the F-score. Now, everything we've seen so far is specific to the chosen threshold: the model did the ranking, then we chose a threshold, and once we chose it we got this confusion matrix, and from the confusion matrix we extracted all these threshold-specific metrics. What happens if we change the threshold from 0.5, say up to 0.6? If you choose a different threshold, some of these numbers change; the metrics that changed are highlighted in red. Going back one slide so you can compare: this was with threshold 0.5. With 0.6, the true positives changed, the true negatives did not change, the false negatives changed, the accuracy changed, and the specificity did not change. So some of them changed and some did not. Now we can repeat this exercise with as many different thresholds as possible. First of all, how many effective thresholds are there in this situation? For example, choosing the threshold here versus slightly below it changes nothing: no example changes classification. So the number of effective thresholds is 1 plus the number of examples. That gives us all the distinct thresholds that are meaningful. So what we can do is repeat this exercise not just at 0.5 and 0.6, but at all possible thresholds. The ranking has not changed; the ranking is a property of the model, but the threshold is something we choose to apply after the fact. Each particular value of the threshold corresponds to a row in this table on the right side, and as we change the threshold, we get different values of true positives, true negatives, accuracy, precision, recall, and so on. As I said, each row corresponds to an effective threshold, and the table itself is a property of our model: if we were to take a different model and use its predictions, the ranking would probably be very different, so this entire table is specific to the model that made the predictions, while each row within the table depends on the threshold we chose. You can make a few more observations about this table. Many of the columns are monotonic: true positives starts at 0 and monotonically increases to 10; true negatives monotonically decreases; similarly, false positives monotonically increases and false negatives monotonically decreases; recall monotonically increases, starting at 0 and ending at 1; and specificity monotonically decreases, starting at 1 and ending at 0. However, there are other metrics, like the F1 score, with no strict ordering: it reaches a maximum somewhere around a threshold of 0.4. Similarly, accuracy reaches a maximum around 0.45 and falls off as we go to the extremes. And precision is interesting, because it keeps going up and down: it starts at 1, comes down to 0.6, goes up to 0.75, 0.8, 0.83, and then comes down to 0.71; it just oscillates. So what we want is to capture the trade-off between these columns in some way. As we saw before, if you just want to maximize recall, set your threshold to 0; if you just want to maximize precision, maybe set it at 1 or just below 1. Instead, we want to look not at how the model performs at one specified threshold, but at a summary of these entire columns. That's the question: how do we summarize the trade-off between, say, precision and recall, or specificity and sensitivity? That's where things like the ROC curve come into the picture.
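Here's a sketch of scanning every effective threshold and tabulating the threshold-sensitive metrics; the scores and labels are made up.

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])

# n + 1 effective thresholds: one above every score, plus one at each score.
thresholds = np.concatenate(([1.1], np.sort(scores)[::-1]))
for t in thresholds:
    y_pred = (scores >= t).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    recall = tp / np.sum(y_true == 1)
    precision = tp / max(tp + fp, 1)   # avoid 0/0 at the top threshold
    print(f"t>={t:.2f}  TP={tp} FP={fp}  recall={recall:.2f}  precision={precision:.2f}")
```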
The ROC curve, the Receiver Operating Characteristic curve, captures the trade-off between specificity and sensitivity. Sometimes you may see the ROC curve flipped in some books or articles, with 1 minus specificity on the x-axis, but it's basically the same plot. The ROC curve is the plot you get by scanning the threshold. Going back two slides: as we scan the threshold from 0 to 1, for each threshold we read off the sensitivity and specificity and plot that point on a graph of sensitivity versus specificity; as we plot the points we get from the different thresholds, we connect them, and we get this ROC curve. The ROC curve gives you the trade-off between sensitivity and specificity as we scan the threshold. For example, when the threshold is set at 1.0, all the way at the top, your sensitivity will be 0 and your specificity will be at its maximum, which corresponds to the top-left corner. As you keep scanning your threshold down toward 0, you start exploring all these other points. Here's another way to think of the ROC curve: create a grid. Chop up the horizontal axis into as many equal-sized steps as you have positive examples, say 12, and chop up the vertical axis into the number of negative examples you have, say 13. Now your model gives you some kind of ordering: positive, positive, negative, positive, positive, and so on. One way to think of this ROC curve is to start from the top-left and inspect one example at a time down the ranking: if you encounter a positive example, move right; if you encounter a negative example, move down. First a positive, so we move right; another positive, move right; next we get a negative example, move down; then a positive, move right; and so on. Because we can make exactly 12 right moves and exactly 13 down moves, no matter what order the examples come in, you always start at the same corner and always end at the opposite corner. And if all your positive examples are ranked at the top, the curve turns toward the top-right corner. This intuition gives rise to a derived metric from the ROC curve, called the area under the ROC curve. The area under the ROC curve will be high if most or all of the positive examples are the ones you encounter first as you start scanning down.
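Here's the grid-walk construction in code, a sketch on the same toy scores as before. A positive example moves us right (sensitivity up), a negative one moves us down (specificity down), and accumulating area during the down-moves gives the area under the curve.

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])

order = np.argsort(-scores)                    # rank by model score
n_pos = int(y_true.sum())
n_neg = len(y_true) - n_pos

sensitivity, auc = 0.0, 0.0
for label in y_true[order]:
    if label == 1:
        sensitivity += 1 / n_pos               # right move
    else:
        auc += sensitivity / n_neg             # down move sweeps area
print(f"AUC = {auc:.4f}")  # 1.0 exactly when all positives are ranked first
```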
That's because a positive example just keeps you moving to the right. Another observation you can make is that ROC curves can always be broken down into horizontal and vertical segments, or chunks, where each chunk corresponds to one run of consecutively ranked positives or negatives. You keep moving to the right, getting one long horizontal chunk, while you encounter consecutive positive examples; then, once you switch over to some negative examples, you get a kink and start moving down, and so on. So this is the ROC curve, and the area under the ROC curve is generally a useful metric for checking how well our model classifies: if it does a good job of placing all the positives up and the negatives down, the area under the ROC curve will be high. Any questions on this? Good. A few more observations on ROC curves. If you have a model that randomly assigns probabilities to your examples, the ROC curve will fall along the diagonal. That is, if your model, instead of predicting a meaningful probability, just samples a probability uniformly between 0 and 1, the ROC curve will exactly coincide with this diagonal line. Of course, this diagonal line looks smooth only if you have lots and lots of examples; otherwise it's a jagged staircase around the diagonal. So a random guesser will have an area under the ROC curve of 0.5; that's the low bar you should be aiming to beat. If your model has an area under the ROC curve of less than 0.5, it's doing worse than random guessing; that's bad. However, if it is really low, say 0.1, that's in a sense good news, because you can just flip the labels and get a really high value. So staying at 0.5 is bad; you want it to be as high as possible, and if it's really low you can always just flip the labels and it will perform pretty well. Of course, that should still disturb you: you should definitely investigate why it's predicting backwards; maybe there's a bug somewhere. The point is that 0.5 should be the low bar when you're looking at ROC values. If you read some paper that says the area under the ROC curve was 0.65, you should not think of the model as having scored 0.65 on a scale from 0 to 1; you should read 0.65 as a ratio on a scale starting from 0.5 to 1, not starting from 0 to 1. Something with an area under the ROC curve of 0.6 is quite close to random guessing. There's another interpretation of the area under the ROC curve. Suppose you choose a random positive example, and then separately you choose a random negative example.
Now, what is the probability that, in this random pair of positive and negative examples, the positive example is ranked higher than the negative example? That probability is exactly equal to the area under the ROC curve. So that's another way to think of the ROC curve: what's the probability that a random positive example is ranked higher than a random negative example? "Ranked" here means: pick one random positive example, say this green dot, and another random negative example, say this gray dot; those random positive and negative examples each get some kind of score from the model. Were those scores ordered in the correct way? That's what the area under the ROC curve measures. Any questions? Another thing to note is that the ROC curve is agnostic to the prevalence of your dataset. What that means is: if instead of 13 negative examples you had 130, the granularity of each vertical drop would be smaller, but roughly speaking you'd still get a similar area under the curve; it would just be a little more fine-grained. So the area under the ROC curve is agnostic to the prevalence of the dataset. That's something to keep in mind when you're evaluating models, because it's both a strength and a weakness of the ROC curve. If the ROC curve reported to you was measured on a dataset with equal prevalence, but in the real world the true prevalence is very different, then the ROC performance in the real world will still be about the same; that's the good part. But the bad part is that the ROC curve may then be measuring the wrong thing: if what it reports is agnostic to the prevalence, maybe what the ROC curve measures is not the right metric for your problem. We'll see some of that soon. So that's ROC curves.
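The pairwise interpretation just stated can be checked directly; this is a sketch on the same toy data as the grid-walk example.

```python
import numpy as np

# AUC as the fraction of (positive, negative) pairs ranked correctly.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])

pos = scores[y_true == 1]
neg = scores[y_true == 0]

# Compare every positive score against every negative score.
auc = np.mean(pos[:, None] > neg[None, :])
print(f"AUC = {auc:.4f}")   # matches the grid-walk value above (0.8125)
```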
We got one example, and that one example was positive, so the precision is 100% but the recall is only 0.1. We arrive at the point where precision is 100% and recall is 0.1. Then we bring the threshold down one more level. The precision is still 100%, and the recall has increased to 0.2, because we have recovered 2 out of the 10 positive examples. Then say we encounter a third example, and it's negative. The recall has not improved at all, because we've still recovered only 2 out of the 10 positives, but the precision drops from 2 out of 2 to 2 out of 3. So the drops are always vertical: whenever we encounter a new negative example, the precision comes down but the recall stays the same, because the number of positive examples we've recovered does not change. The precision-recall curve has these vertical drops that correspond to encountering negative examples in our scan. But once we're here, say we start encountering positive examples again. Then we go from 2 out of 3, to 3 out of 4, to 4 out of 5; a gradual increase as we encounter more positives. So the precision-recall curve has this pretty standard pattern of vertical drops and slow climbs, vertical drops and slow climbs, whereas the ROC curve has horizontal and vertical movements. Okay? And just like the ROC curve, you can measure the area under the precision-recall curve. Another observation you can make: while the ROC curve always started at the top-left corner and ended at the bottom-right corner, the precision-recall curve behaves differently when it reaches 100% recall. When you take the threshold all the way to 0, you have effectively predicted everything to be positive, so your precision will be exactly equal to your prevalence. The precision-recall curve therefore terminates, at the right-hand side, at a level equal to the prevalence. In this case the prevalence was roughly 0.5, so it ends here at 0.5.
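Continuing with the same made-up scores from the ROC sketch above, the precision-recall curve and its area can be read off the same way; note that the terminal precision equals the prevalence.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1])
scores = np.array([0.95, 0.90, 0.85, 0.80, 0.70, 0.65, 0.50,
                   0.45, 0.40, 0.30, 0.20, 0.15, 0.10])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
print(average_precision_score(y_true, scores))  # ~0.74, one AUPRC estimate

# At the lowest threshold everything is predicted positive, so the
# rightmost point of the curve sits at precision = prevalence:
print(precision[0], y_true.mean())  # both 6/13 ~ 0.46
```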
That also means there is a certain region of the plot that the precision-recall curve just cannot enter, no matter how horribly your model ranks the examples. In the worst case, your model places all the negatives at the top and all the positives at the bottom, the worst possible situation. Then you keep encountering negatives, which kill your precision at zero recall, and you only start climbing the precision-recall curve from the first positive example: once the threshold gets there, the precision starts at 1 over (the number of negative examples plus 1). From there you move on to 2 over (the number of negatives plus 2), and so on, until you reach (number of positives) over (number of negatives plus number of positives). It's a slow climb up to the prevalence level. So no matter how bad your model is, you get that much area for free in the area under the precision-recall curve. There are variants of the area under the precision-recall curve that account for this, that discount the free area and measure only the area your model has truly earned, disregarding the part you get for free. That's something worth knowing: if your prevalence is very high, your area under the precision-recall curve might look very good just because you get a lot of area for free. All right, moving on. Okay, so let's look at these two models. Suppose there is some dataset and we have two models, model A and model B, and each of them assigns a probability value to every example. This is the plot: these are the probability values assigned by model A, and these are the probability values assigned by model B. Now, just by looking at this, is there a reason to prefer one model over the other? Does one of them look better than the other? B looks better, okay. Anybody think A looks better? Why does B look better? So the answer was: B was confused about a few points, but it did well in this region and did well in this region. Yeah, it might be reasonable to think B does better and that we should choose B. Now, of all the metrics we have looked at so far, which metric would rank B higher than A? None? Any other guesses? Right, that's the correct answer. None of them will rank B higher than A, because the ranking of all the points is the same. All the points over here, though they are clustered together: if you just look at the ranking positions of the examples, we encounter positives and negatives in exactly the same order, and all the metrics we've seen so far only cared about the ranking. For example, for the area under the precision-recall curve, we were scanning thresholds, but we only made a move when we encountered the next positive or negative example. We didn't care what the threshold value was; we just looked at the order in which positives and negatives were encountered. So all the metrics we've seen so far are ranking-based metrics. And as we can see here, these are two situations that qualitatively look very different, but from the point of view of every metric we've seen so far, they're exactly the same. That's the motivation for something called the log-loss. The log-loss is in fact the loss value that we use in logistic regression.
So think of the log-loss this way. If you remember, in logistic regression, if the true label was y and our prediction was ŷ, the per-example log-likelihood we worked with was y log ŷ + (1 − y) log(1 − ŷ). This value itself can be used as a metric: it's basically the log-likelihood of our prediction, and its negative is what's called the log-loss. The log-loss is different from the metrics we've seen so far in that it takes into account the value of the predicted probability itself; it's not just looking at the relative ordering. Summed over the dataset, the log-loss is −Σᵢ₌₁ⁿ [ y⁽ⁱ⁾ log ŷ⁽ⁱ⁾ + (1 − y⁽ⁱ⁾) log(1 − ŷ⁽ⁱ⁾) ]. So this loss takes into account the predicted probability values as well, okay? And it rewards confident correct answers: when you predict a correct answer with a relatively high probability, it rewards you for pushing that probability even higher, but at the same time it very heavily penalizes you when you get an answer wrong and are very confident about it. It's asymmetric in that way: it rewards confident correct answers, but it pretty much kills you if you make a confident wrong answer. And it is this property that makes the model choose its actual probability values more carefully. This objective does not just look at whether the positives were ranked high and the negatives low; it also looks at what level of confidence the model assigned to each of its predictions.
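To see this concretely, here is a minimal sketch in the spirit of the A-versus-B comparison above. The labels and scores are made up; the only property that matters is that the two models rank the examples identically, so every ranking-based metric agrees, while the log-loss does not.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

y = np.array([0, 0, 1, 0, 1, 1])

# Same ordering of examples in both models, different confidence levels.
model_a = np.array([0.45, 0.46, 0.47, 0.48, 0.49, 0.50])  # timid
model_b = np.array([0.05, 0.10, 0.55, 0.60, 0.90, 0.95])  # confident

print(roc_auc_score(y, model_a), roc_auc_score(y, model_b))  # identical
print(log_loss(y, model_a), log_loss(y, model_b))  # ~0.67 vs ~0.30
```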
Another way to think of the log-loss is as a betting game. Say you need to make predictions on n examples, which can be positive or negative; you're presented one example at a time and you need to predict true or false. But the way you're asked to make the prediction is not just to state it: you're asked to place a bet, some amount between 0 and 1, say 0.25 or 0.99, on what the correct answer is for the given example. If you win the bet, you earn the bet amount; if you lose, you earn 1 minus the bet amount. So for example x1 you predict, say, y = 1 and place a bet p1; for x2 you predict y = 0 with some bet amount p2, and so on. If you win, you get p; if you lose, you get 1 − p. And finally, after you've gone through all the examples, what you take home is not the sum of these amounts but their product. So for one example maybe you got the correct answer and earned p, and for another example maybe you got it wrong and earned 1 − p. The final take-home amount that you walk away with at the end is the product of all these values. Which means that if, for any one of the examples, you assigned a confidence of 1, or 0.99999, and you happened to be wrong, the entire product will be 0 or very close to 0. In fact, there are studies showing that with this kind of exercise you can extract the true level of confidence a person has when making a bet. If you were going to earn the sum of these amounts, your betting strategy would be very different. But if you're going to walk away with the product, then you know that if you make one confident wrong prediction, assigning a probability of 1 when the answer was wrong, you get 0, and the product of 0 with anything else is 0. And that's basically how the... yes, a question? [inaudible] So the question is, can we use this on models that make direct class predictions rather than scores or probabilities. No, the log-loss is for probabilistic models only. [inaudible] So the question is, and perhaps rightly you're identifying this, wouldn't this encourage the models to be more conservative rather than predicting extreme values? Yes, the log-loss can make your models more conservative, but the interesting thing is that this kind of loss is a proper scoring rule, which means the predicted probabilities that come out of optimizing this kind of loss will be calibrated probabilities. [Student: Say you want to apply such a model in the financial market. You're never going to profit with this, because in the financial market you don't take products, you take sums, and if one investment makes a loss [inaudible]] Yeah, so the question is, would this be a good metric or loss to use in a financial setting, where you generally take the sum of rewards rather than the product? If you're interested, the thing to look for is other, more robust scoring rules; we're going to look at one in a few slides. There are other scoring rules that still give you calibrated probabilities but are much more risk-tolerant than the log-loss, and you can use those in such contexts. So the log-loss is a loss that not only evaluates whether you got the answer right or wrong, but also elicits the true level of confidence, the true probabilistic belief, you have in making a certain prediction.
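A toy simulation of that product payoff (all numbers invented): one maximally confident wrong bet zeroes out the entire take-home amount, and the log of the product is exactly the log-likelihood of the outcomes, so maximizing your winnings is the same as minimizing the log-loss.

```python
import numpy as np

def take_home(correct, bets):
    # Earn the bet p when right, 1 - p when wrong; take home the product.
    payoffs = np.where(correct, bets, 1 - bets)
    return payoffs.prod()

correct = np.array([True, True, True, True, False])
cautious = np.array([0.8, 0.8, 0.8, 0.8, 0.7])  # the wrong bet is hedged
reckless = np.array([0.9, 0.9, 0.9, 0.9, 1.0])  # full confidence, and wrong

print(take_home(correct, cautious))  # ~0.12, small but positive
print(take_home(correct, reckless))  # exactly 0
```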
The log-loss, then, is an example of a proper scoring rule; we saw proper scoring rules in the maximum-entropy lecture as well. So that's the log-loss. Which brings us to this concept of calibration. Calibration is when the predicted probabilities match the real-world frequencies of occurrence. In this case we have two models, on the right over here: the blue line is the calibration curve of a logistic regression model, and the orange line is the calibration curve of a support vector classifier, a support vector machine. The way to read this plot: the x-axis corresponds to the predicted probabilities, and the y-axis is the fraction of positives among the examples whose predicted probability was in the region around that value. The bottom plot over here is just a histogram of how the predicted probabilities were distributed. And you always want to look at a calibration plot and the histogram of predicted probabilities in conjunction, all the time. If the calibration curve stays close to the 45-degree line, it means that when the predicted probability was 0.8, for example, the actual fraction of positives was also roughly 80%. Similarly, of all the predictions around 0.2, roughly 20% were positives. The orange curve, however, looks pretty different: even when it predicted a probability of 0.6, most of those examples were actually positives. You can think of the orange model as being underconfident in some way: even though the examples were mostly positive, it hesitated to assign them high probabilities. Even in the region where its predictions are 0.6 or 0.7, nearly all of the examples are actually positive. So a perfectly calibrated curve would be a straight line; an underconfident model has a calibration curve with this kind of inverted S-shape; and an overconfident model has a calibration curve bent the other way. A few takeaways from this slide. It's always important to see these two plots in conjunction, and the way you want to read them is: first, make sure that your model is well calibrated. If your model is not even calibrated, then the probabilities it's predicting are not even valid probabilities. Then, once you know the model is well calibrated, look at the histogram to see whether you have the two nice peaks at the extremes. That's the ideal scenario: the calibration curve lies exactly along the 45-degree line, and the histogram has one peak all the way at the left and another all the way at the right.
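A minimal sketch of producing the raw numbers behind those two plots with scikit-learn; the dataset here is synthetic and the model choice is arbitrary.

```python
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

probs = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Fraction of actual positives vs. mean predicted probability, per bin.
frac_pos, mean_pred = calibration_curve(y_te, probs, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted ~{p:.2f} -> actual fraction {f:.2f}")
# For the histogram half of the plot, inspect np.histogram(probs, bins=10).
```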
However, if your model is not well calibrated, as in the orange case, then the histogram says nothing at all. It does not tell you whether the model is doing well or badly. In this example, for a chosen threshold of 0.5, the logistic regression and the support vector machine have the same 0.87 precision, 0.85 recall, and 0.86 F1 score; exactly the same. But the calibration of the support vector machine is completely bonkers, it's not well calibrated at all, and so its histogram does not reflect the quality of the classification. If the model is well calibrated, though, the histogram gives you a good sense of how well the model is separating the classes. So, just like the log-loss we saw over here, there is another score you can use called the Brier score. The Brier score is very similar to the mean squared error: it is (1/n) Σᵢ₌₁ⁿ (p̂⁽ⁱ⁾ − y⁽ⁱ⁾)², where p̂⁽ⁱ⁾ is the predicted probability. It's just the mean of the squared error between the label and the predicted probability. Yes, question? So, if your model is overconfident, then your histogram will generally look like the blue one, because it's assigning really extreme probabilities; the probabilities pile up at the extremes. But the model can be confident and wrong, which means it got the orderings wrong, and the histogram won't show that: the labels are not visible in the histogram. It tells you that most of the examples got a certain probability, but not what fraction of them were correct or wrong. The calibration curve is what tells you what fraction were correct or wrong. [inaudible] Could the histogram look similar to the well-calibrated one? It could, yes. Generally, underconfident models assign more probabilities near 0.5, so you'll see a bulge there, and confident ones have more assignments at the extremes. What you ideally want are confident models that are also calibrated. Yes, question? [inaudible] So the question is, why not just look at the calibration plot and ignore the histogram completely? The answer is that, back in the calibration discussion, we saw how calibration does not necessarily mean accuracy at all. You can have a perfectly calibrated plot where most of your assigned probabilities are just 0.5, with a few that are well calibrated elsewhere; for most examples you're just predicting 0.5, and that's kind of bad. So ideally you want a plot that is perfectly calibrated and also well separated, with the two peaks. That's the good sign. So the Brier score is basically the mean squared error between the predicted probability and the label: y is either 0 or 1, and p̂ is between 0 and 1, okay? And we see that even though the usual ranking-based metrics are exactly the same for our two models, the Brier score between them is quite different.
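A sketch of the Brier score on the same two hypothetical models from the log-loss example above; as with the log-loss, identical rankings give different Brier scores.

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y = np.array([0, 0, 1, 0, 1, 1])
model_a = np.array([0.45, 0.46, 0.47, 0.48, 0.49, 0.50])
model_b = np.array([0.05, 0.10, 0.55, 0.60, 0.90, 0.95])

# Equivalent to np.mean((p - y) ** 2) for each model.
print(brier_score_loss(y, model_a))  # ~0.24
print(brier_score_loss(y, model_b))  # ~0.10
```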
And the Brier score is also a proper scoring rule. So in the example one of you asked about before, the finance-type scenario where you want to be more risk-tolerant in some way, you can use the Brier scoring rule. The Brier score is also a proper scoring rule, which means it too values calibrated probabilities, but it is more risk-tolerant; it has other trade-offs compared to the log-loss. Any questions on this? So the takeaway is: always look at the calibration plot and the histogram in conjunction. First make sure that the model is well calibrated; otherwise its predictions are not even valid probabilities. And once it's calibrated, look at how well separated the predictions are. You may also see the terminology calibration versus discrimination. Discrimination generally refers to how well your model can separate positives and negatives in a ranking sense, and calibration tells you how meaningful the assigned probability values are. You generally want your model to be well calibrated and to have high discrimination. Right, moving on. So, the log probability, the log-loss, can also be used in unsupervised learning. Supposing you have a Gaussian mixture model or factor analysis, you can use log p(x) to measure whether your model has underfit or overfit. The raw value of log p(x) by itself is hard to interpret; it's hard to just look at the log-likelihood and tell whether the model is good or bad. But you can always measure the log-likelihood, log p(x), on your training set and on your test set, and measure the difference between them. If the gap between them is very large, if you have a high log-likelihood on your training set but a low likelihood on your test set, then your unsupervised learning model has overfit your training data. So underfitting and overfitting, even though the terms are commonly used in a supervised learning setting, are not limited to supervised learning alone; the concepts apply just as well in unsupervised settings.
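A minimal sketch of that train/test likelihood check, using scikit-learn's GaussianMixture on synthetic data; the component counts are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(200, 2))
               for m in (-4.0, 0.0, 4.0)])  # three well-separated blobs
X_tr, X_te = train_test_split(X, random_state=0)

for k in (3, 30):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X_tr)
    # .score() returns the average log p(x) per example.
    print(f"k={k}: train={gmm.score(X_tr):.2f}, test={gmm.score(X_te):.2f}")
# A large train-test gap (high train, low test) suggests the mixture
# has overfit, e.g. too many components.
```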
Yes, question? [inaudible] So the question is: you have a particular training value and a particular test value; how do you find a balance between them? The answer is very similar to what you would do in a supervised setting. Say you have 85% accuracy on your training set for supervised classification, and 23% on your test set. What do you do? You go and address bias or address variance to bring the gap closer together. Similarly, say you fit a Gaussian mixture model and the log-likelihood on your training data is something like minus 2,000; that's a reasonable ballpark value for the log-likelihood of a Gaussian mixture model. Let's say you get minus 2,000 on your training set and minus 18,000 on your test set. Then it probably means you have assigned too many clusters; you want to reduce your value of k, for example, and bring the two closer together, something along those lines. So log p(x) can be used as an evaluation metric: measure the gap between your training likelihood and your test likelihood to see whether your unsupervised model has overfit. Okay? However, k-means is a little tricky. K-means does not have a good probabilistic interpretation, and it also effectively uses fixed covariances, so with k-means it's harder to measure whether it has underfit or overfit. With Gaussian mixture models, or probabilistic models in general, this works pretty well. Yes, question? [inaudible] Yeah, so: can we take the mean squared distance between the centroids and the examples, which is basically the distortion function, and measure the distortion function on the test set? You could do that, but it does not always work well. For example, say you have some clusters like this, and say you assign six cluster centroids: this centroid, this one, this one, this one, this one, and this one. Looking at this qualitatively, you can tell you have overfit; there were really just three clusters, but you assigned six. However, if you were to hold out a fraction of the data and measure the distortion function on that held-out set, the train and test distortions would look pretty similar. Because the distortion function does not have a flexible covariance per cluster, it's hard to use it as a way to check whether you have underfit or overfit. [inaudible] Yeah, with k-means you usually end up doing some kind of visualization and spot checks: see whether a few examples that landed in the same cluster are meaningfully similar, or whether similar examples ended up in different clusters. It's not as cleanly defined as the log-likelihood you get from Gaussian mixture models. Yes, question? [inaudible] Yeah, so the comment there was: you can plot distortion versus k, and eventually the distortion is going to flatten out, and you just look at it and choose some value. However, as I said, this is a heuristic: you have to look at it and decide. You can't compare one scalar value on a test set with another scalar value on a training set and declare that the model has overfit.
So measuring underfitting and overfitting is not as clean with k-means as it is with Gaussian mixture models; but yes, that elbow heuristic is a commonly used approach. Right, moving on to class imbalance problems. Generally, if one class makes up a much smaller fraction of your examples than the other, if one class dominates the other, we say you are in a class-imbalanced scenario. And a lot of the metrics that we've seen today can end up breaking down under class imbalance. Take accuracy, for example. If you're in a class-imbalanced scenario and, say, just 1% of all your examples are positive, then a blind classifier that predicts "no" all the time gets 99% accuracy. So having a high accuracy does not mean much in a class-imbalanced scenario. Similarly for the log-loss: the majority class can easily dominate the assigned probabilities. Imagine a logistic regression where you have some large number of positive examples over here and just one negative example over here. Logistic regression will place the separating hyperplane, the line at which ŷ is 0.5, probably somewhere over here. So the majority class can easily dominate the log-loss and push the separating hyperplane to a rather unreasonable location. The log-loss is also vulnerable to class imbalance.
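A quick sketch of that blind-classifier failure mode, on made-up data with 1% positives:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y = np.zeros(10_000, dtype=int)
y[:100] = 1                   # 1% positives
always_no = np.zeros_like(y)  # blind classifier: predict "no" every time

print(accuracy_score(y, always_no))  # 0.99, looks great
print(recall_score(y, always_no))    # 0.0, it catches nothing
```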
What about the AUROC? Let's see how the AUROC can break down. Say you have examples that are ranked like this. Suppose you are in a fraud detection scenario, credit card fraud detection: you want to build a classifier that, given a credit card transaction, classifies it as fraudulent or not, and you decide to use AUROC as your metric. Typically, in these scenarios where the prevalence is very low, you are really in a retrieval scenario, an information-retrieval kind of setting: from this pool of all credit card transactions, you want to surface the fraudulent ones to the top and act on them somehow. So in this kind of scenario, say you have 10,000 transactions, and 10 of them are fraudulent. And say your model ranks them so that the first 10 are all good transactions, the next 10 are all the fraudulent ones, and then come the remaining 9,980 good ones. From the point of view of the user of this model, this is a useless model: you want to act on the top 10 transactions most likely to be fraudulent, but all of the ones up here were valid, and the fraudulent ones only come after an equal number of good transactions, followed by the rest. Now, what's the AUC of this model going to be? Let's do a quick sanity check. We have our 10 positives, so chop the sensitivity axis into 10 intervals, and chop the specificity axis into 9,990 pieces. The first 10 ranked examples are all negatives, so you get a small initial drop of ten tiny steps, then come all 10 positive examples, 1, 2, 3, up through 10, and then the rest of the negatives. This gives you an area under the curve of about 0.999, or 0.99; the exact value doesn't matter, it's a really high AUC. Even though, in terms of usefulness, the model flagged only good transactions at the top. So in class-imbalanced scenarios, the AUC can also be pretty misleading. Accuracy is the worst; you should probably never use accuracy when you have class imbalance. But AUC can also be fooled. The AUPRC is somewhat more robust in these kinds of scenarios, where you're interested in the quality at the top, where your focus is on the top of the ranking because you're generally not going to act on the rest at all; you're only going to do something with the top predicted examples. In those kinds of scenarios, you probably want to use AUPRC rather than AUROC. So the summary is: in class-imbalanced scenarios, your last choice should be accuracy. AUROC might be better or might not, because AUROC can also be fooled. AUPRC is generally more robust, but it comes with its own challenges. For example, if your precision-recall curve looks jagged like this, it is pretty sensitive to the ordering at the top; if the ordering were slightly different, the curve could look very different. So the AUPRC can sometimes be pretty sensitive to small changes.
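To sanity-check the numbers in that fraud example, you could construct exactly the ranking described and compare the two areas; the labels and scores below just encode that ranking.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# 10,000 transactions: the model's top 10 are all good (label 0),
# the next 10 are the frauds (label 1), then the remaining 9,980 good.
y = np.zeros(10_000, dtype=int)
y[10:20] = 1
scores = np.linspace(1.0, 0.0, 10_000)  # rank 0 gets the highest score

print(roc_auc_score(y, scores))            # ~0.999, looks excellent
print(average_precision_score(y, scores))  # ~0.33, exposes the bad top-10
```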
Yes, question? [inaudible] So the question is: can you adjust the class balance by upsampling or downsampling the classes when fitting your model? You can do such things; that's actually a very good point. In order to make your model perform better, you can do all kinds of things: adjust the class balance, upsample the minority class, downsample the majority class, and so on. However, when you are measuring the quality of your model on a test set or validation set, the prevalence in the test or validation set should match the true prevalence. Do the upsampling or downsampling adjustments only on the training set, not on the test set. [inaudible] It could help. Whether upsampling or downsampling will help or not: the answer is always to cross-validate and check. Have a validation set that is pure, in the sense that it has the true prevalence ratios. Do whatever you want with the training set, upsampling, downsampling, whatever, but then measure on this pure validation set, which has the right, true prevalence ratio, and see whether it helped. [inaudible] Yeah, generally AUPRC; and there are a few more metrics, which we'll look at a slide or two from now. So, you can extend all of these to a multi-class classification setting as well. The confusion matrix in the multi-class case will be k by k, depending on the number of classes you have. In the binary case, the confusion matrix had just four cells, two predicted classes by two actual classes, but in a multi-class setting you have one row per predicted class and one column per actual class, and you just count how many examples of each actual class you predicted as each class. Here you want the diagonal to be heavy. And the reason the name "confusion matrix" is used is probably more illuminating here: if you have high values in an off-diagonal cell, it means the model is frequently confusing one class for another. That's an interpretation you can use. Some of the other metrics can be analyzed one-versus-rest: you can plot an AUPRC curve for one class by taking the predicted probability assigned to that class versus all the others, condensing your classes into just two. Accuracy in the multi-class case is basically the sum of the diagonal divided by the sum of all the elements. There are variants of the ROC and precision-recall curves, with micro-averaging and macro-averaging, that extend them to the multi-class setting. And there is also something called cost-sensitive learning, where you assign some kind of dollar value to each cell in the confusion matrix and construct a loss function by absorbing those dollar values, or any kind of weights, into the loss, and then train on it. If the cost of a false positive is a lot more than the cost of a false negative, you can incorporate that kind of prior information into your loss function. You can look up cost-sensitive learning techniques to do that.
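A small sketch of a multi-class confusion matrix (made-up labels); note that scikit-learn's convention puts actual classes on the rows and predicted classes on the columns.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = ["cat", "cat", "dog", "dog", "bird", "bird", "bird"]
y_pred = ["cat", "dog", "dog", "dog", "bird", "cat", "bird"]

cm = confusion_matrix(y_true, y_pred, labels=["cat", "dog", "bird"])
print(cm)
print(np.trace(cm) / cm.sum())  # multi-class accuracy: diagonal over total
```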
And finally, some tips on choosing an evaluation metric. You will see that the problem you're trying to solve generally fits into one of a few templates or patterns. There are going to be some problems where high precision is a hard constraint and you want to maximize recall as much as possible subject to that high precision. One example would be when the action you take on examples predicted to be positive can have a severe negative side effect if the example actually happened to be negative. Then you want high precision as a hard constraint: when your model says positive, it has to be positive. And subject to that hard constraint, you want to improve your recall as much as possible. For example, search engine results. When you search for something on, say, Google, whatever shows up at the top had better be relevant, it had better be correct. You may miss a few relevant results, but showing irrelevant results at the top is really, really bad, as opposed to missing out on some relevant results. In those cases high precision is a hard constraint, and you want to improve your recall subject to that constraint. Similarly, if you're writing grammar-correction software: if you miss a few corrections, that's fine, but making a bad suggestion is a lot, lot worse. In those cases, the metric you want to use might not be any single one of the metrics we've seen; it could be recall at a certain precision. You have a precision-recall curve that looks like this; you fix your precision level at some acceptable threshold, measure what the recall is at that fixed precision level, and then optimize the recall. That length is now your metric: the recall at some fixed precision. On the other hand, you can have other kinds of problems, for example medical diagnostics. Say you have some kind of screening test, say a blood test, and if it comes out positive, you send the patient for a more expensive, more accurate test; you need to build that simple screening test. In that case it would be very bad to miss patients who actually have the condition; false negatives are a lot worse. But it's okay to have a few false positives, because you're going to conduct the more expensive test anyway on everyone you flag. In those kinds of scenarios, you want to maximize your precision subject to a high recall. You don't want the screening test to lose patients who have the condition. But at the same time, subject to that near-100% recall constraint, you still want to improve your precision, because you do not want to waste the more expensive test on people who may not have the condition. So in those cases you fix the recall and maximize the precision, and that height becomes the metric you care about. You can construct these derived metrics from the precision-recall curve depending on what's important for your application. And sometimes there may be a capacity constraint. Say that, among the patients admitted to the hospital today, you want to take some different action on, say, five of them, because you have only five doctors available to, say, test a new procedure. In that case K is a hard constraint; your capacity is a hard constraint. All that matters is the quality of the top-K results. You want high precision in the top K; you don't really care what happens below that, because even if you're losing other positives down there, you don't have the capacity to do anything about them. In those cases, just look at the precision in the top K: rank your examples, look at the top K ranks, and see what the precision is in that fraction. Okay. And there are many other derived metrics along these lines; it's something you should think about, deciding which metric is meaningful for the application you're working on. Similarly, the threshold you choose for classification would also be based on this: supposing you want a fixed recall at a certain level, go up to the curve, work backwards, and see what threshold got you that point.
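Sketches of two such derived metrics, recall at a fixed precision floor and precision in the top K; the helper names, data, and thresholds here are all hypothetical.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def recall_at_precision(y_true, scores, min_precision):
    precision, recall, _ = precision_recall_curve(y_true, scores)
    ok = precision >= min_precision
    # Best recall achievable while precision stays above the floor.
    return recall[ok].max() if ok.any() else 0.0

def precision_at_k(y_true, scores, k):
    # Precision among the k highest-scoring examples only.
    top_k = np.argsort(scores)[::-1][:k]
    return y_true[top_k].mean()

y = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 1])
s = np.array([0.1, 0.9, 0.65, 0.8, 0.7, 0.4, 0.95, 0.2, 0.6, 0.5])

print(recall_at_precision(y, s, min_precision=0.9))  # 4 of 6 positives
print(precision_at_k(y, s, k=5))                     # 4/5 = 0.8 here
```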
All right, that's about it for the evaluation metrics; we'll wrap up with that. Any other questions on what we've seen so far? If not, I'll be around here; walk up and ask any questions you have. That's it for today. We'll break.
All right. Welcome back, everyone. This is Lecture 17 of CS229. The topics for today: we'll finish the expectation-maximization algorithm, proving the convergence of the EM algorithm, and then we'll apply the EM algorithm to the Gaussian mixture model that we saw in the last class. Then we'll move on to a new model called factor analysis, and solve factor analysis, again, using expectation maximization. That's the plan for today. Before we jump into today's topics, a quick recap of what we saw in the last class. In the last class, we started off with unsupervised learning. In unsupervised learning, we are given just data points, a set of x's, and there is no supervision that comes along with them; there are no y-values accompanying the corresponding x-values. We are just given the x's. We started off with clustering problems: those in which we want to identify some kind of hidden structure where the data clusters into different groups. The first algorithm we saw was k-means. In k-means, we are given a collection of examples, the x's, and let's assume we are also given the number k of clusters we want to identify. The way we go about executing k-means is to randomly initialize k different cluster means, where mu_j refers to the j-th cluster; there are k such mu_j's, and each of these cluster centroids (they are called centroids) lives in R^d, the same space where our x's live. Then we loop until convergence. In step 1, which you can think of as the E-step, we set the cluster identity c_i of the i-th example to be the identity of the cluster centroid to which it is nearest: there are k different mu's, and for each i we check which of the k mu's is nearest, commonly using the l2 distance, and take the arg min. So the identity of the cluster to which x_i is nearest, we set that to be c_i. Then, in what you can think of as the M-step, for each cluster centroid we look only at the examples that got assigned to that centroid in the previous E-step, and take the average of those x's to update our estimate of mu_j. It's an iterative algorithm where in one step we perform a cluster-center assignment, assigning each point to a centroid, and then update the centroids using the mean of the assigned points. We also briefly discussed how this overall algorithm can be thought of as coordinate descent on a loss function called the distortion function. The distortion function takes two sets of parameters, the c's and the mu's, and this loss function can be minimized using coordinate descent. By coordinate descent we mean: hold some subset of the parameters fixed and optimize over the remaining subset, then hold those fixed and optimize over the first set, and so on. You can think of the assignment step as optimizing over the c parameters while holding the mu's fixed, and the update step as optimizing the mu parameters while holding the c's fixed.
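As a minimal sketch of that recap in code (NumPy, with random data; an illustrative implementation, not the course's reference code):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iters):
        # "E-step": assign each point to its nearest centroid (l2 distance).
        c = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # "M-step": move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(c == j):  # guard against empty clusters
                mu[j] = X[c == j].mean(axis=0)
    return mu, c

X = np.random.default_rng(1).normal(size=(300, 2))
centroids, assignments = kmeans(X, k=3)
```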
So that was k-means. Then we saw this other model called the Gaussian mixture model, which you can think of as soft k-means. In the Gaussian mixture model we are again given k, the number of clusters, or the number of Gaussian mixture components we want to fit, and we are given the set of x's. We don't know the cluster identities; if this were supervised learning, we would be given the c_i's, which would play the role of the y_i's. And we assume the data-generating model is as follows. There is a multinomial distribution, which you can think of as the class prior: roughly, what fraction of the examples sits in each cluster. The z_i's, the cluster identities of the corresponding x_i's, are sampled from this multinomial. Then, once we know z_i, for each value of z_i there is a corresponding mu and Sigma, and x_i is sampled from the corresponding Gaussian, based on the value that z_i took. It's important to note that z is discrete in this case: because it is sampled from a multinomial, z will be one of the k different Gaussian mixture components, and by the choice of z there is an implicit mu and Sigma associated with that Gaussian, and x is sampled from it. We think of this as the data-generating distribution. So the parameters of this model are the phi's, mu's, and Sigma's (in k-means, the parameters were the c_i's and the mu_j's; the parameters are in the green boxes). And then we came up with an iterative algorithm again. When we came up with it, we were just using heuristics to make something similar to k-means; we had not yet derived the EM algorithm in its most general form. The algorithm was: randomly initialize the parameters to some values, just as in k-means, and then set w_j^i, for each example i, to be the probability that z_i = j given x_i under the current values of the parameters. This loop is over the examples: for each example, the weight w_j^i tells us the probability that x_i corresponds to the j-th cluster, so for every i, the w_j^i sum to 1 over j. Then, using these weights, we perform what we call the M-step, which is to recalculate, or update, the values of the parameters using the current estimates of the weights. This again was very similar to what we did with k-means: think of the red box as the E-step and the green boxes as the M-step, where we update the parameters based on what we derived in the E-step. In the case of a mixture of Gaussians, each cluster centroid is the weighted average of the x_i's; in k-means it was just the plain average of the x_i's assigned to that cluster, whereas here every example belongs to every cluster with a different weight.
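A compact sketch of those two steps for a Gaussian mixture in code (NumPy plus SciPy's multivariate normal density; this mirrors the update rules on the slide but is an illustrative implementation, not the course code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, phi, mu, Sigma):
    """One EM iteration. Shapes: phi (k,), mu (k, d), Sigma (k, d, d)."""
    n = X.shape[0]
    k = len(phi)
    # E-step: w[i, j] = P(z_i = j | x_i), by Bayes' rule.
    w = np.column_stack([phi[j] * multivariate_normal.pdf(X, mu[j], Sigma[j])
                         for j in range(k)])
    w /= w.sum(axis=1, keepdims=True)
    # M-step: weighted updates of the mixture parameters.
    for j in range(k):
        wj = w[:, j]
        phi[j] = wj.sum() / n
        mu[j] = wj @ X / wj.sum()
        diff = X - mu[j]
        Sigma[j] = (wj[:, None] * diff).T @ diff / wj.sum()
    return phi, mu, Sigma
```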
So mu_j is just a weighted mean of all the x's, and we have corresponding parameter update rules for the phi_j's and Sigma_j's. Basically, all of these derivations were very similar to the case of GDA, Gaussian discriminant analysis, except for a few small changes. The changes are that each cluster can have its own covariance rather than a shared covariance, and that there are k clusters rather than just two. Otherwise this is pretty much exactly the same as GDA. Then we moved on to deriving the most general form of the EM algorithm. In the EM algorithm, what we wish to maximize is the marginal likelihood, log p(x; theta). That is our likelihood objective. Had we observed the full data, had we observed all the z's as in the case of GDA, we would instead maximize log p(x, z; theta). But z is not observed, so the most rational thing to do is to maximize the likelihood given the evidence that we have; p(x) is also called the evidence. Then we do some pretty straightforward algebraic manipulations. log p(x) is the same as the log of p(x, z) with z marginalized out; that's just the definition. And then we cook up an expectation out of nothing, by multiplying and dividing by some arbitrary distribution Q over z. Once we multiply and divide, we can think of the expression as an expectation. The way to see that step: the expectation of some function g(z), where z is distributed according to Q, is by definition the sum over z of Q(z) times g(z). And if we set g(z) to be p(x, z) / Q(z), our expression is exactly that expectation, just by the definition of expectation. So we started with the log-likelihood, the thing we always want to maximize, and the log in the log-likelihood is a concave function; and we cooked up this expectation out of thin air by multiplying and dividing by Q(z). With a concave function and an expectation, we apply Jensen's inequality, which allows us to swap the order of the log and the expectation and gives us an inequality. And this inequality, we just gave it a name: we called it the ELBO, the evidence lower bound, because this is the evidence, and this is a lower bound of the evidence, obtained through Jensen's inequality.
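Written out, the chain of manipulations just recapped is:

```latex
\log p(x;\theta)
  = \log \sum_z p(x,z;\theta)
  = \log \sum_z Q(z)\,\frac{p(x,z;\theta)}{Q(z)}
  = \log \mathbb{E}_{z\sim Q}\!\left[\frac{p(x,z;\theta)}{Q(z)}\right]
  \;\geq\; \mathbb{E}_{z\sim Q}\!\left[\log \frac{p(x,z;\theta)}{Q(z)}\right]
  =: \mathrm{ELBO}(Q,\theta)
```

where the inequality is Jensen's, applied to the concave log.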
And as a corollary, we also saw that this lower bound holds with equality if and only if Q(z), the distribution with which we take the expectation, happens to be p(z | x). Jensen's inequality did not tell us this directly, but we derived it and showed, as a consequence, that Q(z) must equal p(z | x) in order for the term inside to be constant. Now we have Jensen's inequality and we have this corollary, and using the two, we came up with the more general form of the EM algorithm. In the most general form of the EM algorithm, we assume there is some parameter theta. In the case of the Gaussian mixture model, theta is the collection of phis, mus, and Sigmas; we're just calling it theta. It could be any collection of parameters, depending on the model we have. The most general form of the EM algorithm is again iterative, and in each iteration we perform the E-step, which is basically calculating the optimal Q distributions. The optimal Q distribution, as we saw, is the posterior distribution p(z | x). So in the E-step we calculate the Q distributions, and in the M-step we maximize the ELBO. Writing the ELBO in a more verbose form, it is the sum over examples, from i = 1 to n, and again just by the definition of expectation, each term is the sum over z of Q_i(z) times log of p(x_i, z; theta) over Q_i(z). A few important things to note here. In the E-step, for each example, we calculate the posterior distribution, and once we calculate it, the Q's are held fixed during the M-step. That's a crucial detail. The Q's over here are constants: even though the Q's had thetas in them, we assume those thetas are from the previous iteration, and we hold them fixed during the M-step. When we perform the arg max over theta, the only theta being optimized is the one that shows up in p(x, z; theta) in the numerator of the ELBO. The Q distribution, remember, came from the E-step, and in the E-step there was a theta in it, but we don't write it as p(z | x; theta) here, because we are not optimizing that theta; we hold it fixed and optimize only the theta that shows up in the numerator. This gives us a visual understanding of the EM algorithm, where the dotted black line is log p(x), which we assume is hard to evaluate or hard to optimize directly; we're going to assume it is not directly accessible for optimization. Instead, we start with some random initialization, some theta_0, and for this particular value theta_0 we construct the corresponding Q distribution as the posterior of z given x at theta_0.
And that gives us the corresponding ELBO, where Q corresponds to the posterior at Theta naught. By the corollary, this ELBO will be exactly equal to log p(x) at Theta naught. So we are at Theta naught, and by constructing and evaluating the ELBO, we effectively know the value of log p(x) at that point. But now we want to keep adjusting our Theta values so that the corresponding log p(x) keeps increasing; that is our goal. Our objective is to find the value of Theta that maximizes log p(x), and we are doing the EM algorithm to achieve that indirectly. If we had observed z, we could have directly performed gradient ascent on the objective. Instead, we do this iterative algorithm where in each step we construct a lower bound using a specially chosen Q, the posterior of z given x at Theta naught, and then maximize the ELBO instead of maximizing log p(x) directly. When we maximize the ELBO, we get an updated value Theta 1, the point where that ELBO is maximized. Using Theta 1, we construct a new ELBO which is tight at Theta 1; this ELBO uses Q_1, the posterior evaluated at Theta 1. That is the ELBO of the second iteration, which we then locally maximize to get Theta 2, and using Theta 2 we construct the third ELBO, and so on. It is this iterative algorithm that we call the EM algorithm. What we are going to do today is first give a quick and easy proof that the EM algorithm always converges to a local optimum. Then we will apply it to the Gaussian mixture model and end up with the same update rules, using this more principled approach of iteratively maximizing the ELBO. Then we will move on to factor analysis, a different latent variable model where there is a hidden z, and solve that using EM as well. That is the plan for today. Any questions so far? Yes? [inaudible] You used the Gaussian mixture model and [inaudible] every time new customers [inaudible] a new initialization, or will you [inaudible] shift over to Bayesian methods and then start classification? So the question, if I summarize it correctly, is: what if you are in a streaming setting where you get new data, you already have a model fit on the old data, and you obtain some new data? Say you are performing market segmentation, you fit a model on customer data, and you get data from new customers. The first thing you want to ask is: do we want to hold on to the same number of clusters k, or do we want to update k? That is the first question to ask when you get new data.
If you want to hold on to the same number of clusters k, then you can run the EM algorithm using the previously optimized parameter values as the initialization and refit over the whole data, and hopefully that will just converge faster. However, if you decide you want a larger k, more clusters for the updated data, then you basically have to start over from a fresh random initialization. [inaudible] Yeah. So the question is: we fit the model, we get new customer data, and how do we place the new customers into one of the existing clusters? In the case of k-means, that is pretty straightforward: in place of x_i, you plug in the new x-star, the new customer's data, and find the closest centroid. In the case of the Gaussian mixture model, you calculate the posterior distribution using Bayes' rule to get a probability distribution over your mixture components. That is essentially very similar to what you did with GDA. In GDA there were only two classes, so the posterior took a logistic regression form; in the Gaussian mixture model, it ends up taking the form of a softmax. And because the covariances are different, it will effectively be a softmax using quadratic features. Good question. Was there another question? Yes, question. So for each x_i, Q_i will basically put them in the [inaudible], so is it like a discrete classification? So the question is: in the E-step, are we calculating a discrete distribution over the set of z, the k classes? Exactly, that is exactly what happens here. Think of it as Q_i(z_i = j): Q is a multinomial distribution over the clusters. So Q of z_3, say, for the third example, once you calculate it, gives you a multinomial distribution over the k clusters, cluster 1 through cluster k, each with some value, 0.1, 0.9, 0.01, and so on. What does z_3 mean? z_3 is the latent variable of the third example; each x_i has a corresponding z_i. Was there another question? Yes. I didn't get what you mean by quadratic features, because we used a covariance. So the question is what I meant by quadratic features. We discussed this briefly in GDA, and it is not super critical that you understand it. If you remember, in GDA, if your data resides in two clusters and we assume equal covariances, then the separating decision boundary is a straight line. But if we assume different covariances, it can be a curved line, and the posterior is effectively represented by logistic regression using quadratic features of x_1 and x_2.
You will have k different centroids, each with its own covariance, since we assume a different covariance for each class, so it is as if you are performing a softmax with quadratic features. The generalization of the logistic function to k clusters is the softmax. If you assume equal covariances for the Gaussians, you effectively get a softmax using linear features; if you allow different covariances, it is effectively a softmax using quadratic features. But it is not super critical that you understand the connection to the softmax and quadratic features. Yes, question. So I have a question about why the ELBO touches log p(x) at only that particular Theta; is that because of how we are constructing the ELBO? Exactly. The ELBO touches log p(x) there because we intentionally, carefully constructed it that way: we choose the Q in the ELBO to be the posterior, which makes it tight at log p(x). So let's move on to the proof of why the EM algorithm converges. Proof of EM convergence. Our strategy will be to show that for every iteration t, L(Theta^(t+1)) is greater than or equal to L(Theta^(t)), where L is the objective we want to optimize: L(Theta) is log p(x; Theta). What we want to show is that, iteration after iteration of the EM algorithm, the marginal log-likelihood of x never decreases. By showing this, and by recognizing that the log-likelihood is bounded above, it cannot go off to infinity, we can make the case that because L is bounded above and every iteration increases it, eventually we converge. So what remains is to show this condition, and to show it we use the same reasoning we saw with the diagrams. First, L(Theta^(t+1)) is greater than or equal to ELBO(x; Q_t, Theta^(t+1)). Why is this the case? This is just what Jensen's inequality gave us: the log-likelihood at any value of Theta is greater than or equal to the ELBO at that same Theta, for any choice of Q. Next, this is greater than or equal to ELBO(x; Q_t, Theta^(t)). Why is this the case? [inaudible] Exactly. This is because of the M-step. The M-step guarantees that Theta^(t+1) was chosen to be the maximizer of the ELBO, so Theta^(t+1) gives the highest value of the ELBO; this step holds because Theta^(t+1) is the argmax. And finally, this is equal to L(Theta^(t)).
And why is this last step the case? This is the corollary of Jensen's inequality, the tightness condition. So what is this telling us? To visualize the proof, take t equals 0. L(Theta_1), the black dot over here, is greater than or equal to the ELBO evaluated at Theta_1; the log-likelihood always lies above the ELBO. That is step 1. Step 2 says the ELBO evaluated at Theta^(t+1) is greater than or equal to the ELBO evaluated at Theta^(t), because Theta_1 was chosen to maximize the blue curve. And step 3 says L(Theta_0) equals the ELBO at Theta_0, because we intentionally constructed the blue ELBO to equal log p(x) at the current value Theta_0; the gap between the blue line and the black line there is zero. Chaining these together tells us that L(Theta_0) is less than or equal to L(Theta_1): at each step we are guaranteed not to get worse in terms of the log-likelihood. Yes, question? So in the last equality, should it be Q_(t-1), because when we got L(Theta_t) we would have had Q_(t-1) from the previous Theta? So the question is whether this should be Q_(t-1). I am saying L(Theta^(t)) equals ELBO(x; Q_t, Theta^(t)), where Q_t is the posterior evaluated at Theta_t. So the right-hand side comes from the E-step: at the current value of Theta, we use the E-step to calculate Q_t, and using Q_t we construct the ELBO, and that ELBO will be equal to the log-likelihood there. This is like starting at Theta naught in the picture and constructing the blue ELBO: the blue ELBO equals log p(x) at that point. The next step is to go from Theta_t to Theta_(t+1) by optimizing the blue ELBO. But doesn't L(Theta_t) take in the previous Q, not the current Q? L(Theta_t) has no Q in it at all: L(Theta_t) is just log p(x; Theta_t). There is no Q in it. Is this clear? Any questions? Cool. So the proof comes out to be pretty intuitive and can be easily visualized. We start at any random value of Theta; the log-likelihood evaluates to some value which we may or may not be able to calculate directly, but we can always construct an ELBO such that log p(x) equals that ELBO at that point. The ELBO is guaranteed to lie below log p(x). Then we optimize the ELBO and reach a higher point of the ELBO. Now, it was important that the ELBO was tight at the starting point. What would happen if it was not tight? What would happen if the ELBO was not tight over here?
Yes, question. [inaudible]? Right. If the bound were not guaranteed to be tight, it may well have been the case that the likelihood function dips, and we might have gone down in likelihood rather than being guaranteed to go up. Because the bound was tight at Theta naught, the other two pieces of the proof let us conclude that L(Theta_1) is at least L(Theta_0). If it were not tight, the likelihood at Theta naught could have been much higher than the ELBO, and we might actually have gotten worse at optimizing our objective. Is that clear? Cool. All right, so now we have derived the most general form of the EM algorithm. Here the z's and x's could be anything; we made no assumption about the joint probability p(x, z) taking any specific form. Now, taking this generic algorithm, we will see how to apply it to the Gaussian mixture model, and we will see that we end up with the same update rules. This part will be super quick; I will just give some hints on how to go about it, and in your homework you will do a more detailed version. So: the Gaussian mixture model by EM. Before we go into how to do it, here is a general template, a rule of thumb to follow whenever you want to apply EM. Step 1: write out p(x, z). By "write out" I mean write out your assumptions. We may observe only x's and not z's; that is fine. Independent of what data you observe and what you do not, first write out your model. It may be something like this: z comes from a multinomial, and x given z comes from a normal distribution N(Mu_z, Sigma_z). Why do we call this the model? Because this is effectively p(z) times p(x given z), which equals p(x, z). Writing it in this form is called writing the data-generating process, and by writing the data-generating process you are implicitly defining what the model is. So the first thing you always want to do is write out the model, be clear what the data-generating process is and what your parameters are. In this case, the parameters of p(x, z) are the Phi's, the Mu's, and the Sigma's. Be clear on what the parameters are and what the form of the full model is. Step 2: clearly identify what is latent and what is observed, that is, what is the evidence. In most of the models we will see, x is the evidence, the part of the data that is observed, and z is latent. That is how we generally name things: when you look at the models in the notes, x will generally refer to something observed that is given to us, and z to a latent variable that is not observed.
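As a concrete picture of this data-generating process, here is a minimal sketch in Python. All of the numbers here, k = 3 clusters, d = 2 dimensions, and the particular phi, mus, and Sigmas, are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 3, 2, 500
phi = np.array([0.5, 0.3, 0.2])                  # multinomial parameters; must sum to 1
mus = np.array([[0., 0.], [5., 5.], [-5., 5.]])  # one mean per cluster
Sigmas = np.stack([np.eye(d) * s for s in (0.5, 1.0, 2.0)])  # one covariance per cluster

z = rng.choice(k, size=n, p=phi)                 # latent: z_i ~ Multinomial(phi)
x = np.stack([rng.multivariate_normal(mus[j], Sigmas[j]) for j in z])
# The learner sees only x; the cluster identities z are hidden.
```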
Once you are clear about what the full model is, what the parameters are, what is observed, and what is latent, only then attempt to apply EM. If you are not clear about those first two steps, it is going to be very hard to apply EM and you will just get confused really badly. Always be clear on the full model, the data-generating process, which is the same as the model; the parameters; what is observed; and what is latent. In the case of the Gaussian mixture model, the latent variables z are the cluster identities; the parameters are the multinomial parameters and the different sets of Mu's and Sigma's; and what is observed is just the x's, without the cluster identities. Another thing to be clear about: there are two kinds of unknowns here. We have the unknown latent variables and the unknown parameters. What is the difference? The unknown latent variables are example-specific: for a different x, you have a different latent variable. The unknown parameters are global: the means, the covariances, and the parameters of the multinomial are not specific to any example, whereas the latent variable is paired with a given example. You can think of this as a frequentist model, where we consider the parameters to be unknown constants; you could also perform a Bayesian treatment where you consider everything to be a random variable, but that is beyond our scope. For the EM algorithm, it is important to be clear that the parameters are global, not specific to any example, and that the other kind of unknown, the latent variable, is paired with an example. What we do in the E-step is estimate the latent variable paired with each example, which is why the E-step loops over each i: we calculate the corresponding hidden latent variable for each given example while holding the parameters fixed. In the M-step we go the other way: we take the estimated z values as given and update the parameters. So in the E-step we are attacking one kind of unknown, the one specific to each example, and in the M-step we attack the other kind, the global one. This is going to be common in all models where you apply EM: in the E-step we attack the latent variables, in the M-step we attack the parameters. So the E-step in the Gaussian mixture model is: for each i, Q_i(z_i = j) equals p(z_i = j given x_i) at the current values of Phi, Mu, Sigma. Now, how do we obtain a posterior distribution of z given x? Bayes' rule, exactly. This is p(x given z) times p(z) over p(x). And because z is discrete, the denominator is the summation over all z of p(x given z) times p(z).
What is p(x given z) here? It is a normal distribution, so p(x given z = j) is 1 over (2 Pi) to the d/2 times the determinant of Sigma_j to the 1/2, times the exponential of minus 1/2 (x_i minus Mu_j) transpose Sigma_j inverse (x_i minus Mu_j); I am just writing out the multivariate Gaussian. Then times p(z), which is just Phi_j, since everywhere it is implicit that z equals j. The denominator is the sum over all j of the same thing. Should there be a minus? Yeah, there is a minus sign in the exponent, minus one half; thank you. We have done such calculations before: we calculated this kind of posterior in GDA, and we also saw that if p(x given z) belongs to the exponential family, there is an exponent here, the constants get absorbed into the exponent, and taking the log through you get an exponent of something divided by a sum of exponents, which works out to the softmax form. So plug in all the values and you get the estimates for the E-step. That is the E-step, and I am going to erase this, so hopefully you have taken note of it. Now the M-step. In the M-step, we want Mu, Sigma, Phi equal to the argmax over Mu, Sigma, Phi of the ELBO, exactly. The ELBO can be written out as we saw over there, and for simplicity, let's call the E-step quantity w^i_j. We call it w^i_j because, once calculated, it will be held constant in the M-step. Even though w^i_j depended on the values of the parameters when it was calculated, in the M-step we will not be optimizing over it; we hold it fixed, since that is how the ELBO is constructed. So we argmax over the ELBO expression: the sum over i from 1 to n, the sum over j from 1 to k, of w^i_j times log of p(x^i, z^i; Mu, Sigma, Phi) divided by w^i_j. This is just writing out the ELBO by expanding the expectation. Now we can break the joint down according to our data-generating process: p(x^i given z^i = j) times p(z^i = j), over w^i_j. The same ingredients as before: p(x given z) is a multivariate Gaussian PDF, p(z = j) is the multinomial, which is just Phi_j, and the w's are constants. For the M-step, the w's are constant: in the E-step we derived them using the previous parameter values, and once computed, we hold them frozen in the M-step while performing the maximization.
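Here is a minimal sketch of the E-step that produces these w^i_j, assuming x, phi, mus, and Sigmas as in the sampler above; scipy's multivariate normal pdf stands in for the density written out on the board:

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, phi, mus, Sigmas):
    """w[i, j] = Q_i(z_i = j) = p(z_i = j | x_i; phi, mu, Sigma) via Bayes' rule."""
    n, k = x.shape[0], len(phi)
    w = np.zeros((n, k))
    for j in range(k):
        # numerator of Bayes' rule: p(x_i | z_i = j) * p(z_i = j)
        w[:, j] = phi[j] * multivariate_normal.pdf(x, mean=mus[j], cov=Sigmas[j])
    w /= w.sum(axis=1, keepdims=True)  # denominator: sum_j p(x_i | z_i = j) p(z_i = j)
    return w
```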
So plug the Gaussian PDF in here and Phi_j in here, and to perform the argmax: take the gradient, set it equal to zero, and solve for the parameters. This looks pretty big and nasty, but it is straightforward; there are no special tricks going on, it is just tedious. Write out the Gaussian PDF and Phi_j, take the derivatives, set them equal to zero, and doing that separately for Phi, Mu, and Sigma, we end up with Phi_j equal to 1 over n times the sum over i of w^i_j, Mu_j hat equal to the weighted mean we saw earlier, and Sigma_j hat equal to the corresponding weighted covariance. Write it out like this, take the gradient, set it equal to zero, and we get these. This is the way we start from the general EM algorithm, work out the E-step and M-step, and in the M-step carry out the maximization. By following this principled approach, we see that we end up with the same update rules that we obtained earlier using the heuristic soft k-means kind of argument. Next question? So do we have k different Mu's and k different Sigma's? Exactly. So the question is, do we have k different Mu's and k different Sigma's? Yes: when I write Mu here, I mean the collection of k different mean vectors, and likewise k different covariance matrices. So in the M-step we are simultaneously updating all of those? Yes, in the M-step we simultaneously update all of them. You can do the derivation for any one Mu_j or Sigma_j, and it is symmetric: once you do it for one, the same procedure updates all the others. Good question. So that is EM, and EM applied to Gaussian mixture models. What we did was come up with a general EM algorithm, and to apply it to the Gaussian mixture model, the first thing we had to do was get clarity on what the model was, the full joint probability distribution over all the variables, the x's and z's, and on what the model parameters are: in this case Phi, the collection of Mu's, and the collection of covariance matrices Sigma. Then get clarity on what is observed and what is not. Once you know that, it becomes clear what to do in the E-step: calculate the posterior, the probability of the unobserved given the observed, whatever that is for your specific model, holding the parameters fixed. The steps for evaluating it differ between models: for Gaussian mixture models it happened to involve a normal and a multinomial; for the factor analysis model we will see next, they will be different. But the general recipe is the same: construct the posterior distribution in the E-step, and then in the M-step write out the ELBO, plugging the full model into the numerator and the posterior Q distributions into the denominator and into the expectation.
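To make the M-step concrete, and to check numerically the monotonicity we proved earlier, here is a minimal sketch reusing x and e_step from the sketches above. The initialization is my own arbitrary choice; the update formulas are the ones just derived:

```python
import numpy as np
from scipy.stats import multivariate_normal

def m_step(x, w):
    n, d = x.shape
    k = w.shape[1]
    phi = w.mean(axis=0)                       # phi_j = (1/n) sum_i w_ij
    mus = (w.T @ x) / w.sum(axis=0)[:, None]   # weighted mean of the x_i per cluster
    Sigmas = np.zeros((k, d, d))
    for j in range(k):
        diff = x - mus[j]
        Sigmas[j] = (w[:, j, None] * diff).T @ diff / w[:, j].sum()  # weighted covariance
        Sigmas[j] += 1e-6 * np.eye(d)          # tiny ridge to keep the covariance invertible
    return phi, mus, Sigmas

def log_likelihood(x, phi, mus, Sigmas):
    # log p(x) = sum_i log sum_j phi_j N(x_i; mu_j, Sigma_j)
    px = sum(phi[j] * multivariate_normal.pdf(x, mus[j], Sigmas[j]) for j in range(len(phi)))
    return np.log(px).sum()

rng = np.random.default_rng(1)
k = 3
phi = np.ones(k) / k                                # start from uniform weights,
mus = x[rng.choice(len(x), size=k, replace=False)]  # k random data points as initial means,
Sigmas = np.stack([np.cov(x.T)] * k)                # and the overall covariance for each cluster

prev = -np.inf
for _ in range(30):
    w = e_step(x, phi, mus, Sigmas)
    phi, mus, Sigmas = m_step(x, w)
    ll = log_likelihood(x, phi, mus, Sigmas)
    assert ll >= prev - 1e-8   # EM never decreases the log-likelihood
    prev = ll
```

The assert inside the loop is exactly the convergence argument in code: with each iteration, the marginal log-likelihood can only go up or stay flat.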
The denominator, the probability with which we take the expectation, is constant in the M-step. We hold the Q distributions fixed, and by holding them constant we optimize over the parameters; when we do the argmax, it is only these parameters we are updating. There are no hidden parameters inside the Q's that we optimize; for the purposes of the M-step, the Q distributions are fixed. Then we take the gradient, treating everything except the parameters as constant. This is just calculus: take the gradients, set them equal to zero, and we end up with the update rules specific to your model and to what was observed and what was not. Any questions? Yes. So why do we take the sum over j? If the i-th example is associated with the j-th cluster, why would you want to update all the other Mu's? So the question is why we sum over j. In k-means we assign each point to only one cluster; in the Gaussian mixture model, every point belongs to every cluster with different weights. But why do that, rather than have it belong to just one of the Gaussians? What we got here is purely a consequence of applying the EM algorithm, and we proved that EM converges and maximizes the likelihood. The choice of summing over j was not an arbitrary choice; it just falls out of EM. Yes, question? What if, instead of taking the expectation, we made a point estimate each time, taking the maximum? So the question is: instead of taking the expectation, what if we took the mode, the highest-probability cluster? In that case, the Gaussian mixture model would effectively be modified back into k-means. All right, so factor analysis. Factor analysis probably has the most tedious calculus in this whole course, so quite a lot of symbols are going to come up on the board. But even though the expressions might look complex and hard, the idea is pretty simple. In factor analysis, we consider an interesting and challenging scenario. You have x_i belonging to R^d, and you are given a collection x_i for i equals 1 to n. We are interested in the situation where d is much bigger than n. In most common scenarios, the number of examples is much bigger than the dimension of the data, but there can be situations where the dimension of the data is much bigger than the number of examples. It is challenging because, if we assume the data came from some kind of Gaussian distribution and estimate the covariance matrix as Sigma equals 1 over n times the sum over i of (x_i minus Mu)(x_i minus Mu) transpose, each term inside the sum is a rank-one matrix, and we are summing n rank-one matrices, so Sigma will have rank at most n.
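A quick numeric check of this rank argument, with made-up sizes d = 50 and n = 10:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 10
x = rng.normal(size=(n, d))
mu = x.mean(axis=0)
Sigma = (x - mu).T @ (x - mu) / n    # sum of n rank-one matrices, d x d

print(np.linalg.matrix_rank(Sigma))  # at most n (in fact n - 1 = 9, since mu was estimated)
print(np.linalg.det(Sigma))          # ~0: the Gaussian pdf cannot be evaluated
```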
But n is much smaller than d. Sigma is a d-by-d matrix, but it is only rank n. Now, if you want to use this covariance matrix to calculate your Gaussian PDF, p(x; Mu, Sigma) with its 1 over (2 Pi) to the d/2 and the determinant, the determinant here will be 0, because Sigma is not full rank. And because you get a 0 in the denominator, you cannot even evaluate the PDF. So the question is what we do in such scenarios. Yes, was there a question? [inaudible] I'm sorry? [inaudible] This will be rank n because we are summing n rank-one matrices. [inaudible] The matrix is d by d, Sigma is d by d, but it will have only rank n. Most of the time, when n is bigger than d, it will be rank d; but in cases where n is much smaller than d, it will be a singular matrix and we cannot do much with it. Yes, question. [inaudible] I wouldn't say so; I will come to that. We are just considering a scenario where the x's come from a high-dimensional space and the number of dimensions is much greater than the number of examples we have. We can think of a few ways to address this. One way: instead of considering Sigma to be a full-blown covariance matrix, maybe we restrict Sigma to be just a diagonal matrix, so that the number of parameters reduces. But even if you limit it to a diagonal matrix, the number of diagonal values is still d, which is greater than n. We could also think of restricting Sigma to be some kind of scalar times the identity matrix, but then we are effectively limiting ourselves to covariances that are spherical, and that may not capture all the interesting structure in your data, so this would not be very interesting either. Instead, what we do in factor analysis is think of a latent variable z. We are observing x's, but we are going to assume that these x's live on some kind of lower-dimensional subspace. We assume there is a z that is normally distributed, where z_i is k-dimensional, z_i in R^k; so we assume there is this lower, k-dimensional subspace in which the z's reside. And we make a second assumption: x given z is a normal distribution with mean Mu plus Lz. In the notes they use capital Lambda to denote the mapping matrix; I personally don't like Greek letters, nothing against Greeks, but I am just going to call it L because it looks less scary. So there is a matrix L times z, and a covariance Psi; let's leave one more Greek letter in there, no problem. Here L is a matrix that maps us from k dimensions to d dimensions, so it is d by k. Are there any Greeks here? What is the right way to pronounce this? Psi? Anybody know? All right, I'll call it Psi.
So Psi is d by d, and we are going to assume it is diagonal. What is happening here? We assume there is a low-dimensional, k-dimensional subspace, where k, we will assume, is less than n, the number of examples. There are latent variables z residing in this k-dimensional subspace, and they get mapped onto the higher-dimensional space of the x's through what you can call an uplifting matrix: when you multiply z by L, it takes you from k dimensions to d dimensions, so Lz is d-dimensional; then it gets shifted by some offset Mu, and some random diagonal noise gets added. To compare with Gaussian mixture models: in the Gaussian mixture model, z was multinomial; here z is continuous. In both, x given z is normally distributed, except that in the Gaussian mixture model, because z was discrete, we had a separate Mu and Sigma for each value of z. Here you are not going to have a separate Mu and Sigma; instead, the mean of x given z is this term, some parameter Mu that is common across all examples, plus the mapping of z from the lower k-dimensional subspace to the higher d-dimensional space. L is the mapping that takes you from the lower dimension to the higher dimension. Yes, question? What is z? You are not doing discrete classification? In factor analysis we have moved on from classification, and here we are essentially trying to find a subspace of the x's that we have. The x's live in a high-dimensional space; what we are trying to do is find a low-dimensional subspace in which the z's reside, assuming that each observed x has a corresponding latent variable z, and that x is generated from its latent variable through this relation. What is your goal, exactly? The goal here, as with all probabilistic unsupervised models, is to fit p(x; Theta). In this case, Theta is, what do we have here, Mu, Psi, and L. We want to find Mu, Psi, L such that we have a probabilistic model, a density estimator, over the x's. There are many interesting scenarios where this can be useful. For example, suppose you have temperature sensors spread across your building, and say there are d such sensors. If you measure all your sensors at a particular time, that collection of observations is one x^i, so x^i belongs to R^d; instead of i you can think of it as t, one snapshot of all the temperature sensors in your entire building giving you one high-dimensional observation. But all these different temperature values may not be independent.
Two temperature sensors inside this room are probably going to give you very similar values. Similarly, if you have ten different sensors here, you might see some slight differences, maybe warmer close to the light bulbs and colder in a few places, but more or less they are not independent. If you collect the readings of all the temperature sensors in the entire building, there are probably a few factors that affect the different sensor readings across all of them. Basically, our goal is to come up with a model for p(x), for example to get a sense of what normal sensor readings look like. Using this, you could build some kind of anomaly detector to see if there is, say, a fire going on somewhere. The idea is that even though the x's reside in a much higher-dimensional space, the actual temperatures they end up measuring are based on far fewer factors, the z's living in a k-dimensional subspace. The assumption is that there are probably just k factors that decide the temperatures you observe in all d sensors, and given these d-dimensional observations, we are trying to build a model from which we can not only fit p(x) well but also hopefully make inferences about the z's. It serves a dual purpose. In the case of Gaussian mixture models, the latent variables were discrete and we were trying to cluster data; here it is a fundamentally different problem, trying to find subspaces in our data. So we are no longer doing EM? We will do EM on this. EM made no assumption about z being discrete or continuous: when we wrote out the ELBO, we intentionally used an expectation, and if z is continuous, the summation becomes an integral. So, back to where we were: we want to learn a model p(x) so that we can do things like anomaly detection. Yes, question? [inaudible] Are we going to learn [inaudible]? So the question is: is k going to be learned automatically, or is it a hyperparameter? For the purpose of this lecture, assume it is a hyperparameter, and you tune different values of k. Yes, question. [inaudible] Yes. So the assumptions we are making here are what is called factor analysis, and there are in fact a few variants of factor analysis. Here we are assuming the Psi matrix is diagonal with different values on the diagonal. You can tweak that assumption to say there is equal noise along the whole diagonal, and that gives you a different model; with a few other small changes, you get something called probabilistic PCA. This specific set of assumptions is called factor analysis. [inaudible] Yeah, that is just an assumption we are making.
[inaudible] Yeah, the assumption that z has mean 0 and identity covariance is just an arbitrary assumption, and it turns out that most of the time it is good enough. Making that assumption about x itself would be absurd, but we have this offset Mu and scaling L that we apply to z, and the L and Mu make up for the fact that we gave z no degrees of freedom there. Now, using this set of assumptions, our goal is to maximize log p(x; Mu, L, Psi). How do we go about doing it? The answer is EM. And as I mentioned already, the first thing to do in EM is to be clear on what the model is. The model was described like this initially: z comes from N(0, I), and x given z comes from N(Mu + Lz, Psi). We are now going to make a small change and rewrite this as an equivalent model: z ~ N(0, I), Epsilon ~ N(0, Psi), and x = Mu + Lz + Epsilon. Yes, question. [inaudible] why is Psi not [inaudible]? We will see; we are going to work it out. So these two are basically equivalent. Are there any questions on why these two are equivalent? We will be using this trick again in a couple of lectures. This follows from what is called the scale-and-location property of Gaussians. Over here, you can decompose x into two parts: x given z has mean Mu plus Lz and covariance Psi. Think of this as the mean and this as the covariance: a normal distribution with some mean m and some covariance S can be written as m plus a normal distribution with mean zero and covariance S. [inaudible] Right, this is loose notation; think of it as adding a constant to a random variable distributed with mean zero and that covariance. [inaudible] So assume there is a random variable distributed normally with mean m and covariance S. This random variable can be written as the sum of a constant plus another random variable, where this other random variable has mean zero and covariance S. I am doing essentially the same thing here; that zero-mean variable is the Epsilon. Yes, question. [inaudible] So x has, you can see here, mean Mu plus Lz, and we just rewrite it as Mu plus Lz plus Epsilon. [inaudible] any contribution from Mu, because x is Mu plus... It will, but this is just x written as a sum; we have not written the distribution of x here yet. You can just rewrite x as this thing. So, making this observation, we can now write out the model: (z, x) jointly has some mean vector Mu_{z,x} and some covariance Sigma. What we are going to do for the next few minutes is write out this model, and once we are clear about the model and its different components, we are going to attack it using EM.
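Here is a minimal sketch of this data-generating process in its rewritten form x = Mu + Lz + Epsilon, with made-up sizes (d = 5 observed dimensions, k = 2 factors). We can reuse these samples in a moment to check the joint covariance blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 2, 200_000
mu = rng.normal(size=d)
L = rng.normal(size=(d, k))            # the d x k mapping matrix (capital Lambda in the notes)
Psi = np.diag(rng.random(d) + 0.1)     # diagonal d x d noise covariance

z = rng.normal(size=(n, k))            # z ~ N(0, I_k)
eps = rng.normal(size=(n, d)) * np.sqrt(np.diag(Psi))  # eps ~ N(0, Psi); valid since Psi is diagonal
x = mu + z @ L.T + eps                 # equivalently: x | z ~ N(mu + L z, Psi)
```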
So first, as we did with the Gaussian mixture model, we want to be very clear about the model: what are the latent variables, what are the observed variables, what are our parameters? It turns out that the mean vector Mu_{z,x} has two parts, a mean corresponding to z and a mean corresponding to x; these are just standard multivariate Gaussian properties, the same ones we saw when we were talking about Gaussian processes. The mean of z is just 0. And what is the mean of x? The mean of x equals the mean of Mu plus Lz plus Epsilon, which is Mu plus 0 plus 0, so just Mu. So the joint distribution p(z, x) is a normal distribution whose mean is the vector (0, Mu). The covariance has four blocks: the covariance of z, the cross-covariances of z and x (one the transpose of the other), and the covariance of x. It can be shown pretty straightforwardly that the covariance of z is just the identity. The covariance of x and z is, by definition, the expectation of (x minus E[x])(z minus E[z]) transpose, and if you expand this out, it just turns out to be L, with L transpose in the opposite block. And for the covariance of x we get LL transpose plus Psi. So the joint of z and x is a multivariate Gaussian whose mean is given by (0, Mu) and whose covariance blocks are identity, L transpose, L, and LL transpose plus Psi. The way you arrive at this is by going through the definitions, expectation of (x minus E[x])(z minus E[z]) transpose for the cross term, and similarly expectation of (x minus E[x])(x minus E[x]) transpose for the covariance of x; the detailed steps are in the notes. But the higher-level message to take from this is: given some data-generating process, the first step was to pin down the definition of the model. The model in this case turned out to be that z and x are jointly multivariate Gaussian with mean (0, Mu) and covariance blocks [I, L transpose; L, LL transpose plus Psi]. The parameters are Mu, L, and Psi. And in this, what did we observe? We observe x, and the latent variable is z. Parameters, evidence, latent variable. Now, to apply EM on this: in the E-step, we want to calculate p(z given x), so in the E-step we attack the latent variables; and in the M-step, we want Mu, L, Psi equal to the argmax of the ELBO.
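An empirical sanity check of these blocks, reusing z, x, L, and Psi from the sampler above. Each printed number should be near zero, up to sampling noise:

```python
import numpy as np

zc = z - z.mean(axis=0)
xc = x - x.mean(axis=0)
print(np.abs(zc.T @ zc / n - np.eye(k)).max())        # Cov(z)    = I
print(np.abs(xc.T @ zc / n - L).max())                # Cov(x, z) = L
print(np.abs(xc.T @ xc / n - (L @ L.T + Psi)).max())  # Cov(x)    = L L^T + Psi
```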
All right, so this is going to be the high-level flow of what we do next, and this recipe is the same for any latent variable model you want to solve using EM. First, get clarity on the full model of the joint distribution; identify what the parameters are, what the evidence is, and what the latent variables are. Then figure out the E-step and the M-step, where in the E-step you re-estimate the latent variables, and in the M-step you re-estimate the parameters. As a side note: in the factor analysis model, because z and x are jointly Gaussian, we can actually write out L(Mu, L, Psi) = log p(x; Mu, L, Psi) directly. Using the marginalization property of Gaussians, we can just read it off the joint: x is distributed as N(Mu, LL transpose plus Psi). But if you try to perform maximum likelihood estimation directly and obtain expressions for these parameters, you cannot get a closed-form solution. Which is why EM comes to our help here. You can actually write out log p(x) as the log of 1 over (2 Pi) to the d/2 times the determinant of (LL transpose plus Psi) to the 1/2, times the exponential of minus 1/2 (x minus Mu) transpose (LL transpose plus Psi) inverse (x minus Mu). You can write it out like this, but there is no closed-form solution you can come up with for L and Psi. You can try, but it cannot be solved in closed form. So instead we use expectation-maximization, and with EM we actually do get closed-form update steps for both the E-step and the M-step. They are a bit tedious in terms of how complex the expressions look, but it is pretty straightforward; it is just tedious, not tricky. So, EM for factor analysis: the first thing we want to attack, for the E-step, is p(z given x). In this case, both z and x are jointly Gaussian, which is the best you can hope for. I am going to write out the full model here again: (z, x) ~ N((0, Mu), [I, L transpose; L, LL transpose plus Psi]). So what is p(z given x)? Looking at this, we can just read it off; we did the same thing when we calculated the conditionals of Gaussians for Gaussian processes. In this case, z given x is a normal distribution whose mean is given by L transpose (LL transpose plus Psi) inverse (x minus Mu). To recap what we have seen in the past: if a and b are jointly distributed with means (Mu_a, Mu_b) and covariance [sigma_a squared, rho sigma_a sigma_b; rho sigma_a sigma_b, sigma_b squared], then a given b is normal.
a given b has mean Mu_a plus (b minus Mu_b) divided by sigma_b, times rho times sigma_a; you standardize b, obtain its z-value, and scale it back. That is the mean. And the variance is the Schur complement: sigma_a squared minus rho squared sigma_a squared, that is, this block minus this times the inverse of that times this. The same thing is being done here in the multivariate case, with matrices instead of scalars. The Schur complement gives us the covariance: I minus L transpose (LL transpose plus Psi) inverse L. And the mean is the multivariate analogue of standardizing: L transpose (LL transpose plus Psi) inverse (x minus Mu), which maps the standardized x back into the space of z. So that is z given x. Any questions on this? Once we have z given x, we set Q_i(z_i) to be the normal distribution with this mean and this covariance. That gives us our E-step: we hold the current estimates of the parameters fixed and calculate the posterior of z given x. In the M-step, we want the argmax over Mu, L, Psi of the ELBO: the sum over i from 1 to n of, now with an integral over z_i in place of the summation over k clusters, Q_i(z_i) times log of p(x_i, z_i; Mu, L, Psi) divided by Q_i(z_i). There are a few observations we can make. First, write it out as an expectation; the expectation makes it easier to understand: argmax over Mu, L, Psi of the sum over i of the expectation over z_i drawn from Q_i. We factor the joint into p(x given z) times p(z), and in the denominator we have Q, with a log over the ratio, so log a plus log b minus log c: log p(x_i given z_i) plus log p(z_i) minus log Q_i(z_i). So that is the M-step. Any questions on this? Yes, question? [inaudible] Yes, the integral against Q_i became the expectation. Good question. One thing we can observe here: whenever you apply EM, no matter what the model is, the denominator Q contains none of the parameters you want to optimize. The parameters we want to optimize are always in the numerator, and since there is a log, the log of the denominator will never have the parameters we are optimizing over. So when performing the argmax in any EM model, we can just strike out that part; not just in factor analysis or the Gaussian mixture model, but whenever you apply EM.
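Before simplifying the M-step further, here is a minimal sketch of the E-step posterior just derived, assuming x, mu, L, and Psi as above. Note that the posterior covariance does not depend on x_i, so it is shared across all examples:

```python
import numpy as np

def fa_e_step(x, mu, L, Psi):
    d, k = L.shape
    S_inv = np.linalg.inv(L @ L.T + Psi)    # (L L^T + Psi)^{-1}, d x d
    mu_zx = (x - mu) @ S_inv @ L            # row i is L^T S^{-1} (x_i - mu); shape n x k
    Sigma_zx = np.eye(k) - L.T @ S_inv @ L  # Schur complement, k x k, same for every i
    return mu_zx, Sigma_zx
```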
So you can always just strike out the log Q term. Here, specifically in the case of factor analysis, we make yet another observation: log p(z) also has no parameters, because we assumed z comes from N(0, I), which has no parameters either. So in factor analysis we gain this additional cancellation of the p(z) term as well. All we are left with is the x-given-z term, and x given z does have Mu, L, and Psi; it is parameterized by all three. So we need to perform the argmax of this term with respect to Mu, L, and Psi. Question? [inaudible] We have not yet, and I am not sure we will have the time to come to that, but let's wrap up as much as we can. Once we have it written in this form, it is basically just calculus. Log p(x given z) can be written out, so we want the argmax over Mu, L, Psi of the sum over i from 1 to n of the expectation over z_i from Q_i of the log of 1 over (2 Pi) to the d/2 times the determinant of Psi to the 1/2, times the exponential of minus 1/2 (x_i minus Mu minus L z_i) transpose Psi inverse (x_i minus Mu minus L z_i). To take the expectation, we break this into smaller chunks: the log of 1 over (2 Pi) to the d/2; minus 1/2 log determinant of Psi, since the log and the exponent cancel; and minus 1/2 (x_i minus Mu minus L z_i) transpose Psi inverse (x_i minus Mu minus L z_i), all still inside the sum over i of the expectation over z_i from Q_i. Now we see which terms depend on z and which do not. The first term has no z and none of the parameters, so it just cancels out of the argmax. The second term has no z, so it comes out of the expectation. The third term does have z, and to take the expectation you have to expand out, distributing the multiplication across all the terms. Once we do all that, take the gradients, and set them equal to zero, we get: L equals the sum over i from 1 to n of (x_i minus Mu) times Mu_{z given x} transpose, times the inverse of the sum over i from 1 to n of Mu_{z given x} Mu_{z given x} transpose plus Sigma_{z given x}, where Mu_{z given x} is from the posterior, that big expression over there, and Sigma_{z given x} is the posterior covariance we derived earlier. Similarly, the M-step update for Mu is 1 over n times the sum over i of x_i, which should be pretty intuitive, because we gave z a zero mean, so Mu has to carry the mean of x. And then we calculate this Phi matrix: 1 over n times the sum over i of x_i x_i transpose minus x_i Mu_{z given x} transpose L transpose minus L Mu_{z given x} x_i transpose plus L (Mu_{z given x} Mu_{z given x} transpose plus Sigma_{z given x}) L transpose, and, I am getting tired writing this, set Psi_ii equal to Phi_ii.
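A sketch of these M-step updates, assuming the posterior moments from fa_e_step above. One detail worth flagging as my own reading: since the residual in the exponent is (x_i minus Mu minus L z_i), the x_i in these sums appears centered by Mu (which the update sets to the sample mean); on the board the centering was left implicit in the Phi expression.

```python
import numpy as np

def fa_m_step(x, mu_zx, Sigma_zx):
    n, d = x.shape
    mu = x.mean(axis=0)                 # mu = (1/n) sum_i x_i
    xc = x - mu                         # centered x_i, matching the residual x_i - mu - L z_i
    # L = [sum_i (x_i - mu) mu_zx_i^T] [sum_i (mu_zx_i mu_zx_i^T + Sigma_zx)]^{-1}
    A = xc.T @ mu_zx                    # d x k
    B = mu_zx.T @ mu_zx + n * Sigma_zx  # k x k
    L = A @ np.linalg.inv(B)
    # Phi = (1/n) sum_i [xc xc^T - xc mu_zx^T L^T - L mu_zx xc^T + L (mu_zx mu_zx^T + Sigma_zx) L^T]
    Phi = (xc.T @ xc - A @ L.T - L @ A.T + L @ B @ L.T) / n
    Psi = np.diag(np.diag(Phi))         # keep only the diagonal: Psi_ii = Phi_ii
    return mu, L, Psi
```

Alternating fa_e_step and fa_m_step gives the full EM loop for factor analysis, and as before the marginal log-likelihood is guaranteed not to decrease.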
So Phi is an intermediate computation: you take its diagonal values and set them to be Psi_ii. A lot of symbols, a lot of monstrous-looking expressions — I would not expect you to memorize this. The larger story is the way we went about applying EM, the recipe: we derived the E-step as the simple posterior, and in the M-step we maximized the ELBO, the expectation with respect to z. A crucial point to note: if you search for the EM algorithm on Google and try to learn about it, in many places you might see it described as "in the E-step, calculate the expectation of z; in the M-step, do an argmax plugging in that expectation of z." Many articles and documents describe it that way — E-step equals expectation, M-step equals maximization. That sequence of steps is true only sometimes. In simple models you can write it like that; in Gaussian mixture models you can write it like that. But the correct way to do expectation-maximization is: in the E-step, perform only the calculation of the posterior — the Q distribution — and the construction of the ELBO; in the M-step, maximize the ELBO. In simple models those two descriptions are equivalent, but for more complex models the shortcut is wrong — if you follow it for factor analysis, you will get it wrong. The process that works for any model is: E-step, construct the posterior Q; M-step, maximize the ELBO. That works all the time, no matter what, and for simple models it reduces to the shortcut. Something to keep in mind. All right, with that, we'll break. If you have any questions, come up to the stage and we can talk.
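That recipe, stated as a schematic loop — my paraphrase, with the two step functions passed in rather than hard-coded; it is compatible with the factor-analysis sketches above:

```python
def em(X, params, e_step, m_step, n_iters=100):
    """Generic EM loop: the E-step returns the full posterior Q (which defines
    the ELBO); the M-step maximizes the ELBO -- not merely a plugged-in E[z]."""
    for _ in range(n_iters):
        Q = e_step(X, *params)     # E-step: posterior p(z | x; params)
        params = m_step(X, *Q)     # M-step: argmax of the ELBO with Q held fixed
    return params

# Hypothetical usage with the factor-analysis steps above:
# mu, L, Psi = em(X, (mu0, L0, Psi0), e_step, m_step)
```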
Stanford_CS229_Machine_Learning_Course_Summer_2019_Anand_Avati
Stanford_CS229_Machine_Learning_Summer_2019_Lecture_8_Kernel_Methods_Support_Vector_Machine.txt
Okay, let's get started. Welcome back, everyone. Today we will continue into lecture eight — is that right? Yes, it's lecture eight. The topics for today are kernel methods and support vector machines. Before we jump into today's topics, let's do a quick recap of what we covered in the previous lecture. We covered generative algorithms. In generative algorithms, for training, we maximize the joint likelihood p(x, y), where x is the input and y is the output, and we train the parameters using MLE. This is generally decomposed into two parts, the class prior p(y) and p(x | y), and at prediction time we use Bayes' rule to construct the posterior distribution p(y | x). The posterior differs from model to model, depending on whether we assume p(x | y) is Gaussian or Bernoulli or whatever — we find p(y | x) accordingly. And we saw two examples of generative models: Gaussian discriminant analysis and naive Bayes. In Gaussian discriminant analysis, the inputs x are continuous, and we make the assumption that p(x | y) follows a normal distribution with a mean that is specific to y and a shared covariance Sigma. The corresponding posterior at prediction time follows the logistic regression functional form. You work through more details of this in your first homework, so hopefully that's all clear. Next, we saw naive Bayes. Here x was discrete, like words in a text message, and we made the crucial conditional independence assumption. To remind you, conditional independence means p(x_j | y, x_k) = p(x_j | y). Conditional independence is very different from ordinary independence: if two variables are conditionally independent, that says nothing about whether they are independent, and similarly, if two variables are independent, that says nothing about whether they are conditionally independent. Using this conditional independence assumption, we constructed two different event models. The first was the Bernoulli event model, where p(x_j | y) follows a Bernoulli distribution and x_j refers to the jth word in our vocabulary or dictionary. The second was the multinomial event model, where p(x_j | y) follows a multinomial distribution, but crucially, x_j there refers to the jth word in a message, whereas in the Bernoulli model it refers to the jth word in the vocabulary. The general intuition behind the two: in the Bernoulli event model, applied to, say, spam classification, the spamminess of a word is determined by what fraction of the spam messages the word occurs in and what fraction of the non-spam messages it occurs in. In the multinomial model, the spamminess of a word is determined by what fraction of all the words in the spam messages is this word, and what fraction of all the words in the non-spam messages is this word — without caring about message boundaries.
We consider just the collection of all words across all messages and construct one multinomial distribution over the words. Then we also covered Laplace smoothing. The idea of Laplace smoothing is that we don't want to be severely swayed by rare words — words that perhaps never occurred in our training dataset. We still want to do something meaningful at prediction time, and the fix is to pretend we have seen each word once in each class before we start counting in the actual training data. That's a quick recap of what we did. Any questions on this before we move on to the next topic? Yes. [inaudible] So the question is: are there conditions on the prior, or on p(x | y), under which the posterior takes the logistic form? That's a good question. In general, if we have a binary class, y in {0, 1}, and x given y takes any form in the exponential family, the posterior will always have the logistic form. Similarly, if y is categorical — one among many — and x given y belongs to any distribution in the exponential family, then y given x takes the softmax form. That's a property that holds for all exponential family distributions with either a Bernoulli prior or a categorical prior. Good question. [inaudible] Yeah — for the logistic form, x given y needs to be in the exponential family. Good question. Any other questions before we move on to kernel methods? Okay, so let's move on to kernel methods. Yes, question? [inaudible] The question is: why would we want a generative model in place of, say, logistic regression, if at prediction time it takes the logistic form anyway? I think we touched on this in the last class. In situations where you don't have a lot of data, and you know your x given y really is distributed according to the GDA assumptions, using GDA is going to be most efficient — by efficient, I mean sample efficient: you can achieve higher accuracy with less data if the modeling assumptions hold true, if x given y actually follows a Gaussian. Logistic regression, in general, is more robust: even if those assumptions are violated, logistic regression tends to just work well. But in the cases where the assumptions do hold, GDA is more sample efficient — with fewer examples, you get a better model.
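To make the Laplace smoothing recap concrete, here's a small sketch for the Bernoulli event model — the toy data and the "+1 / +2" counting convention are my assumptions:

```python
import numpy as np

# Hypothetical toy data: X[i, j] = 1 if vocabulary word j appears in message i.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1]])
y = np.array([1, 1, 0])   # 1 = spam, 0 = not spam

def bernoulli_nb_params(X, y):
    """Per-class word probabilities with Laplace (+1) smoothing: pretend each
    word was seen once in each class, so no estimate is ever exactly 0 or 1."""
    phi = {}
    for c in (0, 1):
        Xc = X[y == c]
        phi[c] = (Xc.sum(axis=0) + 1) / (len(Xc) + 2)
    return phi, y.mean()   # class-conditional probabilities and class prior

phi, prior = bernoulli_nb_params(X, y)
```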
Right, so kernel methods. So far in this class, all the models we've seen have been linear models — linear in x. Linear regression gives you a hypothesis that is linear in x, and logistic regression gives you a linear classifying boundary in your data space. However, as you've probably seen in Homework 1 — question five, I think — you can actually construct nonlinear hypotheses using linear regression, and similarly we can construct nonlinear classifiers using logistic regression, by including higher-order features. For example, if you map x to phi(x) = (1, x, x^2, x^3, x^4) and perform linear regression on this feature vector rather than directly on x, the regression gives you a nonlinear hypothesis: plotted against x, the fitted curve can bend, because we've included higher-order polynomial terms. To extend this to the classification setting: suppose the axes are now x_1 and x_d — in the previous picture they were x and y — and you have two classes. If you fit logistic regression, you might get a separating hyperplane like this. But what if our data looked like this, not linearly separable? In such cases, just as with linear regression, we could include a few, say, quadratic features and get a separating boundary that curves — we are still performing logistic regression, just on data mapped into a higher-dimensional feature space. Now, this seems a little arbitrary. How many features do we map to? Here we mapped a one-dimensional input to 5 dimensions — why not map it to 10 dimensions? What's the right answer? Why not 1,000 dimensions? Or even an infinite number of dimensions? It's hard to think of infinite-dimensional feature vectors — you cannot even represent one explicitly on a computer — but we will see that with kernel methods we can actually take an example, map it to a potentially infinite-dimensional feature vector, and perform our learning algorithm in that infinite-dimensional space. In terms of terminology: given x, we call x the attributes, and phi(x) the features, where phi is the feature map. In cases where there is no feature map, it's common to call the x's themselves the features — just think of the feature map as the identity function.
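A minimal sketch of the polynomial-features idea just described — synthetic data of my own choosing:

```python
import numpy as np

# Hypothetical 1-D data with a nonlinear trend.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.5 * x**4 + 0.1 * rng.standard_normal(50)

# Feature map phi(x) = (1, x, x^2, x^3, x^4): linear regression in this
# 5-dimensional feature space is a nonlinear hypothesis in x.
Phi = np.column_stack([x**k for k in range(5)])

# Normal equations: theta = (Phi^T Phi)^{-1} Phi^T y
theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
y_hat = Phi @ theta
```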
Let's start with a motivating example: how we solve linear regression using gradient descent. The update rule in linear regression was theta^{t+1} = theta^t + alpha sum_{i=1}^n (y_i - h_theta(x_i)) x_i, where h_theta(x_i) = theta^T x_i and alpha is some learning rate. We run this over and over until the model converges, meaning the theta vector stops changing much. With a feature map, the rule becomes theta^{t+1} = theta^t + alpha sum_{i=1}^n (y_i - theta^T phi(x_i)) phi(x_i) — we just replaced x with phi(x), and we perform gradient descent on this feature map. The difference between the two equations is that in the first, theta is in R^d, and in the second, theta is in R^p, assuming phi maps R^d to R^p: d is the dimension of your original data and p is some high-dimensional space, potentially infinite. Now imagine phi is a feature map like phi(x) = (1, x_1, x_2, ..., x_1^2, x_1 x_2, x_1 x_3, ..., x_1^3, x_1^2 x_2, ...) — the set of all monomials of order less than or equal to 3. The dimension p of this feature vector is approximately d^3: the cubic terms dominate, plus some second-order and first-order terms. That means each gradient descent update goes from computing dot products in d dimensions to computing them in roughly d^3 dimensions. If d is 1,000, the dot product theta^T x takes on the order of d, about 1,000 operations, whereas theta^T phi(x) takes on the order of d^3 — 1,000 cubed, about a billion operations. So each update can be a million times slower, and that expense comes entirely from our choice of a higher-dimensional feature space. Now let's make a few observations. The claim is: in linear regression, if we start gradient descent from theta^0 = 0, then at any step t, theta^t can be represented as sum_{i=1}^n beta_i phi(x_i) — a linear combination of the features of the training examples. This should be obvious for the first step: with theta^0 = 0, we get theta^1 = alpha sum_{i=1}^n y_i phi(x_i), where each alpha y_i is just a scalar multiplying phi(x_i). So the very first theta vector after one step is a linear combination of the feature vectors, and this will hold at every stage of gradient descent. We can show it by induction — let me start on a fresh board: theta^{t+1} = theta^t + alpha sum_{i=1}^n (y_i - theta^{tT} phi(x_i)) phi(x_i), and by the inductive hypothesis we can write theta^t as a linear combination sum_i beta_i^t phi(x_i).
Substituting, theta^{t+1} = sum_{i=1}^n beta_i^t phi(x_i) + alpha sum_{i=1}^n ( y_i - sum_{j=1}^n beta_j^t phi(x_j)^T phi(x_i) ) phi(x_i), where the theta^{tT} phi(x_i) inside the residual has also been expanded using the same linear combination. Taking the common sum over i across both terms, this becomes theta^{t+1} = sum_{i=1}^n [ beta_i^t + alpha ( y_i - sum_{j=1}^n beta_j^t phi(x_j)^T phi(x_i) ) ] phi(x_i). So theta^{t+1} can again be represented as a linear combination of the phi(x_i)'s, where the bracketed coefficients are the new beta_i^{t+1}. What just happened? We started with the usual gradient descent update, theta^{t+1} = theta^t plus the gradient term. To show by induction that theta^{t+1} can be represented as a linear combination of our feature vectors, we used the inductive assumption that theta^t already can be, plugged that expansion in for both occurrences of theta^t, and reorganized the sums. The result: theta^{t+1} is also a linear combination of the feature vectors, with the new coefficients given by those bracketed terms. Yes, question? [inaudible] The question is that in the higher-dimensional feature space, the features are not independent. That's fine — we made no independence assumptions about the features for gradient descent; the update is the same. We don't care what the features are. [inaudible] Well, for linear regression to work, the features should ideally not be linearly dependent. In this case they are not: they are somewhat related, but none of them is a linear combination of the others. So with this observation, we get the update rule in terms of the betas. Let me go over this one more time, because this is probably the most crucial part for understanding kernel methods. We started with theta vectors — we're performing gradient descent in parameter space with this update rule — and we saw that theta^1 is trivially a linear combination of the features: with theta^0 = 0, the theta^t term drops out and theta^1 = alpha sum_{i=1}^n y_i phi(x_i). That bootstraps our inductive argument. For the inductive step, we assume theta^t is already a linear combination of our features, with the beta_i^t as the coefficients that make up that combination.
And now for theta^{t+1}: in both places where theta^t appears, we plug in its expansion as a linear combination of the phi(x_i), do some algebra, move things around, and we get theta^{t+1} in a form that is still a linear combination of the features, with the next set of coefficients calculated as shown. That completes the inductive proof: in linear regression, the theta vector at any stage of gradient descent can always be represented as a linear combination of the features. Any questions on this? Yes. [inaudible] You're right — that index should be j. Thank you. Yes, question? [inaudible] This summation? I took the summation over i common across both terms, so it appears just once outside: it is shared by the beta_i^t term and the update term. [inaudible] Right — beta_i appears once per term of the outer summation, and over here there are two nested summations: the outer one over i and the inner one over j for the beta_j. Does that make sense? So now we can write the update purely in terms of the betas: beta_i^{t+1} = beta_i^t + alpha ( y_i - sum_{j=1}^n beta_j^t phi(x_j)^T phi(x_i) ). We do this for i = 1 through n, and we repeat it over and over until the beta values converge. So one form is gradient descent written in theta-space, and this is the same gradient descent written in terms of coefficients of the feature vectors. A few things we can notice right away: at first it might appear that at each iteration, for each example, we are doing a dot product between two high-dimensional feature vectors. However, the dot products between all the feature vectors can be pre-computed, because the examples do not change from iteration to iteration. We can pre-compute the dot product between every pair of examples in the high-dimensional feature space once, and just use the pre-computed values in each iteration. The only things changing from iteration to iteration are the betas; the feature maps are constant. Each dot product evaluates to a scalar, so you can pre-compute one big matrix of scalars. Yes, question? [inaudible] This one is x_j and this one is x_i.
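A quick numerical check of the claim, in a sketch of my own (identity feature map, so phi(x) = x): running gradient descent in theta-space and in beta-space gives matching parameters at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.standard_normal((n, d))     # rows are examples; here phi(x) = x
y = rng.standard_normal(n)
alpha = 0.01

theta = np.zeros(d)                 # theta-space gradient descent, started at 0
beta = np.zeros(n)                  # beta-space version, one coefficient per example
K = X @ X.T                         # pre-computed pairwise dot products

for _ in range(100):
    theta = theta + alpha * X.T @ (y - X @ theta)
    beta = beta + alpha * (y - K @ beta)
    # Invariant: theta^t = sum_i beta_i^t phi(x_i) at every step.
    assert np.allclose(theta, X.T @ beta)
```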
However, the question now is: what if the feature vector is infinite-dimensional? We found a way to represent our parameters by replacing them with coefficients — in a high-dimensional feature space, theta would also have been high-dimensional, and we got around representing that very high-dimensional parameter vector with a set of coefficients, where there are only as many coefficients as there are examples, which is always finite. However, we still have a dot product between two potentially infinite-dimensional feature vectors, and that's where the concept of a kernel comes into the picture. A kernel is defined as a function k that maps script-X cross script-X to R, where script-X is the space in which the x's reside — in our case x is in R^d, so script-X = R^d. It takes two examples and maps them to a real number, and it does so in such a way that k(x, z) = <phi(x), phi(z)> for some feature map phi — that is, k(x, z) = phi(x)^T phi(z). So the kernel corresponding to a feature map phi is the function whose value equals the inner product between the feature map of x and the feature map of z. Yes, question? [inaudible] Can you please repeat the question? This line? It's defining the space: x lives in script-X. Here script-X happens to be R^d, but in general it need not be — it could even be the set of all strings, or the set of all graphs. It's an abstract space, and the kernel is a function that takes two elements of that space and returns a real-valued scalar, while also satisfying the property that the value can be expressed as the inner product between the feature maps of the two inputs. For the example we saw — phi(x) equal to the set of all monomials up to order 3 — this feature map has a corresponding kernel: k(x, z) = 1 + <x, z> + <x, z>^2 + <x, z>^3, which is exactly phi(x)^T phi(z). So this is one example of a kernel. For this particular feature map, if x is in R^d then p is on the order of d^3, and performing the inner product in that high-dimensional feature space would require on the order of d^3 operations. In the kernel form, by contrast, each <x, z> takes order d, and we combine three such order-d computations — so we're still at order d. This is a computational trick: the dot product between two high-dimensional feature maps can be computed compactly by a simple, straightforward function like this. Yes, question? [inaudible] Why does this hold? There are details of that in the notes.
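Here's a small numerical confirmation of that claim — explicit feature map versus O(d) kernel; the monomial ordering is my own choice:

```python
import numpy as np
from itertools import product

def phi(x):
    """Explicit feature map: all ordered monomials of degree <= 3.
    Its dimension is 1 + d + d^2 + d^3, i.e. roughly d^3."""
    d = len(x)
    feats = [1.0] + list(x)
    feats += [x[j] * x[k] for j, k in product(range(d), repeat=2)]
    feats += [x[j] * x[k] * x[l] for j, k, l in product(range(d), repeat=3)]
    return np.array(feats)

def kernel(x, z):
    """The same inner product, evaluated with O(d) work."""
    s = x @ z
    return 1 + s + s**2 + s**3

rng = np.random.default_rng(1)
x, z = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose(kernel(x, z), phi(x) @ phi(z))
```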
It's algebra — you just work through it, and you can see that the two are exactly the same. I can give you some intuition, though. The feature vector for x is (1, x_1, x_2, ..., x_d, x_1^2, ...), and similarly for z it is (1, z_1, z_2, ..., z_d, z_1^2, ...). If you take the dot product between these two, you get 1 + x_1 z_1 + x_2 z_2 + ... + x_d z_d + x_1^2 z_1^2 + ..., and when you group the terms, each group corresponds to one of 1, <x, z>, <x, z>^2, <x, z>^3. [inaudible] Yeah — the kernel function is defined so that its functional form can always be rewritten as a dot product between some feature map of x and some feature map of z, where you apply the same feature map to both inputs and take the dot product in that higher-dimensional feature space. This way an order-d^3 operation was reduced to an order-d operation. Yes, question? [inaudible] First, it's important to note that script-X is the input space of the kernel, or of the feature map. It is not the high-dimensional space where we want to do the dot product: script-X is the space over which we are trying to learn, the space where our examples reside. The feature map takes script-X to R^p, where p is potentially infinite. And the claim is that instead of performing inner products in the high-dimensional space, a feature map phi can have a corresponding kernel k whose functional form has a more compact representation, equivalent to performing the dot product in the high-dimensional space. Okay — so the question is, how does the kernel reduce the number of operations? Mathematically, we can see that the two representations are the same; computationally, the explicit representation requires order-d^3 operations, whereas the kernel requires a few order-d dot products, after which the squaring, cubing, and summing are negligible. Yes, question? [inaudible] So the kernel we wrote is always specific to that feature map — exactly. We're going to discuss kernels in general next; this is just a motivating example where, if d is 1,000, instead of taking a dot product between a pair of billion-dimensional vectors, we can work with the 1,000-dimensional inputs directly — about a million times faster. So now, if we replace the inner product between the two feature vectors with the kernel, we end up with a kernelized algorithm for linear regression. The first step: pre-compute a matrix K, where K_ij = k(x_i, x_j). It is common in the literature to abuse notation and use the same letter K both for the kernel function and for what is called the kernel matrix — the square, symmetric matrix of the kernel evaluated between every pair of your examples.
And this is equal to phi(x_i)^T phi(x_j). So pre-compute that, and then loop: for all i in 1 through n, beta_i^{t+1} = beta_i^t + alpha ( y_i - sum_{j=1}^n beta_j^t K_ij ). In other words, pre-compute all n^2 possible inner products and construct the kernel matrix. Even though the phi's might live in, say, a billion-dimensional space, by using the corresponding kernel form for that feature vector, we can compute each entry effectively in order d instead of order d^3 for this kind of feature map. Then we iterate this step, updating the beta coefficients from step to step — it's taken directly from the earlier update, with the inner product replaced by K_ij from the kernel matrix. We can also write this compactly as beta^{t+1} = beta^t + alpha (y - K beta^t). The two forms are equivalent; the vectorized one just performs the update for all i in one shot. There's a question? [inaudible] The question is: what if we don't have a few of those terms in the feature map — then this would not be the kernel corresponding to that feature map. That's true, and we'll go into which functions are valid kernels in a moment, right after we wrap up this example. So that was learning. For prediction, we have h_theta(x) = theta^T phi(x), and again we expand theta: h_theta(x) = sum_{i=1}^n beta_i phi(x_i)^T phi(x) = sum_{i=1}^n beta_i k(x_i, x), where x is the test example. In linear regression without feature maps or kernels, we would have ended up with some theta vector, and here that theta could be potentially infinite-dimensional — which is why we use the coefficient form to rewrite theta as a linear combination of the feature vectors, with the coefficients learned through this algorithm. At prediction time, we evaluate the kernel between our test example and every example in the training set, multiply by the corresponding beta coefficients, and sum — that is our prediction. With this, let's make a few observations. Any questions on this? First observation: in the training rule, beta = beta + alpha (y - K beta) with K the kernel matrix, and at test time, y-hat = sum_{i=1}^n beta_i k(x_i, x) — in both of these, phi(x) does not appear. That's the most important thing: we have rewritten the algorithm so that the high-dimensional feature map phi(x) appears neither during training nor during testing. If phi(x) occurred explicitly in any of our update rules, we would have to compute it, and it could be infinite-dimensional.
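Putting the pieces together — a compact sketch of kernelized linear regression as just described; convergence of the plain gradient loop depends on the learning rate and the kernel matrix's spectrum, so treat alpha and n_iters as placeholder values:

```python
import numpy as np

def fit_kernel_regression(X, y, kernel, alpha=0.01, n_iters=500):
    """Kernelized linear regression: gradient descent in beta-space."""
    n = len(X)
    # Pre-compute the kernel (Gram) matrix once; it never changes.
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    beta = np.zeros(n)
    for _ in range(n_iters):
        beta = beta + alpha * (y - K @ beta)   # beta^{t+1} = beta^t + alpha (y - K beta^t)
    return beta

def predict(x_new, X, beta, kernel):
    """h(x) = sum_i beta_i k(x_i, x): the training examples must be kept."""
    return sum(b * kernel(x_i, x_new) for b, x_i in zip(beta, X))

# Hypothetical usage with the degree-3 polynomial kernel sketched earlier:
# beta = fit_kernel_regression(X, y, kernel)
# y_hat = predict(x_test, X, beta, kernel)
```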
So we got rid of phi(x) completely — that's the most important observation. The second observation concerns test time. Yes, question? [inaudible] The question is: if phi is infinite-dimensional, will the kernel also involve an infinite sum? Not always. There are many kernels — we'll see some next — where even though the feature vector is infinite-dimensional, the kernel can be evaluated quickly, sometimes even in constant time. The whole idea is that the explicit representation of the feature map can be computationally very expensive, while there is sometimes a more compact kernelized form that computes the same thing efficiently. Yes, question? [inaudible] About the update rule? K_ij is the (i, j) element of the kernel matrix, which is a scalar — the kernel always evaluates to a scalar. And beta is a vector: beta^t is in R^n. So the update for the ith coordinate reads: old beta_i (a scalar), plus alpha (a scalar) times (y_i, a scalar, minus a sum of scalar-times-scalar terms) — the whole thing is a scalar. [inaudible] I'm sorry? [inaudible] j is just the index you're summing over. [inaudible] Yes — beta^t is in R^n, so beta_j^t, its jth entry, is a scalar. And in the vectorized form, y is in R^n, K is n by n, and beta^t is in R^n, so K beta^t is in R^n; the difference is an n-vector, times a scalar is an n-vector, plus an n-vector equals an n-vector. Back to our observations. First: we have eliminated phi(x) completely — the most important step. Second: to make a prediction, we need to remember all our training examples. For prediction, the training examples must be stored in memory, and this is probably the most distinctive difference from linear regression as we've seen it. In linear regression, we started with a training set of x's and y's, learned the theta vector by performing, say, the normal equations or gradient descent, and once we obtained theta, we could discard the entire training set and carry forward only the theta vector. To make a prediction on a new example, all you needed was theta — you didn't need the training set anymore. With kernel methods, that's not true anymore. By giving up the explicit phi(x) representation, we've also given up the possibility of having a theta vector that alone we could carry forward. In its place, we need to remember all the training examples: when we get a new test example, we evaluate the kernel with every training example and take the linear combination of those kernel outputs weighted by the beta_i's that we learned.
So the training examples must be carried forward as-is into test time with kernel methods. Any questions on this? To better understand what happened, suppose in linear regression we were given the design matrix X, where each row is an example — call them x_1 through x_n — together with y. In linear regression we want to learn a theta vector: if x is in R^d, theta is also in R^d. We start gradient descent with some theta^0, perform an update to get theta^1, then theta^2, and so on, until the theta vectors converge. With kernel methods, we made the observation that the thetas are always a linear combination of our feature vectors — theta at any step of gradient descent can be represented that way. So instead we store beta^0, with one beta coefficient per example, and using the update rule we get beta^1, then beta^2, and so on, where each beta is in R^n whereas theta was in R^d. Theta has one component per feature; beta has one component per example. And every beta vector has a corresponding theta vector: theta^t = sum_{i=1}^n beta_i^t phi(x_i). This holds even when we map x into some phi(x) in R^p — what used to be a d-dimensional vector is now a p-dimensional vector, and phi(x) could be infinite-dimensionally wide. No matter how big the feature vectors are, the beta vectors stay of fixed length: one coefficient per example. That is what allows us to scale in terms of features — we just remember the corresponding beta vector by which we weight our examples. Any questions on this? Yes, question? [inaudible] The question is: is it really practical to remember all our training examples into test time? In general, yes, you need to do that, and that makes kernel methods not very scalable as you get lots and lots of data. That is why, in practice, when you have lots and lots of data you see methods like neural networks and deep learning take over — they don't have this limitation of remembering all the training examples. At the same time, there are algorithms — in fact one we'll see later today, the support vector machine — where the resulting beta vectors are very sparse, meaning most entries are zero, so you only need to remember a few of your examples; those are called support vectors, and we'll get to them. But in general your beta vector can be dense, which means you need to remember all your training examples all the way to test time, and that is a big obstacle for scaling your algorithms to big datasets — it does come into play if you want to use kernel methods. Good question. Any other questions?
So let's look at some properties of kernels. What we saw so far was an example of one algorithm — linear regression — that we kernelized, but linear regression is just one algorithm you can kernelize. You can kernelize a lot of algorithms. In fact, following the same steps, you can kernelize generalized linear models: pass the linear term through some g function — for example the sigmoid or logistic function — and the same recipe kernelizes logistic regression or Poisson regression as well. And there are lots of other algorithms that can be kernelized too. Are kernel methods for classification or for regression? Wrong question — you can kernelize both kinds of algorithms. Are kernel methods for discriminative or for generative models? They work for both: you can kernelize generative models and discriminative models. Are kernel methods for supervised or unsupervised learning? Both: you can kernelize supervised learning algorithms and unsupervised learning algorithms. Kernelization — what's called the kernel trick — is a very general technique that can be applied in lots of different places. Now let's look at a few kernel examples. Example A: k(x, z) = (x^T z)^2, which has the corresponding feature map phi(x) = (x_1 x_1, x_1 x_2, ..., x_d x_d) — all pairwise products — if x is in R^d. In fact, we saw a more general example of this previously. Example B: k(x, z) = (x^T z + c)^2 — take the dot product, add a constant, and square it. That is also a kernel, corresponding to a feature vector that looks like phi(x) = (x_1 x_1, x_1 x_2, ..., x_d x_d, sqrt(2c) x_1, sqrt(2c) x_2, ..., sqrt(2c) x_d, c). It may not be obvious at first, looking at such a form, whether it can be represented as a dot product between two feature vectors, but if you work through it, you can come up with a feature representation. Similarly, when you're given a feature representation, it's not always obvious whether you can save computation with a more compact kernel form. In fact, for many years this was an active area of research, where researchers would come up with new kernels with nice properties. One might imagine that the order in which we presented kernels — first a feature vector, then the observation that it can be computed as a compact kernel — matches how research goes. In practice it's the other way around: people come up with a function k(x, z) = something involving x and z, and then try to convince themselves that there is some feature vector for it.
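Example B can be checked numerically the same way — the constant c here is arbitrary:

```python
import numpy as np

def kernel_b(x, z, c):
    """k(x, z) = (x^T z + c)^2, evaluated in O(d)."""
    return (x @ z + c) ** 2

def phi_b(x, c):
    """Matching explicit feature map: all pairwise products x_j x_k,
    the scaled first-order terms sqrt(2c) x_j, and the constant c."""
    d = len(x)
    pairs = [x[j] * x[k] for j in range(d) for k in range(d)]
    firsts = [np.sqrt(2 * c) * x[j] for j in range(d)]
    return np.array(pairs + firsts + [c])

rng = np.random.default_rng(2)
x, z = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(kernel_b(x, z, 1.5), phi_b(x, 1.5) @ phi_b(z, 1.5))
```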
In reality, people come up with functions and try to prove they are kernels, rather than starting with a feature map and looking for the corresponding kernel. Kernels can also be seen as similarity metrics. Was there a question? Yes, question. [inaudible] So the feature map is x_1 x_1, x_1 x_2, x_1 x_3, and so on, then x_2 x_1, x_2 x_2, x_2 x_3, and so on. A feature map can operate on only one input: it's important to note that when we write k(x, z) = phi(x)^T phi(z), the feature map must involve terms from one input only. [inaudible] So that kernel would be (x_1 z_1 + x_2 z_2 + ...)^2, which expands to x_1^2 z_1^2 + x_2^2 z_2^2 + ... plus cross terms, and each term corresponds to one entry of the feature map: phi(x) contains x_1 x_1, phi(z) contains z_1 z_1, and when you take the dot product you recover that term. Right. So kernels can be seen as similarity metrics: kernels are generally constructed so that similar examples evaluate to a higher kernel value and dissimilar examples to a smaller value. This idea of using kernels as similarity metrics will show up in one of the future topics we're going to cover, Gaussian processes. Gaussian processes are a kernel-method algorithm, and there, the kernel acting as a similarity metric is really highlighted. [inaudible] In example B, I can't read what's written there — this one? That's (x^T z + c), with some constant c, whole squared, and that c shows up in the feature map over here. Thanks. So we want k(x, z) to be high for similar x, z, and low for dissimilar x, z. This should be fairly intuitive: k(x, z) = phi(x)^T phi(z), and the dot product between two similarly oriented vectors is high — similar examples — while the dot product between vectors pointing in opposite directions is negative, so dissimilar examples evaluate to a smaller value. In fact, there is a very popular kernel, k(x, z) = exp(-||x - z||^2 / (2 sigma^2)), called the Gaussian kernel, because it looks like the Gaussian PDF if you ignore the normalizing constant; it's also called the squared exponential kernel, since you square and then exponentiate. The idea: if x and z are very close or similar, ||x - z||^2 is near 0, and exp(0) is just 1; if x and z are far apart, ||x - z||^2 is large, and the exponential of a large negative value is close to 0. So for similar examples the Gaussian kernel evaluates close to 1, and for dissimilar examples close to 0.
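The Gaussian kernel in code, with the behavior just described — sigma = 1 is an arbitrary choice:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Squared-exponential kernel: close to 1 for nearby inputs, close to 0
    for distant ones. Its implicit feature map is infinite-dimensional."""
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

x = np.array([1.0, 2.0])
print(gaussian_kernel(x, x))          # 1.0: identical inputs
print(gaussian_kernel(x, x + 10.0))   # ~0.0: far-apart inputs
```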
This kernel in particular, if you expand it — which I'd say is beyond the scope of this course — turns out to have an infinite-dimensional feature vector. Question? Yes. [inaudible] The question is whether there is any relation between this and locally weighted regression. Yes, there are connections, but we did not cover locally weighted regression, so I won't go into it in lecture. Now the question is: given a function k, what makes it a kernel? How do we know a function k is a kernel? Necessary conditions for k to be a kernel: first, k should be symmetric, meaning k(x, z) = k(z, x). Second, for any given collection of examples x_1 through x_m — any m examples, not necessarily the training set, possibly completely unrelated to it — the kernel matrix K defined by K_ij = k(x_i, x_j) must be symmetric and positive semi-definite. The reason it must be positive semi-definite is pretty easy to see. Consider any vector z: z^T K z = sum_i sum_j z_i K_ij z_j = sum_i sum_j z_i phi(x_i)^T phi(x_j) z_j, and if you work through a few steps, this equals sum_k ( sum_i z_i phi_k(x_i) )^2, where k indexes the features in the feature vector — a sum of squares. So z^T K z >= 0 for any z. As a reminder, the definition of the kernel corresponding to some feature map phi is k(x, z) = phi(x)^T phi(z); in general, k can be any function that evaluates to the same value as the explicit dot product, and that is what it means for k to be a kernel — that it has such a feature representation. For any function k to be a kernel, the conditions above — symmetry, and the PSD condition on any set of examples — are necessary. There is a theorem called Mercer's theorem, which says this necessary condition is also sufficient for k to be a kernel. Mercer's theorem: let k, which in this case we'll limit to mapping R^d cross R^d to R, be given; then for k to be a kernel, it is necessary and sufficient that for any x_1 through x_m, the corresponding kernel matrix K with K_ij = k(x_i, x_j) is symmetric and positive semi-definite. So if k is a kernel, then for any collection of input examples the kernel matrix is symmetric and positive semi-definite; and conversely, if for any set of examples the corresponding kernel matrix is symmetric and positive semi-definite, then k must be a kernel — it goes both ways.
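Mercer's necessary condition is easy to sanity-check numerically — here on the Gaussian kernel with an arbitrary collection of examples:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(3)
X = rng.standard_normal((10, 4))      # any collection of examples, not a training set

K = np.array([[gaussian_kernel(X[i], X[j]) for j in range(10)]
              for i in range(10)])

assert np.allclose(K, K.T)            # symmetric
# All eigenvalues nonnegative (up to round-off): positive semi-definite.
assert np.all(np.linalg.eigvalsh(K) >= -1e-10)
```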
The proof of this theorem is beyond the scope of our course, but we can still get some intuition. As a reminder, a few lectures ago we saw a relation between linear algebra and functional analysis: vectors are like functions, and matrices are like operators — functions with two inputs, say of x and y. So imagine the kernel function k as an infinite-dimensional matrix, extending to infinity in all directions, where k(x, z) is the entry at the x-th row and z-th column. I'm drawing the horizontal and vertical axes as if they were linear, but think of each axis as abstract: each row or column corresponds to one x or z, which could itself be a vector. Now, if you have a collection x_1 through x_m, then evaluating k(x_1, x_2) here, k(x_1, x_3) there, k(x_2, x_1), and so on amounts to extracting those evaluations into a kernel matrix whose rows and columns are indexed by x_1 through x_m. We also know that k can be represented as an inner product between two feature maps, which means k is, in some sense, a positive definite function. What that means is: if you take a bunch of examples — that is, choose a few axes and extract the values into a matrix — that matrix is always positive semi-definite. And it's an if-and-only-if: if, for every such collection of examples, the extracted matrix is positive semi-definite, then k is a valid kernel, i.e., a positive definite function. Mercer's theorem is basically a way of saying that any sub-matrix of a positive semi-definite matrix is also positive semi-definite — that's just some intuition for what the theorem is telling us. Okay, so that's about it in terms of properties of kernels. To summarize: to prove a function k is a kernel, you have two options. One, construct a phi such that k(x, z) = phi(x)^T phi(z). Two, use Mercer's theorem: show that for any collection x_1 through x_m, the matrix K_ij = k(x_i, x_j) is positive semi-definite. In fact, there is also a third, which is generally very hard: show that for all functions f, the double integral of f(x) k(x, x') f(x') dx dx' is greater than or equal to 0.
That third condition is basically saying the infinite-dimensional matrix itself is positive semi-definite, while the finite condition says each extracted sub-matrix is; Mercer's theorem tells us the finite condition, holding for every sub-matrix, is enough. Yes. [inaudible] Yeah — that's just the quadratic form written in function notation. All right, so that's it about kernels. We saw kernels for linear regression, and now we're going to spend a little bit of time seeing how they get used in support vector machines. We're not going to cover support vector machines in detail in this class, mostly because support vector machines were a super-hot topic, say, ten years ago, and there is a shrinking amount of interest in them now. But it is still interesting to see how support vector machines are formulated and how kernels play a role there. The support vector machine is a discriminative classification algorithm. The idea: suppose your axes are x_1 and x_d, with a few examples of one class here and a few of the other class there. Now, how do we come up with a separating hyperplane? Just by looking at this, there are infinitely many possible separating hyperplanes — this could be one, that could be another. The question is: what should the ideal separating hyperplane be, given a dataset like this that is separable? The answer uses the concept of a margin. Before we get there, some notational changes from before: for this lecture alone, think of y_i as being in {+1, -1} instead of {1, 0}, just because it makes the notation a little simpler — a superficial change; nothing fundamental, we're just using different numbers to indicate the classes. The parameters will be w and b, where w is in R^d and b is in R, and the examples x are also in R^d. We make predictions with w^T x + b. What this means is that we are not loading an intercept term x_0 = 1 into our examples; instead we have an explicit, separately written-out parameter b in place of the intercept. The margin is then defined as y_i (w^T x_i + b). The idea is very similar to logistic regression: we want the predicted value w^T x + b to be greater than 0 when y = +1
We want w^T x + b to be greater than 0 for y = +1 and less than 0 for y = -1, which means y_i (w^T x_i + b) should always be greater than 0, for both y = +1 and y = -1. So what we desire is for the margin to be large. The definition of the margin takes into account that w^T x + b is supposed to be positive for some examples and negative for others; in the cases where it's supposed to be negative, multiplying by y_i = -1 flips the sign, so the margin has to be large no matter which class the example belongs to. The idea of the support vector machine is: for a given hyperplane, calculate the margin with respect to all the examples, and maximize the smallest margin. That's the idea of the support vector machine. For example, for this separating hyperplane the smallest margin is here; for that hyperplane the smallest margin is there; and for this other one the smallest margin is probably here. We want to choose the hyperplane for which the smallest margin is the largest: calculate the smallest margin, and pick the hyperplane whose smallest margin is largest. That's the intuition behind a support vector machine. Now, one thing you can observe is that if we're able to find some hyperplane — some w and b — that correctly classifies all the examples, then with this definition of margin we can fool the system by choosing scaled versions of w and b. For example, if in place of w we use 2w and in place of b we use 2b, the margin by this definition effectively doubles, even though the separating hyperplane does not change in any way. So if we're seeking the hyperplane whose smallest margin is largest, this definition of margin is vulnerable to being gamed: you can always scale up your w and b and increase the margin further. That's why this quantity is called the functional margin. A support vector machine is an algorithm that tries to maximize the geometric margin instead — the actual geometric distance between the separating hyperplane and the examples. The functional margin is not that geometric distance: if w^T x + b = 0 is the separating hyperplane, you can scale w and b by any positive scalar and still have the same hyperplane, but this definition would give a larger value — a larger functional margin for the same separating hyperplane. The support vector machine addresses exactly this problem, and we're only going to look at the problem formulation, not actually solve it, in this course. So in a support vector machine we seek values of w and b that give us the ideal margin.
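As a quick illustration of that gaming, here's a tiny numeric sketch with made-up data: scaling (w, b) by 2 doubles every functional margin while leaving the hyperplane itself unchanged.

```python
import numpy as np

# Hypothetical separable data: one point on each side of w.x + b = 0.
w, b = np.array([1.0, -1.0]), 0.5
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1, -1])

def functional_margins(w, b, X, y):
    return y * (X @ w + b)

print(functional_margins(w, b, X, y))          # [2.5 1.5]
print(functional_margins(2 * w, 2 * b, X, y))  # doubled: [5. 3.]
# Yet {x : w.x + b = 0} and {x : 2w.x + 2b = 0} are the same hyperplane.
```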
Maximizing the functional margin directly is the wrong objective, because it's vulnerable to being scaled and fooled. So the objective that support vector machines use instead is this: minimize, with respect to w and b, the sum over i = 1 to n of max(0, 1 - y_i (w^T x_i + b)), plus (1/C) ||w||². So what's happening here? We have 1 - y_i (w^T x_i + b), and we take the max of that with 0. This term is commonly called the hinge loss — you'll see why in a moment — or sometimes the SVM loss; you'll see this term in many places. This is still the functional margin; let's call y_i (w^T x_i + b) = gamma_i. Plotted against gamma, the hinge loss looks like a hinge. Why does it look like that? Whenever the margin gamma_i is bigger than 1, the term after the comma, 1 - gamma_i, is less than 0, and the max of 0 and a negative number is 0 — so for any margin bigger than 1, the hinge loss is 0. And for margins less than 1 — say the margin evaluates to 0.1 — the loss is max(0, 1 - 0.1), that's max(0, 0.9): when gamma is 0.1, the loss is 0.9. As you keep decreasing gamma further, the hinge loss keeps growing. So what this is telling us is: learn w and b such that w^T x_i + b is at least +1 for the positive class. We originally only wanted it to be greater than 0 for y = +1, but the loss says we're not satisfied with just bigger than 0 — we actually want it bigger than 1; go further. And because we know the functional margin is vulnerable to scaling — once the algorithm finds a w and b such that the margin is, say, 0.1, it could just scale w and b up to push the margin past 1 — we also penalize the norm of w. That means you can't increase the functional margin just by scaling w larger; you actually need to find a well-suited w and b such that the hyperplane is genuinely far from the examples. This optimization problem can be rewritten as: minimize over xi, w, b the quantity (1/2)||w||² + C Σ_{i=1}^n xi_i, subject to y_i (w^T x_i + b) ≥ 1 - xi_i for all i in 1 through n, and xi_i ≥ 0 for i = 1, ..., n.
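Here's a minimal sketch of the hinge loss and the objective as written on the board — note that many libraries put the constant C on the hinge term rather than 1/C on the norm, which is equivalent up to rescaling; the function names here are just for illustration.

```python
import numpy as np

def hinge_loss(margins):
    """max(0, 1 - gamma_i): zero once the functional margin exceeds 1."""
    return np.maximum(0.0, 1.0 - margins)

def svm_objective(w, b, X, y, C=1.0):
    margins = y * (X @ w + b)
    # Hinge term pushes margins past 1; the norm penalty stops us from
    # getting there by merely scaling (w, b) up.
    return hinge_loss(margins).sum() + (1.0 / C) * (w @ w)

print(hinge_loss(np.array([2.0, 1.0, 0.1, -0.5])))  # [0.  0.  0.9 1.5]
```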
Now this formulation looks different, but the two are actually equivalent, in the sense that the optimal w and b you get from this problem are the same as the w and b you'd get from the previous one. The reason we don't solve the first form directly is that the max operator is not differentiable, whereas the rewritten form is what's called a convex problem: the loss — the thing we want to minimize — is convex, and the constraints are convex. And once you have a problem written out in this form, where the objective and constraints are all convex, there are lots of solvers that can just solve it for you. Another topic in convex optimization is that for every such convex problem — think of this one as the primal convex problem — there is an equivalent dual convex problem, and the dual looks like this: maximize over alpha. To be clear, you don't need to know convex optimization for this course, and you don't need to know how to derive this. You just need the intuitions: the SVM is trying to optimize the geometric margin; the functional margin can be fooled by just scaling w and b; so the SVM includes a penalty term to save itself from being fooled; that's equivalent to this convex problem; and that in turn is equivalent to this dual convex problem. Yes, question? [BACKGROUND] Good question: why don't we penalize b? There are good reasons. If you penalize b, you're constraining your separating hyperplane to be close to the origin, and there's no value in doing that — you want to give your algorithm the freedom to place the hyperplane however far from the origin as necessary, depending on the data. [BACKGROUND] I'd encourage you to go through the notes, where you can see why penalizing b is the wrong idea, if you're still interested. The intuition is that penalizing b restricts the algorithm to hyperplanes close to the origin, which seems pretty arbitrary. [BACKGROUND] The thing is, w and b work differently: b tells you how far from the origin you can be, and you don't want any constraint on that. So, the equivalent dual problem is: maximize over alpha the quantity Σ_i alpha_i - (1/2) Σ_i Σ_j y_i y_j alpha_i alpha_j ⟨x_i, x_j⟩, subject to 0 ≤ alpha_i ≤ C and Σ_{i=1}^n alpha_i y_i = 0. If you know convex analysis — convex optimization — the primal problem can be rewritten in this form, but you don't need to know how to go from one to the other for this course. Once you write it this way, though, you see that we end up with a term that takes a dot product between pairs of examples, and we do that across all pairs. In the primal, we were explicitly trying to learn w and b — to draw the analogy with linear regression, that's like trying to find the theta vector. In the dual form, we're instead trying to find alpha, a collection of coefficients, where each alpha_i is the weight applied to example i.
And that's very similar to the per-example betas that we saw in kernelized linear regression. In the dual problem we're finding the coefficients with which we weight examples, and we can represent the algorithm completely in terms of dot products between examples. Here you can replace each example with its feature map, and replace the dot product between features with a kernel — and that's how kernels come into the picture in support vector machines, via the dual formulation. One nice property of support vector machines is that once you solve this problem, the set of alphas you obtain will be sparse: only a small number of the alpha_i will be nonzero, and most will be 0. The examples for which alpha_i is nonzero are called the support vectors, and intuitively, they're the examples nearest to the separating hyperplane. All right — I think we're over time, and that's pretty much all we wanted to cover about support vector machines. The main ideas to remember for this course are the distinction between the functional margin and the geometric margin; that the SVM can be written as this dual convex problem; and that solving it results in coefficients that are sparse. That's the extent to which you need to know SVMs for this course.
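If you want to see that sparsity for yourself, here's a short sketch using scikit-learn's SVC (assuming scikit-learn is available): it fits a kernelized SVM on synthetic data and reports how few examples end up as support vectors. scikit-learn's parameterization puts C on the hinge term, which matches the dual constraint 0 ≤ alpha_i ≤ C.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
# Only the examples with nonzero alpha are kept: the support vectors.
print(len(clf.support_), "of", len(X), "examples are support vectors")
print(clf.dual_coef_.shape)  # signed alphas, one per support vector
```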
Stanford_CS229_Machine_Learning_Course_Summer_2019_Anand_Avati
Stanford_CS229_Machine_Learning_Summer_2019_Lecture_14_Reinforcement_Learning_I.txt
Welcome back, everyone. This is CS229, lecture 14. Today we'll be starting a new chapter: reinforcement learning. We'll be covering Markov decision processes, value iteration, and policy iteration, which form perhaps the core of reinforcement learning, and in Friday's lecture we'll see more complex extensions of this. Before we jump into today's topics, a few announcements. For those of you who've been sending in feedback through the Google Form, thank you so much — we got a lot of good suggestions and a lot of points where we can improve, and I'd like to briefly summarize the feedback we've received and the steps we'll be taking. One concern raised was office hours: there were a few too many last-minute changes. We'll do our best not to change the office hour schedule going forward. Unfortunately, sometimes there are personal emergencies or things beyond the TAs' control, when it's too late to bring in a replacement TA, but those exceptions apart, we'll do our best to keep the office hour schedule fixed. There were also a few very specific comments and suggestions about the way office hours were conducted; we've noted what you said and will definitely be taking steps to improve. A few more comments concerned the lectures. Some of you asked for lectures in slide format, projecting slides and teaching around them. However, this is a pretty mathy class, and at least in my opinion slides might not be the best approach, because there'd be a lot of back and forth between the slides and the blackboard, which can be pretty distracting for somebody watching the videos — and math is probably better explained on a whiteboard than on slides. Several of you asked that I write bigger on the whiteboard; thank you for that feedback, and I'll make a conscious effort to. If any of you have trouble reading what's written, please stop me right then and tell me you cannot see it — I have no problem erasing it and writing it all over again. Also, some of you mentioned the lectures are going too slow, and some that they're going too fast. One way to accommodate both needs is probably to batch some of the questions from the audience: hold off until I complete a section, and then ask your question, in case it gets answered in the meantime. And if a question isn't directly related to the lecture itself, I might ask you to just post it on Piazza.
And when I ask you to post on Piazza, it's a sincere request that you actually do post it there, because I do want to answer it — I just don't want to hold the hundred or more of you waiting while I answer a question that may be of interest to only a few of you. Some of you have also given feedback about the difficulty of the homeworks: some find the math too hard, some find the programming too hard. I don't know if there's a clear action we can take there, because machine learning is indeed a field that spans computer science and statistics — it will be hard on math and hard on programming; that's just the nature of machine learning. The homework assignments might also feel a little long. However, in this course we're just following the pattern that's been followed in previous quarters: each homework is not more or less than what's been done before, and that's just the way the course is structured — it is a workload-heavy course. In terms of help with programming: in general, in office hours we tend not to focus on helping students debug their code. We expect students to be comfortable with programming and able to implement things on their own. Going forward, if there's no backlog in the queue, the TAs might help you a bit if nobody else is waiting, but the expectation for the course is that you come with basic programming skills and have no problem writing simple code given an algorithm. At the same time, we don't want to leave you stuck, so if you need help with programming in particular, try attending office hours early in the homework cycle, when the queues are free and the TAs can help you out; if you attend office hours when a homework deadline is close, the TAs will have a hard time helping you with programming. And the feedback form is still open — I'll keep checking it every once in a while. Please feel free to give us candid feedback on how we can improve, and we'll do our best to make changes and accommodate as many needs as possible. Okay. With that, let's start with a quick recap of the last lecture, and a quick summary of where we stand in the overall course. In the last lecture — really the last two lectures — we've been talking quite a bit about the bias-variance tradeoff, and understanding this tradeoff in a deep way should probably be your highest-priority takeaway from this course. The bias-variance tradeoff is so fundamental that understanding it intuitively and deeply will probably serve you the longest.
It probably has the largest payoff for your future machine learning career. So, to summarize the bias-variance tradeoff: in machine learning we're interested in generalization error — how well our model performs on unseen data, not on the training data we have. The test error, or generalization error, can be decomposed into three additive parts. One is irreducible error, which is just the noise in the test examples: our training data is noisy and our test data is noisy, and the noise in the test data is what contributes to irreducible error. There's nothing we can do about it — it's a fact of life and we accept it. Then there are two other components, bias and variance. Bias you can think of as the systematic error of your model: if you average across all the possible training sets you could get, the systematic error your model makes — underpredicting or overpredicting certain examples, or having parameters closer to 0 than they should be — that systematic error is bias. Variance is how sensitive your model is to noise in the training data. This is where the distinction between noise in the test data and noise in the training data comes into the picture: noise in the test data contributes to irreducible error, one component of the generalization error, and noise in the training data contributes to the variance of your model. As for the steps we can take to reduce test error, there are very many: you can try a bigger model, you can get more training data, and so on. However, not all steps work all the time, and in order to determine which step is most effective for the problem immediately in front of you, it's crucial to understand the bias-variance tradeoff. Reducing training error is simple: you can always get a bigger, more complex model, and that will reduce your training error — but our goal is to improve test error. The steps we take will depend on whether the problem you're facing is mostly a bias problem or mostly a variance problem. The bias-variance tradeoff tells you that by taking some step you may reduce one of the two, bias or variance, but that may also increase the other — which is why it's so important to characterize which of the two is the bigger problem at hand and take a step that purposefully goes after it. For example, we saw that if we increase model capacity — move to a more complex model — that reduces the bias of the model, but at the same time it may increase its variance. So if the problem you're facing right now is high variance, you should almost certainly not use a bigger model, because that will worsen the problem you have.
And similarly with regularization: increasing regularization reduces the variance of the model, but it also increases its bias. That means if your problem right now is high bias, increasing regularization will make it worse, whereas it would have helped if the problem were high variance. This is why it's super important to characterize, or at least heuristically estimate, the contribution of bias versus variance to your current test error. Unfortunately, there's no principled way to compute this decomposition exactly in practice, which is why we use heuristics: think of the training error itself as (roughly) the bias, and think of the gap between the cross-validation error and the training error as (roughly) the variance. This is where cross-validation comes into the picture — the most important use of cross-validation is that it lets us get this decomposition into bias and variance for the current model. Once we have this breakdown, we can make a judgment about whether our model is currently facing a larger bias problem or a larger variance problem, and take a remedial step according to whichever is larger. So the summary is: whenever you're working on model development and want to improve model performance, you should always, always, always look at the training error and the cross-validation error simultaneously. We care about improving the cross-validation error — the test error — but looking at that alone gives you nothing actionable. To get something actionable, you need to see both the training error and the cross-validation error at once, make a judgment about whether you're in a high-bias or high-variance situation, and then accordingly take some action to remedy it. This is probably the single most important thing that will serve you well for the rest of your machine learning career. So that's the bias-variance tradeoff. Now, to take a larger look at where we stand in the overall course, here's a quick overview of what we've covered so far. In roughly weeks 1 through 4 we covered supervised learning. What is supervised learning? It's learning some function h, which we call the hypothesis, that maps x to y. There's a clear concept of input and output in supervised learning — which will not be the case in unsupervised learning — and we have noisy examples of input-output pairs. Most of the time the noise is embedded in the label, in the output, and we get a training set of n of these (x, y) pairs. From this training set, our goal is to learn a hypothesis — a function mapping x to y — that generalizes well to unseen data. There's some notion of a loss function, and we want to minimize that loss on unseen data.
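The heuristic itself is simple enough to write down in code. Here's an illustrative sketch — the threshold logic and the suggested remedies are my own caricature of the decision process, not a rule from the lecture.

```python
def diagnose(train_err, cv_err, target_err=0.0):
    # Heuristic decomposition from the lecture: training error ~ bias,
    # the train/CV gap ~ variance. The comparison below is illustrative.
    bias, variance = train_err - target_err, cv_err - train_err
    if bias >= variance:
        return "mostly bias: try a bigger model or less regularization"
    return "mostly variance: try more data or more regularization"

print(diagnose(train_err=0.15, cv_err=0.17))  # a high-bias situation
print(diagnose(train_err=0.02, cv_err=0.15))  # a high-variance situation
```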
If the problem were just to minimize the loss on the given data, we'd simply call it an optimization problem. It's machine learning because we want to do well on unseen data — and that's where the concept of bias and variance comes into the picture. We've been seeing roughly two kinds of algorithms: classification versus regression, a categorization based on the data type of the y variable. If y is binary — 0 or 1 — we call it classification; and it needn't be just binary, it could be multi-class. For example, in the homework you're building a classifier for handwritten digits on MNIST, where y can be anything from 0 to 9. That's classification, where y is discrete. In cases where y is continuous, we call it regression. We started with two very simple models, linear regression and logistic regression, for regression problems and classification problems respectively. After seeing those two, we generalized the data type of y — beyond this strict regression-versus-classification split — into GLMs, generalized linear models, where y given x is some distribution in the exponential family. It could be discrete, it could be continuous, it could be Poisson (just integers), it could be positive-only; as long as the y-given-x distribution belongs to the exponential family, we can solve it with common update rules, along with the other niceties we saw. So generalized linear models aren't just for classification or regression or Poisson — they generalize the data type of y to essentially any kind of data type. We also saw two different approaches to building models: discriminative versus generative. In discriminative models, we directly learn p(y | x) — a generalized linear model is an example of a discriminative model, where we're only interested in learning the probability distribution of y when the corresponding x is given to us. The other approach, generative models, learns how to generate new examples — full (x, y) pairs — and this can generally be broken down using the chain rule as p(y) times p(x | y). We saw a few examples of generative models: Naive Bayes, where x is discrete, and GDA, Gaussian discriminant analysis, where x is real-valued. We also saw that with GDA there's a one-to-one correspondence with logistic regression; in fact, for any x given y belonging to an exponential family, p(y | x) will take a logistic regression form — we saw that in the homework as well. Then we moved on to nonlinear models. We actually explored nonlinearity using feature maps in the first homework already, where trying different feature maps gave nonlinear hypotheses; the more formal approach to feature maps that we saw was kernels.
Kernels are symmetric positive definite functions with an implicit feature map embedded in them. Using kernels we can build what we call kernel methods, and we saw two examples: support vector machines for classification problems and Gaussian processes for regression problems. Both use kernels, and the SVM in particular is especially well suited to being a kernel method. That's because with non-kernel approaches, the complexity of the model depends on the number of parameters, which depends on the number of features; with kernel methods, the complexity depends on the number of examples. Those are the two sides of your design matrix: with nonlinear models we learn one parameter per feature — per column — and with kernel methods we learn one coefficient per example — per row. So kernel methods in general tend not to work so well when the number of examples is large; with a very big dataset they tend to be more expensive. But we saw that in SVMs in particular, the set of coefficients we learn happens to be sparse, which gives you the best of both worlds: good scalability in the number of features, because kernels can give you effectively infinitely many features, and good scalability in the number of examples, because the coefficients are mostly 0 even with large datasets. That's why SVMs tend to work well. Then we moved on to another kind of nonlinearity. Kernel methods were one approach, where the feature map of the kernel is hand-designed — you decide which kernel to use. The other approach is to again use data to learn what good feature maps are, and that's where neural networks came into the picture: you can think of a neural network as a generalized linear model at the last layer, with everything before it acting as a learnable feature map. After that we moved on to learning theory. The main topics we touched on were regularization and its Bayesian interpretation — how regularization corresponds to performing MAP estimation in the Bayesian setting — bias and variance and the bias-variance tradeoff, which we covered already, and uniform convergence. Uniform convergence is the more theoretical part of the course, and in general, understanding the lecture notes and lectures on uniform convergence can be very useful if you're interested in getting into machine learning research, because that's the common framework in which most machine learning theory problems are posed. And that brings us up to the last lecture — so here's where we are right now. Today and in the next lecture we'll be covering reinforcement learning. It's going to be a very quick overview; we won't go very deep.
There are multiple courses offered only on reinforcement learning — it's a super vast field — but we'll cover just the basics, giving you a big picture, a mind map, of how the different reinforcement learning algorithms, including the ones we won't cover, all fit together. That's the plan for reinforcement learning. Starting next week we'll begin unsupervised learning. Personally, I like unsupervised learning the most — it's very exciting. Once we finish unsupervised learning, we'll cover some general topics: things like evaluation metrics, and maybe a few theoretical ideas like KL divergence — topics that don't fit neatly into the supervised-versus-unsupervised classification. Finally, in the last week, week 8, we'll have only two lectures, Monday and Wednesday, in which we'll do a full review of the entire course, with the hope that it prepares you well for the final exam. Any questions before we move into today's topic of reinforcement learning? Yes, question. Could you explain the motivation for why the training error approximates the bias, and similarly why the gap to the cross-validation error approximates the variance? — I'd say post it on Piazza, and I'll be happy to give you more details there. All right, let's start with reinforcement learning. In the topics we've covered so far, which have been mostly supervised learning, we were always told, for a given input or situation, what the correct answer is: for every x we had a corresponding y — the correct answer our model should learn to predict — given directly to us. That's why we call it supervised learning. In reinforcement learning, this kind of supervision is weaker. Instead of being told what to do in a given situation, we're given some kind of reward. So in RL, one way to think of it is: replace y, the supervision, with a reward. The reward is always real-valued — it could be positive, it could be negative — and the way to think of it is as a measure of how good a job you did. There's no "right" reward, so to speak; there is no optimal reward as such. The goal in reinforcement learning is to maximize our long-term reward, which means we're dealing with situations that change over time. In supervised learning we dealt with fixed situations, and each example was what we call IID — independent of the others. In reinforcement learning, we're trying to program an agent that lives in an environment — the real world, or a simulator — and performs well over time. Examples of such agents: a robot learning how to walk; a game-playing agent that plays chess or Go; or some kind of automated trading agent in a financial market. These are settings where you make multiple decisions over time, and after each action you get some reward — and the goal with reinforcement learning is to maximize this accumulated reward over time.
That means we need to take a long-term view of optimizing how much reward we're going to accumulate, starting from now all the way into the future. This is very different from the supervised learning setting, where we were given an example, our only concern was to do well on that example alone, and the next example was completely independent. It's this concept of time that makes reinforcement learning different from general supervised learning: you're making multiple decisions, one decision at a time, each decision earns some reward, and our goal is to maximize this accumulated reward into the future. The way we formalize reinforcement learning is through a formalism called an MDP, a Markov decision process. An MDP is a 5-tuple — a tuple containing five things: (S, A, {P_sa}, gamma, R), where S is a set of states, the set of all possible states our agent can currently be in; A is a set of actions; the P_sa are transition probabilities; gamma, in [0, 1), is the discount factor; and R : S × A → R is the reward function. Now, let's look at an example of what all of these actually mean. This example comes from a very famous textbook on artificial intelligence, and it's a standard example used in many courses, so you might have seen it before. Suppose there's an agent that lives in some kind of grid map, where at any given time the agent can be in one of these cells, and our goal is to reach this cell: you get a reward of +1 if you reach here, and a reward of -1 if you reach there, and our goal is to solve this. By solving, I mean the following. First of all, what is S? S is the set of all possible states our agent can be in — in this case the set {(1,1), (1,2), ...}. The size of our state space in this case is 11 — the agent can be in any of 11 cells: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 — and this cell is blocked; you cannot be in that state. The set of actions we can take moves the agent one step at a time in any of the four directions: move north, move south, move west, or move east — four simple actions. The probability P_sa is called the transition probability, and it's a probability distribution over the states. What it means is: if we're in the particular state indexed by s and we take the action indexed by a, it gives the probability that we'll end up in each of the other states. So it's a probability distribution over the state space with two indices: the current state and the action we're taking.
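Just to fix notation in code, here's one minimal way to carry the 5-tuple around — the type encodings (tuples for grid states, strings for actions, a state-only reward, which is the convention the lecture mostly uses) are my own choices for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]   # e.g. grid coordinates like (1, 3)
Action = str              # e.g. "N", "S", "E", "W"

@dataclass
class MDP:
    states: List[State]
    actions: List[Action]
    # P[(s, a)] maps each successor state s' to Pr(s' | s, a)
    P: Dict[Tuple[State, Action], Dict[State, float]]
    gamma: float                    # discount factor in [0, 1)
    R: Callable[[State], float]     # reward for entering a state
```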
For example, suppose you're building a robot that has to move in this kind of map, the robot is in (1,3), and the action we take is "go north" — so this is P_sa with state (1,3) and action north. Say there's some uncertainty in the robot: whenever we ask it to go north, with 80% probability it actually goes north, with 10% probability it goes west, and with 10% probability it goes east. Then P_(1,3),N is a probability distribution with 0.8 corresponding to the state (2,3), 0.1 corresponding to (1,2), 0.1 corresponding to (1,4), and zeros everywhere else. Okay — is this clear? In every state we take an action, and taking that action lands us in a new state; that transition to the next state is stochastic, and the stochasticity is captured by this probability distribution, which tells us with what probability we're going to end up in any given state, depending on what action we took and what state we were currently in. Yes, question? So the action set — can you always take every action for certain? I'll come to that; for simplicity, assume for now that we can take any action in any state. So that's the transition probability. Then there's the discount factor gamma — we'll come to that — and the reward function. The reward function gives the reward our agent earns immediately upon being in a particular state. For example, if the agent reaches the state we want it to reach, it earns a reward of +1; if it enters the state we want it to avoid, it earns -1. And let's say the reward at all the other cells is a small negative reward of -0.02. These are the immediate rewards the agent gets for entering a particular state. Most of the time the reward function is written as a function of the state alone, giving some real-valued reward; sometimes it's also modeled as R(s, a), the reward for being in state s and taking action a. The reason we sometimes model it over state-action pairs rather than the state alone is that taking different actions may have different costs: even though you earn the same reward for being in a state, the action you took may cost differently, and modeling R over (s, a) accounts for such action-specific costs. But in terms of what we're going to cover, we'll mostly focus on reward functions of the state alone and assume all actions have equal cost. Yes, question? I'm sorry — how do we know what reward to use? In practice, how do we know — we'll get to that; it's part of how we define the problem itself.
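That transition distribution is easy to write down concretely. A sketch of P_(1,3),N from the example above, using the dictionary encoding from the MDP sketch:

```python
# Transition distribution for state (1, 3), action "N":
# 80% we actually go north, 10% we slip west, 10% we slip east.
P_13_N = {(2, 3): 0.8, (1, 2): 0.1, (1, 4): 0.1}
assert abs(sum(P_13_N.values()) - 1.0) < 1e-12  # a valid distribution
```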
Now, in terms of what happens when we start an agent: we assume the agent starts in some initial state s_0 and takes a first action a_0. By taking action a_0 in state s_0, we enter a new state s_1 according to the transition probabilities: s_1 is sampled from P_(s_0, a_0). Once we're in s_1, we take action a_1 and enter state s_2, where s_2 is sampled from P_(s_1, a_1). Then we take action a_2 into s_3, then action a_3, and so on. This sequence of states and actions goes by multiple names in different books and resources: you can call it a trial, an episode, or a trajectory. All of these are synonymous for a sequence of states and actions, where the agent chooses the actions and the environment lands you in the new state according to the transition probabilities. Yes, question? Why is the probability of going north when we're at (1,3) a vector? In general, the way to think of reinforcement learning is that you're an agent living in some kind of environment: you're in one particular state and you take an action, and by taking that action you end up in another state — but the state you end up in can be random. Say you're building a robot and you ask it to move north; for whatever reason — maybe there's a stone under its wheel, or something else beyond your control — the robot may end up somewhere other than where you intended. Maybe you ask it to move a meter in one direction, and in reality it moves 95 centimeters or 105 centimeters. There's stochasticity in the environment, so an action won't always take you to the same state every time. That's why, from (1,3), going north takes you to (2,3) with probability 0.8 most of the time, but sometimes to (1,2) or (1,4), each with probability 0.1. Yes, question? [inaudible] Would that stochasticity also cover, say, what an opponent might do in a game? I think you're asking about game playing, where there's an opponent — I'll push that question until later. For now, assume the environment is stochastic for whatever reason; in a game-playing scenario it could indeed be due to the opponent. So, we call this a trial, an episode, or a trajectory, and all three names are synonymous. Now, as an agent goes through a particular trajectory, at each state it earns some reward.
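Here's what sampling such an episode might look like in code, under the same dictionary encoding as above — everything here, the two-state demo included, is illustrative, not from the lecture.

```python
import random

def sample_episode(s0, policy, P, R, steps=10, seed=0):
    """Roll out s0, a0, s1, a1, ... for a fixed number of steps,
    collecting the reward earned in each visited state."""
    rng = random.Random(seed)
    s, rewards = s0, [R(s0)]
    for _ in range(steps):
        a = policy(s)                 # a_t = pi(s_t)
        succ = P[(s, a)]              # distribution over successor states
        s = rng.choices(list(succ), weights=list(succ.values()))[0]
        rewards.append(R(s))
    return rewards

# Tiny two-state demo: from "s", the only action "a" may keep us in "s"
# or move us to the absorbing, rewarding state "t".
P = {("s", "a"): {"s": 0.5, "t": 0.5}, ("t", "a"): {"t": 1.0}}
R = lambda state: 1.0 if state == "t" else 0.0
print(sample_episode("s", policy=lambda s: "a", P=P, R=R, steps=5))
```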
So over here, the agent earns R(s_0) and lands in state s_1, where it earns R(s_1); from there, R(s_2); and so on — the agent accumulates rewards at each step. And this is where the discount factor comes into the picture. The discount factor is a factor we multiply all future rewards by: the reward for being in s_1 is discounted by a factor gamma, the reward at s_2 by gamma squared, the reward at s_3 by gamma cubed, and what the agent accumulates over the future is the sum of these discounted rewards — and this sum of accumulated discounted rewards is what we want to maximize. The reason we call it a discount factor: one way to think of it is that any positive reward is better obtained sooner rather than later, so the discount factor incentivizes the agent to earn larger rewards sooner, and to postpone any negative rewards into the future. Another common interpretation, especially in a finance setting, is to think of the discount factor as an interest rate: you want to earn your money sooner rather than later, and you want to postpone your losses. Yes, question? [inaudible] Let's push game playing — we'll cover it tomorrow; for now, assume only the setting where we have an agent working in some kind of environment. [inaudible] If you're trying to maximize winning, what about playing suboptimally along the way? We're going to touch on exactly that topic of suboptimality. So this is the setting in which the agent operates: we want to design an agent that, given this Markov decision process, maximizes the sum of accumulated rewards into the future, where the rewards are discounted by the factor gamma. There are many good reasons to have this discount factor; among the interpretations, you can think of gamma as an interest rate, or as a motivation to make the agent get rewards sooner rather than later. Yes, question? Is it the same gamma at every step? Yes, it's the same gamma we use at all time steps. Is there a particular reason? Yes, there is — we'll cover shortly why we use the same gamma. Now, given this setting, we can define two central concepts, and these two concepts are going to stay with us for the rest of reinforcement learning. We're going to define something called a policy pi, which is a mapping from states to actions.
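The discounted sum itself is one line of code — a tiny sketch showing that the same reward is worth less the later it arrives:

```python
def discounted_return(rewards, gamma=0.9):
    """R(s0) + gamma*R(s1) + gamma^2*R(s2) + ..."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# The same unit reward, earned now versus two steps later:
print(discounted_return([1.0, 0.0, 0.0]))  # 1.0
print(discounted_return([0.0, 0.0, 1.0]))  # 0.81
```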
The policy tells you what action to take when you're in a given state, and it's the thing the agent has the flexibility to learn. Our goal is to learn a policy — this function that maps states to actions — that maximizes something called the value. The value function associated with a policy pi is a function V^pi from states to real values, where V^pi(s) is defined as E[R(s_0) + gamma R(s_1) + gamma² R(s_2) + ... | s_0 = s, pi]. That is, we define the value of a state, when following a particular policy, to be the expected sum of all future rewards, starting from the given state all the way to infinity, where the reward at each time step is discounted by the corresponding power of gamma. The argument s of the value function specifies what the starting state is. So the value of a particular state under a policy pi is the expected discounted sum of all future rewards, assuming we start at the given state and the actions we take at each step are according to the policy that's given to us: you start at the given state, start taking actions according to the policy, the environment — based on the transition probabilities — keeps throwing you into different states, and when you arrive in a new state, you refer to your policy, take the next action, and keep going; as you go from one state to another, you accumulate rewards, each discounted by the corresponding gamma factor. The value of the policy at a given state is this expected sum of future rewards. The reason we use an expectation here is that the rewards are random; the rewards are random because the state we end up in is random; and the state we end up in is random because of the transition probabilities. If the transition probabilities were not stochastic — if we always reached the same state when taking a particular action — we wouldn't need this expectation; but because the transitions are stochastic, the value is defined as the expected sum of future discounted rewards. Any questions about this? Yes, question. [inaudible] — Yes, that's a comma: we condition on s_0 = s and on following pi. All right. So this is called the value function and this is called the policy, and as we'll see, probably tomorrow, these two concepts are the heart of reinforcement learning, because you can classify the entire field into methods that try to learn good value functions versus methods that directly try to learn policies. These are central concepts we're going to encounter over and over. And it's this value function that is related to rewards, but also different from rewards: the reward is the immediate reward you get just by entering a state, whereas the value is the long-term benefit — the sum of all future rewards you'll accumulate by following a particular policy.
So think of the reward as short-sighted — what you get immediately by entering a particular state — whereas the value is the long-term payoff of following a particular policy. Yes, question? [inaudible] So the value keeps changing depending on which state you end up in? — The question is about the states you end up in being different, because the dynamics are stochastic — and that's exactly why we have the expectation. Based on the notation, is s_0 not fixed? — In this expression, s_0 is not random, because we've conditioned on s_0 being s, the argument of the function. All the future states s_1, s_2, ... are random. Okay. So we can now rewrite this in a slightly different way, by using the fact that s_0 is not random: V^pi(s) = R(s) + gamma Σ_{s'} P_(s, pi(s))(s') V^pi(s'), because s_0 was just equal to s. How did we go from the first equation to this one? By recognizing that the original expression can be written as R(s) + gamma E[R(s_1) + gamma R(s_2) + gamma² R(s_3) + ...]: since R(s_0) is not random, we took it out of the expectation, and all the remaining terms share a factor of gamma, so we took that factor out too. What we end up with is the expectation over all the rest — and we recognize that this whole thing is just the expected value of V^pi(s_1), the same pattern one step later. By recognizing this recursive pattern, we can write the recursive statement that V^pi(s) equals the immediate reward we obtain by being at s, plus the discount factor times the expectation of the future value — each possible successor state's value, weighted by the probability that we end up in that state by taking the action the policy prescribes. Yes, question. So the s' are the immediate successors of s? — Yes, the s' are the immediate successor states. Just for the fact that we're in state s, we earn R(s). Then we take some action according to pi(s) — the policy tells us what action to take — and by taking this action we can end up in any of the possible successor states, which we denote s'; for landing in a particular s', we earn V^pi(s') as the future value. So the overall value of a state decomposes into these two parts: the immediate part, and everything else that follows. Yes, question. Can you clarify the notation? — This is the transition probability P_sa, where the action a here is pi(s): you're following the policy's choice of action.
So think of this as P_{sa}, which is a probability distribution over all the states: P_{sa}(s') tells you the probability of ending in s' if you were in state s and took action Pi(s). And depending on which state we end up in, we get the corresponding value out of it — so this term is just the expected future value of taking the action prescribed by Pi(s). Yes, question. For V^Pi(s), is that just a sum, not an expected value? So there was this expectation, and we could take R(s_0) out because it was not random; what remains on the right is what we're calling the expected future value. Is it an expected value? Yes — s_1, s_2, s_3 are still random, so we still need the expectation. And this equation, in which V^Pi is defined recursively — we have V on the left-hand side and V on the right — is called the Bellman equation. Let's put a box around it, because it's important. All right. So to quickly summarize: we have two central concepts, policy and value. The policy is simply what actions to take when we are in a given state — a simple function, nothing fancy there. The value is the long-term extension of the reward: the reward is the short-term, short-sighted, immediate gratification of being in a particular state, whereas the value is the total sum of discounted rewards we will make in the long run by following a particular policy. And this distinction between immediate reward and long-term value is at the heart of reinforcement learning, because if our goal were just to maximize the immediate reward, we would be back in the supervised learning setting. It is because we want to maximize this long-term value that we may need to sacrifice some short-term rewards — and that is what makes reinforcement learning hard and distinct from supervised learning. Yes, question. In that sum, do we account for all possibilities starting from s? Yes — the question is whether this expectation covers all possible future scenarios, and it does: you may win, you may lose, your robot may crash right away or reach its goal — all the scenarios that may happen by following the policy Pi. So our goal is to find that Pi? Our goal, which we're going to talk about next, is to come up with a policy that maximizes the values for all states. Any other questions? All right. So before we go into maximizing these value functions, first let's solve this problem: given a policy, how do we calculate V^Pi?
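Before the closed-form approach developed next, here is a minimal Monte Carlo sketch of the definition itself — simulate the policy repeatedly and average the discounted returns. This is my illustration, not an algorithm from the lecture; the array names and the MDP interface (P[s][a] a distribution over next states, R[s] the reward for being in s) are assumptions, and the infinite sum is truncated at a finite horizon:

```python
import numpy as np

def mc_value_estimate(s0, pi, P, R, gamma=0.9, n_rollouts=5000, horizon=200):
    """Estimate V^pi(s0) by averaging discounted returns over rollouts.

    P[s][a] is a length-|S| probability vector over next states,
    R[s] is the reward for being in state s, pi[s] is the action at s.
    """
    n_states = len(R)
    returns = []
    for _ in range(n_rollouts):
        s, g, discount = s0, 0.0, 1.0
        for _ in range(horizon):                 # truncate the infinite sum
            g += discount * R[s]                 # earn R(s) for being in s
            s = np.random.choice(n_states, p=P[s][pi[s]])  # stochastic step
            discount *= gamma
        returns.append(g)
    return np.mean(returns)                      # average over random trajectories
```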
So here is the picture to have: these two concepts, value and policy, you can think of as duals of each other — there is a relation between them such that given one, the other gets induced. We have the policy, which takes us from S to A, and the value, which takes us from S to R. And again, value is distinct from reward: reward is immediate gratification, value is the long-term payoff. That means that in order to maximize value, you may have to sacrifice immediate rewards when there is a prospect of doing very well in the long run — that's the key distinction between reward and value. Now, given a policy Pi, we can calculate the corresponding value function V^Pi — so Pi takes us to V^Pi, and we'll see how. This should be intuitive: if you start from a given state s and follow a particular policy, follow the same policy over and over, repeat such trials, and average out the total discounted rewards you end up collecting, that average is V^Pi(s). So Pi implicitly defines a value for each state. And correspondingly, from V^Pi — or from any value function V — you can construct a corresponding policy Pi: the action you take at each state is the one that gives you the highest probability of reaching the successor states with the highest values; try to move toward the next state with the highest value. That implicitly defines a policy starting from a value function. So you can go from Pi to V^Pi, as we'll see next, and from V to Pi, in a fairly natural way: given a Pi, if you just follow it over and over, you can find out what V^Pi is; and given a value function — if you're told, for each cell of your grid map, what its long-term value is — then from your current state, take the action that with high probability moves you toward the next state with the highest value. So they are duals of each other. Yes, question? Does the policy take the time step into account — whether it's time t or later? The policy asks you to take a particular action a when you are in a particular state, no matter how far into the trajectory you are. You always take the same action in a given state, ignoring all the history: you may be entering that state for the first time or the thousandth time, but if you're following a policy, you always take the same action. Yes. A question about the arrow that goes from V to Pi? So V, say on the grid, is essentially just a bunch of numbers, right?
It tells you what value you get if you start from each state, whereas Pi tells you, if you're here, what you should do. But doesn't V^Pi just tell you the value of following some unknown good trajectory — like the best possible one? No. V^Pi does not tell you the best possible value you can get; it tells you the value you get by following this particular policy. It's not the best possible policy and not the best possible value — it's the expected value from following Pi. Then how do you go from a grid of numbers to a policy — that arrow? We'll come to this. The takeaway here is that there is a relation going from Pi to V^Pi and a relation going from V to Pi, and we're going to define precisely how to compute both. So first, let's talk about going from Pi to V^Pi. For this, we start with the Bellman equation: V^Pi(s) = R(s) + Gamma * sum over s' in S of P_{s Pi(s)}(s') * V^Pi(s'). This is just the Bellman equation written again. Now, assuming we have a finite number of states, V^Pi is a vector — a vector of real numbers with one entry per state: V^Pi(s_1), V^Pi(s_2), and so on. We've seen this view before, of switching between vectors and functions: a vector and a function on a finite domain are essentially one and the same. So think of V^Pi as a vector having one value per state. Now we want to solve for V^Pi. Assume we know R(s) for every possible state, and assume we know the transition probabilities from any state and any action to any other state — that is, assume we know R, P, and Gamma. Then for each possible state s we get one such equation, and everything in it is linear, so we can construct a set of linear equations: one for s_1, then V^Pi(s_2) = R(s_2) + Gamma * sum over s' in S of P_{s_2 Pi(s_2)}(s') * V^Pi(s'), and so on, all the way to V^Pi(s_|S|). Each of these V^Pi(s) terms is a scalar variable we want to solve for; there are |S| such variables and |S| such equations, related only linearly — all the other quantities are known constants. So this can be solved with any linear solver. To make this more concrete, let's define a matrix P^Pi of size |S| by |S|, with one row per state and one column per state.
In P^Pi, each row is the distribution P_{s Pi(s)}, where s indexes the row: row i is P_{s_i Pi(s_i)}. In other words, collect the transition probabilities P_{s_i a_i}, where the action a_i = Pi(s_i) is chosen according to the policy, and arrange them as the rows of a matrix. You're just selecting, for each state s_i, the transition distribution under the action the policy prescribes there, and stacking these rows into the matrix we call P^Pi. With that matrix, we can rewrite the whole system in a more compact form: V^Pi = R + Gamma * P^Pi * V^Pi. Question — what does |S| stand for? It's the size of the set S: how many states you have. So we can write the system in this compact way. Yes, question? Is this really useful, given how many policies there could be? Well, you could have any number of policies, but we are assuming we already have P_{sa} for every possible (s, a) pair — we know the model, the transition probabilities, because that's part of the MDP definition. Yes, question. Over here, should it be V^Pi(s) or V^Pi(s')? It should be s', because we're taking the expectation over all future states — you're right, that's a typo; only the policy's argument is s. Thank you. All right, so this set of equations can be compactly written in vector notation, and once we write it that way, we immediately get the solution: move the V^Pi term to the other side and invert, giving V^Pi = (I - Gamma * P^Pi)^(-1) * R. Here R is a vector of size |S|, (I - Gamma * P^Pi) is the |S| by |S| matrix we just constructed, and V^Pi is again a vector of size |S|. So this gives us a way of starting from a policy Pi and calculating the long-term value of each state if we follow that policy at all times — this is arrow 1 in our picture. According to whichever policy you have, you end up with a different P^Pi; plug that P^Pi in, with R and Gamma given, and compute V^Pi. Yes, question. So row i of P^Pi gives, from state s_i, the probability of reaching every state? Yes — each row tells you, if we are at state s_i and take the action Pi(s_i), the set of all possible states we can go to, with the corresponding probabilities. So this tells us how to go from policy to value.
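As a concrete illustration, here is a minimal NumPy sketch of that closed-form computation, under the same assumptions (known R, P, and Gamma; the array shapes and names are my choices, not the lecture's):

```python
import numpy as np

def policy_value(P, R, pi, gamma=0.9):
    """Solve V^pi = (I - gamma * P_pi)^{-1} R exactly.

    P has shape (n_states, n_actions, n_states): P[s, a, s'] = P_{sa}(s').
    R has shape (n_states,); pi has shape (n_states,) of action indices.
    """
    n_states = R.shape[0]
    # Select, for each state s, the row P_{s, pi(s)} -- this is the matrix P^pi.
    P_pi = P[np.arange(n_states), pi]            # shape (n_states, n_states)
    # Prefer solve() over forming an explicit inverse, for numerical stability.
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
```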
Yes, question. [inaudible] No, we're not making an assumption that you cannot go back to a state — you can always come back to the same state again; there is no such restriction. So will it still converge? We will converge, and we will see why shortly. So that's how to go from policy to value: according to different policies, we get a different P matrix, because we choose different rows according to the corresponding actions. Plug that matrix in, construct the related matrix (I - Gamma * P^Pi), invert it, multiply by R, and you get V^Pi. Now, recognizing that given a policy we can construct its value function, let's define something related — we'll come to the other arrow shortly. We define the optimal value function as V*(s) = max over Pi of V^Pi(s). The optimal value function tells us, if we are at a particular state, the highest value we can obtain by choosing the best possible policy. V^Pi(s) is the long-term payoff — the expected sum of discounted rewards — from following the policy Pi, and V*(s) is the maximum possible value across all possible policies Pi. Is s the initial state? Yes — the question is whether s is the initial state. By the definition of a value function, we assume we are at the current state; it doesn't matter whether this is the initial state or you somehow landed here. If we start at a particular state s, no matter what the history was — even if it's the first state — V*(s) is the maximum possible expected sum of future rewards over the entire policy space. V*(s) can therefore also be written as V*(s) = R(s) + max over a in A of Gamma * sum over s' in S of P_{sa}(s') * V*(s'). We get this by plugging the Bellman equation into the definition of V* and noting that R(s) is a constant that depends on neither a nor Pi, because the policy only determines what happens in the future, whereas R(s) is what we already get. So V*(s) is R(s) plus the maximum, over actions, of the expected discounted sum of future rewards. This equation looks very similar to the original Bellman equation, with two differences: instead of the policy's action Pi(s), we have P_{sa} with a maximization over a; and instead of V^Pi on the right-hand side we have V*, by the recursive definition. Now, the Bellman equation for V^Pi we could express in a nice linear form and obtain a closed-form solution. But for V*, because of this max operator, it is simply not possible to solve it linearly.
So if you're given a policy, it's very easy to calculate the long-term value under that policy; but calculating the best possible value — V*, the optimal value function — is not as easy as solving a linear system. We saw how to go from a policy Pi to V^Pi using the closed-form approach; now let's see how to go from a value function back to a policy. Given the optimal value function — the thing we are striving to compute — we define the corresponding optimal policy as Pi*(s) = argmax over a in A of the sum over s' in S of P_{sa}(s') * V*(s'). The optimal policy tells us that the action to take at state s is exactly the action that achieves the optimal value there. Question: why is the second equation not easy to solve? With a finite set of actions, can't you enumerate them, take the max, and solve? So the question is why this is hard, given that there are finitely many actions. In cases where you have a finite state space and action space, you can calculate this iteratively — but it involves iteration, whereas for V^Pi there was no iteration: you had a direct closed-form solution. By "hard," I mean there is no closed-form solution. All right. So this is the optimal value function, and this is the optimal policy, defined as the policy that achieves the optimal value function: at each state it takes exactly the action that maximizes the same right-hand side. So: we started with the two central concepts, policy and value. Given a policy, we can calculate the corresponding value function — the value of just following that policy — with a closed-form expression. And for any given value function, we can define a policy that takes the action maximizing the expected value under that value-function estimate; when we use the optimal value function here, we get the optimal policy. That is how we move from a value function to a policy. Note that these are just mathematical relations, not algorithms: the optimality equation is a necessary condition that V* must satisfy if it is the optimal value function — it does not tell you how to calculate V* — and the argmax expression is the optimal policy corresponding to V*.
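Here is a minimal sketch of that value-to-policy direction — extracting the greedy policy from a given value function. The array shapes are the same illustrative ones as before, not fixed by the lecture:

```python
import numpy as np

def greedy_policy(P, V):
    """Extract pi(s) = argmax_a sum_{s'} P[s, a, s'] * V[s'].

    P has shape (n_states, n_actions, n_states); V has shape (n_states,).
    """
    Q = P @ V                     # expected successor value, shape (n_states, n_actions)
    return np.argmax(Q, axis=1)   # best action per state
```

Feeding in V* would yield Pi*; feeding in any other value function yields the policy that acts greedily with respect to it.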
With these two observations, we can now build two algorithms to actually calculate V* and Pi*: one called value iteration, the other called policy iteration. First, value iteration. Value iteration is an algorithm — it tells us how to compute, not just a mathematical relation stating a necessary condition. Step 1: for each state s, initialize V(s) = 0. Step 2: repeat until convergence — for every state s, update V(s) := R(s) + max over a in A of Gamma * sum over s' of P_{sa}(s') * V(s'). So: initialize your value function to be 0 at all states, and then repeatedly update V(s) according to this equation — the same necessary condition from the optimal value equation. If V were the optimal value function, it would satisfy this exactly; here we pretend that the current V is optimal, and by taking that necessary condition, we construct an update rule. Now, the question is: why will this converge — will it even converge? We took a mere condition and converted it into an update rule, where on the right-hand side we have the old values, and we assign the result to the left-hand side. Why is this expected to converge? The full proof is beyond our scope, but I can offer a quick sketch of how it works. This update rule is also called the Bellman update operator, or the Bellman backup operator, and the word "operator" is interesting. Recall how we visualized functions as points in space — we've done it several times before. Suppose we have some value function V^Pi, represented as a point in a space with one axis per state, s_1 through s_|S|: the coordinate along the s_1 axis is V^Pi(s_1), the coordinate along s_2 is V^Pi(s_2), and so on — the same way we've been viewing functions as points. Now plug that value function into the right-hand side of the update and you obtain a new function as output: take V^Pi, run it through the operator, and call the output B(V^Pi) — the result of applying the Bellman backup operator. There is a proof, beyond the scope of this course, that this Bellman operator is what is called a contraction mapping. What a contraction mapping means is: take any two input points — call them V^Pi_1 and V^Pi_2 — and apply the Bellman backup operator to both of them, obtaining the two outputs B(V^Pi_1) and B(V^Pi_2).
The Bellman update operator is called a contraction mapping because, for any two inputs to the operator, the corresponding outputs are closer to each other than the two inputs were. This can be shown mathematically with a few simple steps: apply the operator and you can prove that the Bellman backup operator is a contraction mapping. And when something is a contraction mapping, there exists something called a fixed point. The intuition: run any two points through the Bellman operator, and the two output points will always be closer together than the two input points — and that holds for any two points in the entire space. When you have such a contraction mapping, the point toward which everything converges is called a fixed point, and it's called that because once you reach it and apply the Bellman operator, you get the same point back again. This fixed point is exactly V*, the optimal value function. So: start with any initialization — we started at 0, but it doesn't have to be 0; start anywhere you want — and keep applying the Bellman backup operator recursively: apply the operator to the initialization, get an output, apply the operator to that output, and so on. You follow a path that eventually converges to V*, and the reason it converges to V* is that the Bellman backup operator is, technically, a contraction mapping: it contracts the entire space toward a single fixed point. Anyway, the proof is beyond our scope, but it's good to have the intuition that the right-hand side of the update corresponds to this contraction mapping B, under which the entire space converges toward V*. So that is value iteration: start with some initialization — we know all the P values and all the R values — and keep applying the update over and over until it converges; when it converges, you have reached V*, the fixed point, after which applying the Bellman operator leaves the output unchanged. Yes, question? Is the Bellman operator defined to be one iteration of that last equation, or applying it until convergence? The Bellman operator takes you one step. One application of that update for all the states is one application of the Bellman operator. And anywhere in the state space, it's almost like a convex problem? Yes, exactly — in the sense that no matter where you start, repeatedly applying the Bellman operator keeps hopping you toward the same fixed point. All right, so that's value iteration. The contraction proof is beyond the scope, but you do need to know the value iteration algorithm itself.
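Here is a minimal sketch of the algorithm, under the same assumptions as before (known P, R, Gamma; names mine). It updates all states in one shot from the old vector, which corresponds to the "synchronous" variant mentioned next:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iters=10_000):
    """Repeatedly apply the Bellman backup V <- R + gamma * max_a (P_sa . V).

    P has shape (n_states, n_actions, n_states); R has shape (n_states,).
    Returns an approximation of V*.
    """
    V = np.zeros_like(R, dtype=float)        # step 1: initialize V(s) = 0
    for _ in range(max_iters):
        Q = P @ V                            # Q[s, a] = sum_{s'} P[s,a,s'] V[s']
        V_new = R + gamma * Q.max(axis=1)    # one Bellman backup for all states
        if np.max(np.abs(V_new - V)) < tol:  # the contraction shrinks this gap
            return V_new
        V = V_new
    return V
```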
And then there is another algorithm, called policy iteration. In the notes, in fact, there are two variants of value iteration, called synchronous and asynchronous. In the synchronous variant, we calculate the new left-hand sides for all states into temporary variables and replace them in one shot; in the asynchronous variant, we update one state's value and immediately use the updated value when updating the others — a slight variant. Now we come to the second algorithm, policy iteration, which takes a different approach: here we start with the policy. Step 1: initialize Pi randomly. As I said, in RL you have these two central concepts, the value function and the policy; value iteration kept updating and hopping the value function, while policy iteration starts with the other central concept. Step 2: repeat until convergence: (a) set V := V^Pi, and (b) for each state s, set Pi(s) := argmax over a in A of the sum over s' of P_{sa}(s') * V(s'). So: start with some initial guess for Pi. We saw that, given a Pi, we have a closed-form expression for calculating V — you just solve a linear system of equations. So compute the value function of the current policy, and then, using that value function, update your policy so that it takes the steps maximizing the expected future rewards under the current value estimate. That gives a new policy; using the new policy, update V; using the updated V, update Pi; and so on. Now, what's the difference between the two methods? A few observations. In value iteration, for each state we perform a max over actions of a sum over states, so one sweep costs roughly O(|S|^2 * |A|). In policy iteration, step (a) involves inverting an |S| by |S| matrix — remember V = (I - Gamma * P^Pi)^(-1) * R — which is roughly O(|S|^3), and step (b) again costs on the order of the state space times the action space. So for large state spaces, policy iteration can be much more expensive than value iteration. However, with policy iteration, after a finite number of steps we get the exact optimal value, whereas with value iteration we keep getting closer and closer but never reach the exact V*. Yes, question? Are we converging in terms of V or in terms of the policy? You can use either condition; it is easier to check in terms of the policy, because the policy is discrete. So how do we know it will never change in the future?
Couldn't it change if you ran more iterations? With policy iteration, once your policy converges, it stays converged. But how do you know whether it has converged? The way to check convergence here is: first make a copy, Pi', of the full policy; perform the update to get the new policy; and check, element-wise, whether Pi'(s) = Pi(s) for every s. If it does, then your policy hasn't changed at all after a full update, and you can declare convergence. Could it take many iterations? Yes, it can take some number of iterations, but there is a proof — which we won't go over — that this converges in a finite number of steps. Value iteration, by contrast, with its contraction mapping, takes you closer and closer, to an arbitrary degree of precision of your choice, but you may never actually reach V*. With policy iteration, once Pi converges, the converged Pi is Pi*, and once you have Pi*, you can recover the exact V*. Yes, question? If we use value iteration, can't we find the converged policy? If you use value iteration and converge to some V — let's not call it V*, let's call it V' — then you can use V' and recover the corresponding policy by plugging V' into the value-to-policy equation. That equation takes you from a value function to a policy, and the other takes you from a policy to a value function. So in practice we use both? In practice, whether you use policy iteration or value iteration, if the end goal is to recover the optimal policy, you end up using both equations: in value iteration, you apply the backup over and over and, at the end, extract the policy according to the argmax equation; in policy iteration, the policy-to-value direction is used in step (a), and in step (b) you extract the policy according to the updated value function. Yes, question? Can you run value iteration partway and then do one policy step to get V* exactly? So the question is whether we can run value iteration for some number of iterations and then switch to policy iteration. It has to converge first. Once it has mathematically converged to the right answer, performing one policy-iteration step and extracting V* gives you the same V* — both algorithms take you to the same V* and the same Pi*. But otherwise you don't have the exact V: with policy iteration, after a finite number of steps you get the exact V* — as soon as your policy stops changing, the corresponding value function is exactly V* — whereas with value iteration you get arbitrarily close to V*, depending on how many steps you run, but you may never get the exact V* by only following that approach.
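Here is a minimal sketch of policy iteration, reusing the exact evaluation and greedy extraction from the sketches above (same illustrative array shapes; the discrete convergence check is exactly the element-wise policy comparison just described):

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, max_iters=1000):
    """Alternate exact evaluation (a) and greedy improvement (b) until pi stops changing."""
    n_states, n_actions, _ = P.shape
    pi = np.zeros(n_states, dtype=int)            # arbitrary initial policy
    for _ in range(max_iters):
        # (a) exact evaluation: V = (I - gamma * P_pi)^{-1} R
        P_pi = P[np.arange(n_states), pi]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
        # (b) greedy improvement: pi(s) = argmax_a sum_{s'} P[s,a,s'] V[s']
        pi_new = np.argmax(P @ V, axis=1)
        if np.array_equal(pi_new, pi):            # policy unchanged => converged
            return pi, V                          # pi is pi*, V is the exact V*
        pi = pi_new
    return pi, V
```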
Yes, question — can you explain why we should expect this to converge? Why does the V update lead to convergence? The intuition is right here: think of the Bellman update operator as a contraction mapping. The proof that it is a contraction mapping is beyond our scope, but it is actually pretty easy to show — you can try it on your own, and if you're interested, I can post a proof on Piazza. Once you prove it, you get the intuition: contraction mappings have a fixed point, and if you keep repeatedly applying the operator — to the input, then to its output, then to that output — it takes you toward the fixed point. That is exactly why this converges. Next we're going to cover a few more simple extensions of policy iteration and value iteration, including something called the fitted value function, and a quick overview of the larger reinforcement learning field as a whole; we won't be getting into other methods in depth. For your homework, what we cover through the middle of next Friday will be sufficient for the reinforcement learning part of whatever comes in your homework; the rest of the reinforcement learning material will be just for your information. All right, thanks.
Okay. Welcome back, everyone. Let's get started. Welcome to Lecture 10. The topic for today is neural networks and backpropagation, under the umbrella of deep learning. Before we jump in, a quick recap of what we covered in Lecture 9 last Friday, which was all about Bayesian methods. We saw two kinds of Bayesian methods, parametric and non-parametric, with an example of each. In the parametric case, we saw Bayesian linear regression. The general approach of Bayesian methods for supervised machine learning in a parameterized setting is to have a parameter Theta and assume it is distributed according to a prior distribution, P(Theta). This assumption is what distinguishes Bayesian methods from frequentist methods: we assign a prior probability to the unknown parameters. Then, again in the supervised setting, we assume y comes from a distribution P(y | Theta, x), also called the likelihood, and from the observed training data (x, y), we construct the posterior distribution P(Theta | x, y) — Theta hat, now a random variable — using Bayes' rule, which is where the name "Bayesian methods" comes from. There is no gradient descent and there is no maximum likelihood; we just condition on the observed data and obtain the posterior distribution. Then, using the posterior, we construct the posterior predictive distribution, which is used for making predictions on new, unseen examples: P(y* | x*, x, y), where (x, y) is the training set you observed in the past and x* is the new input for which we need to predict y*. We construct a distribution over y*, given the new input and all the past training data — that is the posterior predictive distribution. (There was also a Piazza post describing these steps in detail.) We saw that this distribution can be expressed as an expectation of P(y* | x*, Theta hat), where Theta hat comes from the posterior distribution. The interpretation is that every possible setting of the Theta vector gives a different predictive model, and the posterior predictive distribution takes the average across all the models that could possibly exist — infinitely many — in a weighted average whose weights are decided by the posterior distribution. This kind of averaging across models, where we don't commit ourselves to any one single model but hedge our bets across all possible models, weighted according to the posterior, is what makes Bayesian methods resistant to overfitting: there are no extra steps you need to take to keep your models from overfitting. We'll go into more detail about this, probably in Wednesday's or Friday's lecture.
Again: this is model averaging, where we don't commit to one single model — which could be swayed by noise in your data — but instead consider all the infinitely many possible models you get for different values of Theta, and take a weighted average of their predictions, with weights decided by the posterior distribution. That is the parametric setting, where a Theta exists; the whole exercise revolves around Theta, which is why we call these parametric Bayesian methods. In the non-parametric approach, we assume y = f(x) + epsilon, where the noise epsilon comes from a normal distribution, and we assume there is a prior distribution on f — just as there was a prior distribution on Theta, here we place a prior directly on f, and that prior is a Gaussian process. This is called a Gaussian process prior, and its parameters, so to speak, are a mean function and a covariance function; here the mean function is written as 0, meaning it evaluates to zero everywhere. The way to think of a Gaussian process uses the same analogy as going from vectors to functions: just as a function is an infinite extension of a vector, a Gaussian process is an infinite extension of a multivariate Gaussian distribution. For vectors and functions, the relation was that the indices of the vector become the domain of the function; similarly, the indices of your multivariate Gaussian become the domain of the functions over which we define the GP. We then saw a few properties of multivariate Gaussians. The normalization property: if you integrate the density of the multivariate Gaussian over everything, it must evaluate to 1. The marginalization property: if you want to marginalize b out of a joint Gaussian over (a, b), all you keep is a with mu_a and Sigma_a — you simply wipe out all the rows and columns corresponding to the variables being marginalized. It's that simple; that's a special property of Gaussians that distributions in general don't have, and it makes Gaussian distributions really nice to work with. Then we saw the conditioning rule. The multivariate version appears complex at first, but in the two-dimensional case it looks like this: if a and b are jointly Gaussian, then a | b is again normally distributed, with a mean obtained as follows. Given the value of b, first standardize it — subtract its mean and divide by its standard deviation, like a z-value — then scale that z-value by the correlation coefficient, and then reverse the transform back into a's units by multiplying by a's standard deviation and adding a's mean. That's how observing b updates the mean of a. And the variance of a given b is the variance of a scaled by 1 - rho^2, where rho is the correlation coefficient.
What this means is: if rho = 1 — if a and b are perfectly correlated — then by observing b, the variance of a reduces to 0 (since 1 - 1 = 0), which means you know exactly what a is. And if the covariance between a and b is 0 — no correlation whatsoever — then the variance of a given b is still sigma_a^2: there is no reduction in uncertainty from observing b. Using these properties, we constructed Gaussian process regression. Analogously, we define f and f*, the function f evaluated at the training inputs and at the test inputs, and these follow a multivariate Gaussian distribution — that's the defining property of Gaussian processes: any finite subsample of the process is a multivariate Gaussian — with covariance given by the kernel function evaluated at every pair of points. We get this by marginalizing out of the GP everything we have not observed, which condenses the Gaussian process into a multivariate Gaussian evaluated only at the test and train points. The assumption was y = f + epsilon, and by the addition property of Gaussians — which I forgot to recap — f + epsilon has the same mean, and because f and epsilon are independent, the covariances add up; since epsilon has a diagonal covariance matrix, you only add along the diagonal, and you get a distribution for y given the x's. Then, using the conditioning property, we construct the posterior predictive distribution, which evaluates to a closed form. It might look complex, but the whole expression for the variance is exactly the multivariate version of the two-dimensional conditioning rule — it just looks more complex but is essentially the same — and similarly for the mean. And that is all a Gaussian process is: it's very simple, you perform one conditioning step, and that's it. Any questions about GPs before we move on to deep learning? Yes. [inaudible] You said they will be prone to overfitting? They will not be prone to overfitting. Is that because it's parametric, or isn't it? In general, Bayesian methods are considered not prone to overfitting. The intuition is that overfitting happens when you commit yourself to just one model given your training data — a model that may be swayed by noise — whereas here we hedge our bets by not committing to any single model: we consider all the models that could possibly exist and take the average across them, weighted by the posterior distribution. But couldn't the variance still be very high, tempting the fit to run exactly through the data — isn't that what overfitting is? We'll go into the details of that on Wednesday and Friday, so I'll postpone that answer for a couple more days. But in general, the intuition is that with Bayesian methods, at least at the theoretical level, the concept of overfitting does not exist.
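As a compact illustration of that single conditioning step, here is a minimal sketch of the GP posterior predictive computation. The RBF kernel and all variable names here are my choices for the sake of the example, not fixed by the lecture:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale**2))

def gp_posterior(X, y, X_star, noise_var=0.1):
    """One Gaussian conditioning step: mean and covariance of f* | X, y, X*."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))   # covariance of noisy y
    K_s = rbf_kernel(X, X_star)                         # train-test covariance
    K_ss = rbf_kernel(X_star, X_star)                   # test-test covariance
    mean = K_s.T @ np.linalg.solve(K, y)                # posterior mean
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)        # posterior covariance
    return mean, cov
```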
All right. So today's topic is neural networks and deep learning. To motivate them: the models we've seen so far — linear regression, logistic regression, GLMs, and so on — have all been linear. One way we introduced nonlinearity was through kernels, and also through feature maps, and we saw that kernels and feature maps are tightly related: every feature map is associated with some kernel. Neural networks are another way. The high-level picture to have in mind: when we defined a feature map — for example, in the last question of Problem Set 1, where we explored different feature maps on a simple regression task and got very flexible hypotheses — we used something like a polynomial feature map, phi(x) = (1, x, x^2, x^3), maybe with features like sin(x) added. The responsibility for coming up with such a feature map lay with the designer: the data scientist training the model had to use intuition and creativity to decide which features to include and which to leave out — and that is hard work. At a high level, what neural networks do for you is learn the features themselves, automatically. In the models we've seen so far, given features, we constructed linear models for regression, classification, and so on; with neural networks, we learn not only that model but also what the right set of features is. This is probably best explained with an example. Suppose we have an input x in R^d, represented as a set of d nodes, corresponding to x_1 through x_d. The way we constructed logistic regression, for example, was with a parameter vector — and here, as with SVMs, we'll switch notation and use w and b for weights and biases — w_1, w_2, ..., w_d plus a bias, computing z = sum from i = 1 to d of x_i * w_i + b. You can think of this as Theta transpose x, where Theta is the collection of w and b, and x is the input vector along with the intercept term 1. So z = x transpose w + b, and on the other side we have a = g(z), where g is some nonlinearity. For example, in the case of logistic regression, g(z) = 1 / (1 + e^(-z)).
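As a tiny illustration, here is what this single unit — logistic regression viewed as one neuron — looks like in code. A minimal sketch; the function names are mine:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b):
    """One neuron: z = w . x + b, then a = g(z) with g = sigmoid.

    x and w have shape (d,); b is a scalar. This is exactly the
    logistic regression hypothesis, viewed as a single unit.
    """
    z = np.dot(w, x) + b
    return sigmoid(z)
```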
So this is logistic regression, just visualized in a different way, as nodes and connections — and this is the interpretation we're going to use to extend things further and construct neural networks. We used to have a parameter vector Theta; instead, we now visualize the parameters as connections from one node to another. In neural network terminology, these nodes are called neurons. To clarify why they're called that: early in the development of neural networks, this kind of design was inspired by the then-current theory of how neurons work. You'll see a lot of literature claiming that neural networks mimic neurons, and that's just not the case: neural networks were inspired by how people thought neurons worked back then, but in no way whatsoever do neural networks mimic neurons or mimic the brain. Keep that in the back of your mind — "neuron" is arguably a poor choice of word; you could call them nodes and that would be just fine — but we call them neurons, with the understanding that this is not how actual neurons work. All right. So there is a nonlinearity that takes us from z to an output a — we call a the output of the nonlinearity — and together this entire assembly is one neuron. In this particular configuration, x_1 through x_d is fixed; there is no feature map, and everything here is exactly logistic regression, with no difference whatsoever. If this were logistic regression, we would call the output a our y hat; given the true label y, we would construct the negative log-likelihood loss L(y, y hat) = -[y log y hat + (1 - y) log(1 - y hat)], express y hat as a function of the w's and b's, and perform gradient descent. That was logistic regression. Any questions on why this is the same as logistic regression? Yes, question? Right — this is logistic regression viewed in terms of connections and neurons, so to speak.
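And here is a minimal sketch of that loss, the negative log-likelihood just mentioned (the clipping for numerical safety is my addition, not from the lecture):

```python
import numpy as np

def logistic_loss(y, y_hat, eps=1e-12):
    """Negative log-likelihood for one binary label y in {0, 1}."""
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```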
What we do next is take this network view and start growing the network. Again we have x_1 through x_d as the input layer, and in place of one neuron, let's have a collection of neurons. Before, one set of weights connected the input x's to the single neuron; now let's add a second neuron with its own set of weights — as if we were training two different logistic regression models in parallel. That's the picture to have: one set of w's connecting the x's to the first neuron, another set connecting them to the second, each with its own bias term. Likewise there will be a z at each neuron, passed through the nonlinearity to get an a, where each z is the linear combination of the x's with that neuron's w's, plus its b. The same thing over here: call the biases b_1 and b_2, the linear outputs z_1 and z_2, and the activations a_1 and a_2. From z to a, we apply the function g, the nonlinearity: take the linear combination, get a scalar, run it through the nonlinearity, get another scalar. And we can continue: add a z_3 and an a_3 with their own set of weights and bias b_3, so that z_3 = w_1 x_1 + w_2 x_2 + ... + w_d x_d + b_3, and then apply the nonlinearity to get a_3. And not only can we add multiple models like these side by side — we can then start nesting them: the outputs a_1, a_2, a_3 of these three units can become the inputs of another logistic regression, with its own z and a; similarly a second nested unit taking the first layer's outputs as input; and so on, nested even further. Yes, question? [inaudible] We'll come to that — the question was, wouldn't they all learn the same model, just copies of each other? Potentially they can, and we'll address that shortly. Was there another question? [inaudible] We'll come to that too. For now, we're just constructing a network using a building block we've seen before — logistic regression: create multiple copies of it, feed their outputs as inputs to nested copies, and so on. Such a network can be arbitrarily deep and arbitrarily wide, and it can have different widths at different layers. Let's start giving the parts names. This collection of input nodes is the input layer; the neurons at the end form the output layer; and the ones in between are hidden layers — hidden layer 1, hidden layer 2, and so on. The overall network here you could call a three-layer network: an output layer plus hidden layers 1 and 2 — generally we don't count the input layer; input is just input. Soon we'll attach more precise notation to each of these weights, biases, and activations, but first it's important to get the general intuition of how neural networks are constructed. This is a very simple example. There are lots of choices for the g function — you don't always have to use the logistic function — but the general idea is more or less the same: take a linear model with some activation. In GLMs we saw that different kinds of outputs call for different g's; similarly, there is a wide variety of activation functions we can use here — g is also called the activation function. Using this linear-model building block, we assemble a big network as though out of Lego blocks: place one block here, one here, one here, then start building another layer on top, and so on. And now, once we've constructed a network with some number of layers —
And now, once we construct a network with some number of layers, each layer having some particular width, then finally, at the output layer, the a of the last layer will be considered our y hat. We take the true label y and construct some kind of loss. Previously we would take the output of the single layer as y hat and apply the loss to it directly; now we nest these building blocks into a more complex network, constructed so that eventually there is a single output. That output is the prediction made by the network, and that is what we apply the loss to. This is the big picture to have in mind: deep learning is basically adding depth by cascading simple building blocks, with nonlinearities in the middle. Any questions about the big picture before we start defining terminology and putting specific numbers on things? Yes, question. [inaudible] Yes. The question is: should the activation function used in the first layer be the same as, say, in the second layer? In theory they can be different; you can even use different activation functions for different neurons, so it need not be the same within a layer. In practice, we generally use the same activation function throughout the network, but that's a convention, not a requirement. Any questions? All right. Now let's start. Yes, question. [inaudible] Yeah, we're going to come to that. The question was: if we have a loss function here, what are we going to update? The answer is, once we apply a loss function, we need to calculate gradients with respect to every single connection and bias. The goal of training is: first come up with a network architecture; then take your data, feed in your input, get some predicted output, calculate the loss, and then calculate the gradient of that loss with respect to every connection, and take a gradient descent step to minimize the loss along that gradient. [inaudible] Yeah, we'll come to that: generally stochastic gradient descent is the one most commonly used, in fact a variant of it called mini-batch stochastic gradient descent. We'll come to that. All right, let's give some precise terminology so we are clear about what we are referring to. For the connections we'll use the letter w, so w means a connection, and b will be a bias. [NOISE] And z will be w dot something plus b, so z is always linear in w, linear in b, and linear in whatever we multiply w by. a will always be g of the corresponding z. To distinguish the different z's, a's, b's, and w's, we use the notation where a superscript identifies the layer and a subscript identifies which node in that layer we are referring to. The a's, the z's, and the b's are each one vector per layer, whereas the weights, the connections, form a matrix.
These are vectors: a vector a, a vector z, and a vector b, and you have one such vector per layer, one vector of a's, one of z's, and one of b's. But we have one matrix W per layer; W is also called the weight matrix, and that's why it has two indices, i and j. The very first input, x, will also be called a^[0]: it's like the output of a 0th layer. You bootstrap the hierarchy by considering the inputs to be the output of the 0th layer; we just take the x's as given and think of them as the activations coming out of layer 0. By z^[1], the z vector with superscript square bracket 1, we mean the z vector of the first layer, and within it consider z^[1]_1, the first element. That is, z^[1]_1 = sum over j of W^[1]_{1j} x_j + b^[1]_1. So W is now a matrix, and a matrix has two axes: the number of rows is the number of neurons in that layer, and the number of columns is the number of neurons in the previous layer. So W^[l] is the weight matrix of layer l, with rows equal to the number of neurons in the l-th layer and columns equal to the number of neurons in the (l-1)-th layer. That's the W matrix. And z^[1]_1 is therefore the first row of the weight matrix dot-producted with the a^[0] vector, that is, with the input; that's exactly doing w_1 x_1 + w_2 x_2 + w_3 x_3 + ... + b_1 for you. Similarly, z^[1]_2 = (W^[1]_2)^T a^[0] + b^[1]_2, where W^[1]_2 is the second row; this time I'm writing it as a transpose to simplify. It's just a dot product, though in this world it's essentially never written as an explicit dot product. And so on; that gives us the whole collection. Similarly, a^[1]_1 = g(z^[1]_1). The activation function is applied element-wise: for each z value computed by the dot product and sum, the g function is applied separately to get the corresponding a value. [NOISE] And so on. Once we have constructed the a^[1] vector, a^[1] is the output of the first layer: the vector that comes out of the first layer is a^[1], and its dimension equals the number of neurons in the first layer. Is that clear? Okay, and we can continue this style of nesting even further: we will then have z^[2]. Question? Yes, question? So you were saying that z^[2] is written in matrix form? So W is a matrix, yes, and then vector notation. My question is that a is essentially agnostic of which layer you're at, I mean not layer but which... [NOISE] Good question. Is this a the one you're referring to? Yeah. So a^[1]_1 means: in the output vector of the first layer, the first element, which equals the activation function applied to the first element of the z vector of the first layer. So the z vector is for the whole layer, not just one of the nodes? Yeah; let me answer that properly.
Let me write this in vector notation. Here we were calculating element by element: z^[1]_1, z^[1]_2, z^[1]_3. If you want it in vector form, you would write z^[1] = W^[1] a^[0] + b^[1]. Now z^[1] is the entire z vector, equal to the W matrix times the input vector, plus the element-wise addition of the b vector, the bias vector. So z is just a linear combination of w's and a's plus the b's. So a is a probability? I would say don't think of a as a probability. If you take it through a sigmoid you get a value between 0 and 1, but for now don't treat it as a probability; just think of it as the output of some nonlinearity. If g is the sigmoid, then a will be between 0 and 1. Yes, good question. Any other questions? All right. So this is a compact way of writing those element-wise operations. The full z vector, the vector of z's before applying the nonlinearity, is of size 3 here and equals a matrix W times the input. In this case the matrix W^[1] is 3 x d, where d is the dimension of the input: the input dimension gives the number of columns, and 3 is the number of outputs, the number of neurons in this layer. So W^[1] is 3 x d, a^[0] (which is just the x vector) is d-dimensional, and z is therefore 3-dimensional. Is this clear? Then we apply the nonlinearity to this z; on the left half of the circles you have z, and you apply the nonlinearity to jump over to the right half of the circle and get the a's. The g function is applied separately, element-wise. Yes, question? Sorry to bother you, but it looks like the 3 is the number of input nodes? In this case, 3 is the number of neurons; the number of input nodes is d. Okay, but then wouldn't the number of neurons also be d? No: the number of neurons here we've chosen to be 3, and the W matrix takes you from d to 3. You multiply the W matrix by x, which is d-dimensional, and because the number of rows in W is 3, you get a vector of dimension 3. To that you add a b vector, also of dimension 3, so you get a z vector of dimension 3. Any other questions? The thing to keep in mind: whenever we saw a subscript, at least in the case of logistic regression, as in x_j, we thought of it as a scalar, where x was in R^d. When we see a superscript, however, we are still referring to the vector or scalar or whatever it is, but the superscript only identifies which layer we are at. So z^[1] is not the first element of z; it means the z vector of the first layer. The superscript in square brackets refers to the layer number. [NOISE] If something is not clear, please stop me.
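A small sketch of the vectorized form, with the shapes spelled out (the sizes d = 5 and 3 neurons are arbitrary choices for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, n1 = 5, 3                      # input dimension d, 3 neurons in layer 1
a0 = np.random.randn(d)           # a^[0] is just the input x
W1 = np.random.randn(n1, d)       # rows = neurons in this layer,
                                  # columns = neurons in the previous layer
b1 = np.zeros(n1)

z1 = W1 @ a0 + b1                 # z^[1] = W^[1] a^[0] + b^[1]  -> shape (3,)
a1 = sigmoid(z1)                  # g applied element-wise       -> shape (3,)

# Row i of W1 dotted with a0 gives the same z as the element-wise formula:
i = 0
assert np.isclose(z1[i], W1[i] @ a0 + b1[i])
print(z1.shape, a1.shape)         # (3,) (3,)
```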
It's super important that you understand this very well, because once we get to backpropagation, we're going to use all of these notations very liberally. So if there's any question, feel free to stop me as many times as you want, okay? So now, a^[1], loosely speaking, we can write as g(z^[1]), where g is the nonlinearity, for example the sigmoid, applied element-wise to each element of the input vector. Then similarly z^[2] = W^[2] a^[1] + b^[2], which gives us a^[2] = g(z^[2]), and so on; then z^[3] = W^[3] a^[2] + b^[3], and a^[3] = g(z^[3]). You can take this as deep as you want, and this depth is basically the reason we call it deep learning: the deeper your network, the "deeper" your deep learning, so to speak. [LAUGHTER] The number of levels, the number of degrees of nesting, is called the depth of your network, and this is what distinguishes it from simple linear models, where there was essentially no depth at all: just one single layer whose output was the output of your hypothesis. And we can go on. Now the question is: why does g need to be a nonlinear function? What if g were the identity, that is, g(z) = z, so a = z? What would happen? Yes, question. [inaudible] Exactly. The reason it's important to have nonlinearities is this. To simplify the argument we'll ignore the b terms (you can include them and the same argument holds); assume the b's are zero. If g were the identity, then a^[3] = g(z^[3]) = z^[3]. But z^[3] = W^[3] a^[2], and a^[2] = g(z^[2]) = z^[2] since g is the identity, so this is W^[3] z^[2]. But z^[2] = W^[2] a^[1], so we get W^[3] W^[2] a^[1]. Similarly a^[1] = g(z^[1]) = z^[1], so this becomes W^[3] W^[2] z^[1] = W^[3] W^[2] W^[1] x. And that is a matrix times a matrix times a matrix, which we can call some W-tilde, so the whole network computes W-tilde times x. So if g is not a nonlinear function but a linear one, the entire network can be represented by a single matrix, which means we haven't gone beyond the scope of linear models. Any such network can be represented by a linear model if g is not nonlinear. So g being nonlinear is essential for depth to be meaningful; otherwise, no matter how many levels deep your network is, it can always be collapsed into a single matrix and will effectively be as expressive as a single-layer network.
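This collapse is easy to verify numerically; here is a tiny sketch (dimensions arbitrary, biases omitted as in the argument above):

```python
import numpy as np

# With g = identity (and biases set to zero, as in the argument above),
# a three-layer network collapses to a single matrix W_tilde.
d = 4
W1 = np.random.randn(5, d)
W2 = np.random.randn(3, 5)
W3 = np.random.randn(1, 3)
x = np.random.randn(d)

deep = W3 @ (W2 @ (W1 @ x))       # layer-by-layer "network" with g(z) = z
W_tilde = W3 @ W2 @ W1            # a single 1 x d matrix
print("identical:", np.allclose(deep, W_tilde @ x))   # True
```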
Yes, question. But you said that the number of nodes in the three layers can be different, right? So can you still collapse it into a single matrix even though the widths change across layers? Yes. The question is: even though we have all these matrices of different shapes, why is the product still effectively one matrix? One way to think about it is that the W matrices in your network, whatever their widths, form a decomposition of the W-tilde matrix: you take W-tilde and decompose it into a product of other matrices, and those are your weight matrices. Yes, question. [inaudible] We're going to come to that. You mean, why is g the sigmoid? So, g(z) clearly should not be just z, because then there is no depth in the network. Then the question is, what can g be? g(z) can be any nonlinear function, but in deep learning we choose nonlinear functions that are also monotonic. So g(z) = 1 / (1 + e^(-z)), the sigmoid family, works. g can sometimes be tanh: tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z)), the hyperbolic tangent. It looks very similar to the sigmoid, except that instead of going from 0 to 1 it goes from -1 to +1; just a different choice. And a very common choice of g is the ReLU, the rectified linear unit: ReLU(z) = max(z, 0). What does it look like? For negative z it outputs 0; for positive z it leaves the input as it is. And this is a nonlinearity: if g is the ReLU function, you cannot collapse the network into a product of the W's. These are probably the three most common choices of activation function used in practice. So this is pretty much how we construct a neural network of this kind of architecture. This kind of architecture is also called a fully connected network. The reason is that from each layer to the next we have a full bipartite connection: every node in one layer is connected to every node in the next (or previous) layer. That's why this architecture is called a fully connected neural network. The number of layers you want and the number of neurons in each layer are configuration choices, and they are also called hyperparameters.
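For reference, the three activations written out directly from their formulas (a sketch; NumPy also ships np.tanh, spelled out explicitly here to match the board):

```python
import numpy as np

def sigmoid(z):
    # Maps R to (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Maps R to (-1, 1); same S-shape as the sigmoid but centered at 0.
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

def relu(z):
    # max(z, 0): zero for negative inputs, identity for positive ones.
    return np.maximum(z, 0.0)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
```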
Yes, question? [BACKGROUND] Yep. So the question is: do the number of layers and the number of neurons relate to overfitting and underfitting? [BACKGROUND] Yeah. So how do we choose the right number of layers, the right number of neurons per layer, or, in general, which activation function to use? The answer is almost always cross-validation, and the answer will be cross-validation for a lot of future questions as well; that's just the nature of machine learning. You have lots of degrees of freedom to tune lots of different hyperparameters, and the right value of a hyperparameter is generally not obvious from the problem description or from just looking at the data. If it were obvious, there would be a theory for it, and there isn't much theory for deciding the right number of layers, or the right activation function, just by looking at the data. You can use intuitions you develop over long practice, but more often than not, even the world's best experts just tune these through cross-validation: hold out a validation set, fit the network on the training data, and see how well it performs on the validation data. [BACKGROUND] What do you mean by alpha? Oh, the learning rate? [BACKGROUND] Yeah, and in fact you still need to choose alpha as well; that choice hasn't gone away. So you need to choose alpha, the number of layers, the number of neurons per layer, and which activation functions to use. There are a lot of hyperparameters to tune in a neural network. And there are many strategies for doing the cross-validation. If it were just the learning rate, you could do something like a binary search, sweep a region, and find a good learning rate. But here you have a whole hypercube of hyperparameters, and each point in the hypercube is some configuration. One way people search it is what's called random search. We're actually going to cover strategies for tuning hyperparameters later this week, but in general, the answer to how you choose these values is cross-validation: come up with a possible configuration, see how well it works; try another configuration, see how well it works; and keep trying until you feel satisfied. Not a very satisfying answer, but that's exactly how it is done in practice. Yes, question? [BACKGROUND] We're going to come to why we need a learning rate; we still haven't seen how we optimize this yet, we're still talking about the network architecture. [BACKGROUND] The width? Yeah. [BACKGROUND] Yeah: in general, if the width of the network becomes extremely wide, or the depth becomes extremely deep, it's absolutely possible to overfit on your data. Which is why you do cross-validation: you measure performance on a validation set, and if your training performance is very good but the validation performance is not so good, you have overfit. Yes, question? [BACKGROUND] [OVERLAPPING] Is it something that we set? Yeah, so the question is: are the depth and the width something the model learns on its own? The answer is no, which is why they're called hyperparameters. In terms of terminology: parameters are the variables the model learns on its own, using gradient descent or whatever; hyperparameters are the variables that you, the human being building the model, choose.
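As a sketch of what the random search just described looks like in code: train_and_validate below is a placeholder standing in for "fit the network with this configuration on the training set, then score it on the validation set", and the hyperparameter ranges are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config():
    # Sample one point from the "hypercube" of hyperparameters.
    return {
        "n_layers":   int(rng.integers(1, 5)),
        "width":      int(rng.integers(8, 129)),
        "activation": rng.choice(["sigmoid", "tanh", "relu"]),
        "alpha":      10.0 ** rng.uniform(-4, -1),   # learning rate
    }

def train_and_validate(cfg):
    # Placeholder: in reality, fit the network with this configuration
    # on the training set and return its score on the validation set.
    return rng.random()

best_config, best_score = None, -np.inf
for _ in range(20):                      # 20 random trials
    cfg = sample_config()
    score = train_and_validate(cfg)
    if score > best_score:
        best_config, best_score = cfg, score
print(best_config)
```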
So the number of layers is something you decide before you start running your gradient descent; that's called a hyperparameter. So for cross-validation you just try different configurations? Exactly: for cross-validation you try different hyperparameter settings and see how well each configuration works on the validation set. Yes, question? Can you not just do something like [inaudible]? I'm not going to go into that right away. [LAUGHTER] There is a lot of research on how to efficiently, or even automatically, learn what the right hyperparameters should be, and that's well beyond our scope right now. But yes, there is active research on automatically learning the hyperparameters. Yes, question. [BACKGROUND] Yes. The question is: should the activation functions be bounded? Because tanh and the sigmoid are bounded but the ReLU is not. There are good reasons to keep it bounded and good reasons not to; maybe we'll cover that later today, or if not, in the future. We're skipping ahead a bit, but in general, [NOISE] the problem with a sigmoid-like function is that as your z value goes further and further from 0, the gradient of the sigmoid is pretty much 0. Which means, as we'll see later today, that if your gradient becomes 0, your learning effectively stops: when you do a gradient update with a zero gradient, nothing changes. [BACKGROUND] Yeah, I'm going to postpone that for another 20 minutes or so; when we talk about backpropagation, that will be a better time to discuss it. All right. So this is how we construct a fully connected network, and now that we've constructed it, [NOISE] we'll see how we train it. [NOISE] And that brings us to backpropagation. [NOISE] So, to recall: a^[0] = x, and z^[1] = W^[1] a^[0] + b^[1], a^[1] = g(z^[1]); z^[2] = W^[2] a^[1] + b^[2], a^[2] = g(z^[2]). We see this pattern of alternating between z and a, z and a, all the way until a^[L] = g(z^[L]), where L is the number of layers in the network. Then y hat = a^[L]. And our loss, script L(y, y hat): for now we assume a classification problem rather than regression. If it were regression, you would see the squared error here, (y - y hat)^2. For classification, it is the logistic loss, -[y log y hat + (1 - y) log(1 - y hat)]. Yes, question? [inaudible] Should it be L - 1? [inaudible] If there are L layers, should this be L - 1? No; we don't count the input layer. Right. So this is our loss, and in order to train the network, our goal is to do something like this: for l in 1, ..., L, update W^[l] := W^[l] - alpha * dL/dW^[l].
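Putting that recap together, a minimal sketch of the forward pass and the loss it feeds (layer sizes arbitrary; sigmoid used everywhere for simplicity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, Ws, bs):
    # Alternate z = W a + b and a = g(z), layer after layer.
    a = x                              # a^[0] = x
    for W, b in zip(Ws, bs):
        z = W @ a + b
        a = sigmoid(z)
    return a                           # a^[L] = y_hat

def loss(y, y_hat):
    # Logistic loss: the negative log-likelihood for binary classification.
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# A 2-hidden-layer network: dimensions d -> 4 -> 3 -> 1.
d = 5
Ws = [np.random.randn(4, d), np.random.randn(3, 4), np.random.randn(1, 3)]
bs = [np.zeros(4), np.zeros(3), np.zeros(1)]
x, y = np.random.randn(d), 1.0
print(loss(y, forward(x, Ws, bs)))
```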
So, because we are treating this as a loss, it is the negative of the log-likelihood, and because it's a loss we do gradient descent: minus alpha times the partial derivative of the loss. And similarly for the biases: b^[l] := b^[l] - alpha * dL/db^[l]. This is very similar to the gradient ascent we did for logistic regression. In logistic regression we had just one theta vector, and we performed gradient ascent on that vector. Here we have a collection of W matrices and b vectors, L of each, and what we want to do is, for each weight matrix and each bias vector, take the gradient of the final loss (the loss all the way at the end) with respect to the weight matrix of that layer, and perform a gradient update like this; and similarly for the bias terms. That's our goal. The way we calculate these gradients is with an algorithm called backpropagation. Backpropagation helps us compute the gradients; once we have them, we perform gradient descent on the loss function. Yes, question? Is it stochastic? In this case it is stochastic, because we have considered just one example. So think of this as y_i and y hat_i, the i-th example; for a single given example, this is the stochastic gradient descent update rule. All right. Now, how are we going to calculate these derivatives of L with respect to the W's and b's? The short answer is the chain rule. If you're familiar with multivariable calculus, if you're already an expert at applying the chain rule in a multivariate setting, the rest of the lecture will probably be boring for you, because all we're going to do is apply the chain rule of multivariable calculus to calculate these gradients. That's pretty much all backpropagation is. Backpropagation is a fancy name, but the reason it's called backpropagation is this: mathematically it is just the chain rule, but the way we go about calculating the gradients, we start from the end and work backwards. Algorithmically we follow a particular sequence of steps, which makes it look like the gradients are being computed in a backward fashion; mathematically, it is just the chain rule. There's a difference between a mathematical answer and an algorithmic answer: for the same mathematical answer there can be multiple algorithmic answers. We saw that with kernels as well: for the same inner product, we could either compute the explicit feature representation and take the dot product between the features, or directly compute the kernel function. They're mathematically equivalent but algorithmically different. Similarly, backpropagation is mathematically just the chain rule.
But algorithmically, we calculate it in a way that is memory efficient and that reuses a lot of the intermediate computations along the way. That's backpropagation; let's see how we go about it. Any questions so far? [NOISE] Before we get into backpropagation, there was a question asked earlier: wouldn't all the neurons in, say, the first layer learn the same thing? Why aren't we just learning multiple copies of the same logistic regression? The answer is: if we initialize the entire network to zeros, all weights and biases set to 0, then that's exactly what happens. Starting from a zero initialization, you will observe that all the neurons within each layer learn the same thing; you end up with multiple copies of the same neuron within a given layer. To avoid this, we perform what is called random initialization: we initialize all the weights and biases in the network randomly, by calling a random number generator. This is a necessary step, also called symmetry breaking: the network is symmetric within a layer, and to break that symmetry we start the neurons at different random initializations. Because of that random initialization, they end up learning different functions; each logistic regression within a layer learns different weights and biases simply because of where it started. In logistic regression, no matter where we initialized, we always reached the same answer, because the problem was convex. Once we move to a neural network we lose convexity: this giant composition of functions will in general not be convex, which means that depending on the initialization, we will reach a different solution. That's why random initialization is necessary. And here is how we initialize. In logistic regression, or in general in our learning algorithms, the recipe was: initialize the w's and b's, then for i in 1 through T, some number of time steps, update theta := theta - alpha times something. The initialization step, which for logistic regression and linear models was just zero initialization, is replaced by something like this: initialize each entry of W^[l] with a Gaussian random number generator with mean 0 and standard deviation sqrt(2 / (n^[l] + n^[l-1])). That's one choice. Or we could draw W^[l] uniformly from [-0.1, 0.1]. There are many different initialization schemes; the second is just a random uniform initialization, and it is commonly done. The first, fan-based one has a particular name: it's called Xavier initialization. [NOISE] And the uniform one is another style of initialization.
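A small sketch of both initialization schemes just described (layer sizes arbitrary; starting the biases at zero once the weights are random is a common convention, not something derived here):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [5, 4, 3, 1]            # n^[0]=d, then n^[1], n^[2], n^[3]

Ws, bs = [], []
for l in range(1, len(layer_sizes)):
    n_out, n_in = layer_sizes[l], layer_sizes[l - 1]
    # Xavier-style Gaussian: mean 0, variance 2 / (n^[l] + n^[l-1]).
    W = rng.normal(0.0, np.sqrt(2.0 / (n_out + n_in)), size=(n_out, n_in))
    # Alternative: uniform on [-0.1, 0.1]
    # W = rng.uniform(-0.1, 0.1, size=(n_out, n_in))
    Ws.append(W)
    bs.append(np.zeros(n_out))        # the random W already breaks symmetry
print([W.shape for W in Ws])
```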
And you will observe that the choice of initialization scheme is itself another hyperparameter when you're fitting your model. Yes, question? Is W^[l] specific to a given layer? [inaudible] Yes: the superscript identifies which layer each thing belongs to, so W^[1] and b^[1] belong to layer 1. So is b^[1] shared by every neuron in the layer? [OVERLAPPING] The question is whether b^[1] is shared by every neuron in the layer. The dimension of b^[1] is the number of neurons in that layer, and each element of the vector corresponds to one neuron. Okay. Yes, question: what is that small n? [inaudible] Yeah, n^[l] here is the number of neurons in the l-th layer and n^[l-1] the number in the (l-1)-th layer. We can discuss the reason for that scaling; remind me later and we'll come back to it. Yes, question? Wouldn't we still get multiple copies of the same neuron, since every neuron in the second layer sees the same first-layer outputs? [inaudible] The answer is no, because W is a matrix and each row of the W matrix is associated with one neuron. If W is initialized randomly, each row of W will be a different vector. So, yeah, over here I should really write this as W_{ij}: each element of your W matrix is initialized by drawing a random sample from this Gaussian. [inaudible] And when we are updating, yes, we can also view the update element-wise: W^[l]_{ij} := W^[l]_{ij} - alpha * dL/dW^[l]_{ij}. This and the matrix form are equivalent; it's just vectorized notation. All right, so the training algorithm: first we initialize the W's and b's with some kind of initialization, and then for each iteration we have a nested loop: for l in 1, ..., L, W^[l] := W^[l] - alpha * dL/dW^[l]. That is our high-level algorithm. First initialize the full network, all the W's and b's, with some chosen random initialization; then, for each iteration, take some examples (maybe one, maybe a few), calculate the gradient of the loss with respect to the weights and biases of each layer, and perform a gradient update on each weight and bias parameter. Yes, question. [inaudible] What do you mean by weight 1 and weight 2: do you mean the first layer, or the first iteration? [inaudible] When we update one weight, does it depend on the other weights? We'll see what the relationship between the weights is next.
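Here is a self-contained sketch of that training loop. To keep it runnable before we derive backpropagation, the gradients are computed by crude finite differences; these are mathematically the same numbers backprop produces, just computed far less efficiently (bias gradients are omitted for brevity, and all sizes and data are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, Ws, bs):
    a = x
    for W, b in zip(Ws, bs):
        a = sigmoid(W @ a + b)
    return a[0]                                   # scalar y_hat

def loss(x, y, Ws, bs):
    y_hat = forward(x, Ws, bs)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def numerical_grads(x, y, Ws, bs, eps=1e-6):
    # Central finite differences, one weight at a time.
    dWs = [np.zeros_like(W) for W in Ws]
    for l, W in enumerate(Ws):
        for idx in np.ndindex(W.shape):
            W[idx] += eps
            lp = loss(x, y, Ws, bs)
            W[idx] -= 2 * eps
            lm = loss(x, y, Ws, bs)
            W[idx] += eps                          # restore the weight
            dWs[l][idx] = (lp - lm) / (2 * eps)
    return dWs

rng = np.random.default_rng(0)
sizes = [3, 4, 1]
Ws = [rng.normal(0, np.sqrt(2 / (sizes[l] + sizes[l - 1])),
                 (sizes[l], sizes[l - 1])) for l in range(1, len(sizes))]
bs = [np.zeros(s) for s in sizes[1:]]
x, y = rng.standard_normal(3), 1.0

alpha = 0.5
print("loss before:", loss(x, y, Ws, bs))
for t in range(200):                              # (stochastic) gradient descent
    dWs = numerical_grads(x, y, Ws, bs)
    for l in range(len(Ws)):                      # for l in 1..L
        Ws[l] -= alpha * dWs[l]                   # W^[l] := W^[l] - alpha dL/dW^[l]
print("loss after: ", loss(x, y, Ws, bs))         # should have dropped sharply
```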
So now, what I'm going to draw you can call a computation graph. It will let us understand how all the different terms feed into the loss. First I'll draw a_0, which is basically just x. To this a_0 we apply the matrix W_1 and a bias b_1, and out of the computation comes z_1. Each square or rectangle in the drawing is some vector, and the clouds are computations: some operation happening there, and here the operation is z = Wa + b. Then z_1 goes into another cloud of computation and comes out as a_1, where the operation is a = g(z). Similarly, next we have W_2 and b_2 going into an operation whose output is z_2, and then another cloud gives a_2; again the operations are z = Wa + b and a = g(z). Let's draw one more: W_3 and b_3, again z = Wa + b, giving z_3, and another g. For simplicity we'll assume the final layer has just one row, because we want a scalar output; so b_3 is just a scalar, z_3 is a scalar, and we get a_3, which is also our y hat. Continuing, y hat and the true label y both flow into one last cloud, and out comes our loss, which is also a scalar. So we start with the input vector, whose dimension is the dimension of our data; the dimension of z is a hyperparameter we chose, and the dimensions of z, a, and b in the first layer equal the number of neurons in the first layer. The way we compute each z is Wa + b, with the W's and b's coming in from the side, and so on; we nest these computations, and finally we have a loss. What we want to do is take the partial derivative of L with respect to each W and each b, and then perform a gradient update. Yes, question? [inaudible] Yes, the question is, what if y hat were a vector? In general y hat could be anything, as long as your loss is a scalar; eventually we want the loss to be a scalar. [BACKGROUND] Yeah, so the question is: could we not append the bias, that is, extend the a vector by a 1 and add another column to W? You could do that, but in practice the W's and b's are kept separate, and there's a good reason. We saw the reason in SVMs: there, we only penalized the norm of w and did not penalize b. There will be a similar reason here, when we later do something called regularization, which is applied only to the W's and not to the b's. So in this notation it's good to keep the b's and W's separate. [inaudible] Yeah, that's right: if at every layer you extend a by appending a 1, you can get rid of b. Mathematically it's exactly the same, but in practice nobody does that. Anyway, so now the question is: how are we going to compute each of these partial derivatives of the final loss, a scalar, with respect to this matrix, this bias, and so on? That's where the chain rule comes to help, and that's what backpropagation is all about.
Okay, so I'm going to switch to that board. [NOISE] First we make a few observations. For this example we are assuming binary classification: the y's are 0s and 1s. [NOISE] In general your y's could be anything and your loss function could be something else; here we just assume the logistic loss. We will work it out for W^[2], and in the notes it is also done for W^[2], but the steps are exactly the same for W^[1] and W^[3] and for b^[1] through b^[3]; once you know how to do it for one, you can apply the same recipe to all of them. So, dL/dW^[2]: [NOISE] if you remember from our matrix calculus review, this is the matrix whose entries are dL/dW^[2]_{11}, dL/dW^[2]_{12}, and so on; in the notes the second layer happens to have a particular small size, but in general the shape just matches the dimensions of W. [NOISE] So this is a matrix: you take the partial of the loss with respect to every single element of W^[2] and arrange them as a matrix. That's the definition of the gradient of a scalar loss with respect to a matrix. Yes, question? [inaudible] W^[2] is the second layer; first let's work out the math, and then we'll look at the algorithm for how we go about computing it. This one is the middle layer, right? Now [NOISE] a few more observations. [NOISE] The partial of L with respect to z_3: since y hat = sigmoid(z_3), we have dL/dz_3 = d/dz_3 of (-y log sigmoid(z_3) - (1 - y) log(1 - sigmoid(z_3))). [NOISE] I'm going to skip a few steps here; [NOISE] it's the same kind of calculation you did for logistic regression, basically the same steps as homework 1, question 1, part (a), where you showed the definiteness of the Hessian of logistic regression. It simplifies to [NOISE] dL/dz_3 = a_3 - y. [NOISE] And now we calculate the partials; I'll pick one (i, j)-th element and see what it turns out to be. The partial of L with respect to W^[2]_{ij} is, by the chain rule, (dL/da_3)(da_3/dW^[2]_{ij}). Any questions on this? Okay. Then we break it down further, [NOISE] again just the chain rule, and so on, until (one more step, did I miss something? Right:) dL/dW^[2]_{ij} = (dL/da_3)(da_3/dz_3)(dz_3/da_2)(da_2/dz_2)(dz_2/dW^[2]_{ij}). So we apply the chain rule repeatedly and get this long expression. And how did we reach this expression?
To reach this expression, [NOISE] what we did is [NOISE] apply the chain rule. Now, da_2/dz_2 is a Jacobian, because it's a vector-valued function of a vector-valued input. Let me use red to mark the derivatives on the graph: the partial of L with respect to y hat, and in general, anywhere there's a computation, we get a gradient or a Jacobian. L with respect to y hat is a scalar, so I'll denote it by a dot. There's another computation here, output to input, and its derivative is also a scalar, because it's a scalar-valued output of a scalar input. Then from a scalar to a vector: the derivative of a scalar-valued output with respect to a vector-valued input is a gradient vector, so I'll draw it as a vector. And the derivative of a vector-valued output with respect to a vector-valued input is a... Jacobian matrix. So I'll have a matrix there. Similarly, vector-valued output with respect to vector-valued input is again a Jacobian. Now, the common pattern you observe is this: the deeper your network, the longer this alternating sequence of z's and a's, z and a, z and a, and at every step, from each z back to the a below it and from each a back to its z, you keep getting Jacobians. Whenever there is a g, the Jacobian is a diagonal matrix, because g is an element-wise operation; here too this one is going to be diagonal, and that happens whenever the nonlinearity comes into the picture. And because z = W times a, the Jacobian of z with respect to the previous layer's a is always just W. There is a question. The weights are all independent of each other, but in order to apply the chain rule, [NOISE] we break the derivative down into local derivatives. [inaudible] Well, what do you mean by independent? [inaudible] So here it is diagonal only for the nonlinearity, because the nonlinearity acts element-wise; there is no interaction between the other terms, it just performs element-wise operations, so that Jacobian is diagonal. The Jacobian of a with respect to z is always a diagonal matrix, and the Jacobian of z with respect to the a below it is always the corresponding weight matrix. No matter how deep you go, you keep getting this pattern: diagonal matrix, corresponding weight matrix, diagonal matrix, corresponding weight matrix, and so on. And now, the gradients we want, for the values we update, are for the parameters: these blue dots are the parameters we want to update. So we want the gradient of the final loss with respect to each blue dot, and from those we construct the update matrix; each entry of that update matrix corresponds to one blue dot. [NOISE] We calculate the gradient of the final loss with respect to each blue dot and assemble the corresponding update matrix. And to calculate the gradient of the final loss with respect to a blue dot, we just apply the chain rule. And the chain rule says, back to the chain rule: [NOISE]
The chain rule says that each of these factors is something we know: dz_3/da_2 is always the matrix W_3; da_2/dz_2 is the diagonal matrix diag(g'(z_2)); and the factor at the very front is what we calculated already, a_3 - y. So we get (a_3 - y), times W_3, times diag(g'(z_2)), times this last factor. Now, how do we handle the last factor? Once we've bootstrapped this, no matter how deep our network is, we just keep appending the next W matrix and the next diagonal of g', the next W matrix and the next diagonal of g'; that's a recurring pattern. We just need to figure out the two ends. The starting end is common to all of them: it's just a_3 - y. And all the way at the other end, we're calculating the derivative of a vector with respect to a scalar: z_2 is a vector and W_{ij} is a scalar. So in our picture [NOISE] we are calculating the derivative of the z vector with respect to that scalar, and this is again pretty straightforward. [NOISE] It comes out to be, if you do it for W_{ij}, a vector with zeros everywhere except a^[1]_j in the i-th position. The reason is simple: z is just Wa + b, so W_{ij} can only influence the i-th element of z. If W_{ij} is from, say, the second row, then all those parameters can only influence the second element of z. So the derivative of this vector with respect to W_{ij} has zeros everywhere else, and the influence of the (i, j)-th weight on the i-th element of z is just a^[1]_j. So this is just a^[1]_j in the i-th position. Any questions on this? And if we do a dimension check: (a_3 - y) is 1 x 1; W_3 in the example in the notes is 1 x 2; the diagonal matrix is 2 x 2 with only diagonal entries; and the last factor is 2 x 1, with the other entry 0 and a^[1]_j in the i-th position. Now we can combine all of these into a single entry. This is (a_3 - y), a scalar, times W_3; and when you multiply a diagonal matrix by another matrix, or by a vector, it's the same as performing an element-wise multiplication. So this whole thing is a 1 x 2 row times the 2 x 1 vector carrying a^[1]_j. Further, this simplifies to (a_3 - y) times (W_3 element-wise g'(z_3)); and because the last factor is zero everywhere except the i-th position, we only need the i-th index of this vector, times a^[1]_j. And this is for the (i, j)-th element. So dL/dW^[2]_{ij} is the i-th element of one vector times the j-th element of another vector, which means the full matrix dL/dW^[2] is just the outer product between those two vectors. That is, (a_3 - y)(W_3 element-wise g'(z_3)), the vector we took the i-th element of, outer product with (a^[1])^T. Next question. What is a^[1]_j?
It's a^[1]_j because we are calculating with respect to W^[2], and for W^[2] (sorry, I should have kept the picture up) a^[1] is the thing it gets multiplied by. So what you see here involves a fair amount of notation, but the main idea to take away is this: the final loss is always a scalar, and we want its derivatives with respect to the elements of a given weight matrix. For that, we use the chain rule, taking the gradient all the way from the final loss back to the layer before, and the layer before that, and so on. Each step gives you a Jacobian, and those Jacobians have a repetitive pattern. The very first factor is something different, a_3 - y, and that comes from logistic regression; from then on, all the Jacobians form a daisy chain, and they are very easy to compute: the corresponding weight matrix, or a diagonal of the derivative of the activation function. You get a daisy chain of these Jacobians, as long as your network is deep. The dimensions always work out: the output dimension of this factor is the input dimension of the next, so they always match, and once you multiply the whole thing out, you end up with a scalar, a 1 x 1, and that's the value you fill in at position (i, j). When you repeat that, we saw that each 1 x 1 entry is the product of the i-th element of one vector and the j-th element of another vector, which means the full matrix is just the outer product between those two vectors. Yes, question? In the final result, should the g prime be of z_2? [inaudible] Yeah, you're right, this should be a 2; it's a z_2 over here, and I made it a 3 over there. Thank you. Right. So that's applying the chain rule to get the partial derivatives, and basically backprop is now, well, notice this daisy-chain structure. If we want to compute the partial with respect to W^[1], we just add two more links into this daisy chain of Jacobians, plus the final factor specific to the previous layer. And basically, backprop tells you that if we start computing the derivatives with respect to the matrices from the end, we can reuse terms: we don't have to recompute the product so far, which is a vector; for the previous layer we just append another W matrix, another diagonal matrix, and the corresponding a vector of that layer. We can reuse all of this computation when calculating the derivatives for the previous layer, and that's basically backpropagation: it suggests you should work backwards so you can reuse computation. Any other questions? [inaudible] Yeah, so in this daisy chain, you observe that we are moving from vector to vector to vector, so each local derivative is a Jacobian: a Jacobian with respect to a vector, exactly.
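A sketch of this derivation in code, using the corrected g'(z_2), for a small 3 -> 4 -> 2 -> 1 network with sigmoid activations everywhere (sizes arbitrary). The last lines check one entry of dL/dW^[2] against a finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    # g'(z) for the sigmoid: g(z) * (1 - g(z)).
    s = sigmoid(z)
    return s * (1 - s)

rng = np.random.default_rng(0)
d, n1, n2 = 3, 4, 2
W1, b1 = rng.standard_normal((n1, d)), np.zeros(n1)
W2, b2 = rng.standard_normal((n2, n1)), np.zeros(n2)
W3, b3 = rng.standard_normal((1, n2)), np.zeros(1)
x, y = rng.standard_normal(d), 1.0

# Forward pass (keep the z's; backprop reuses them).
a0 = x
z1 = W1 @ a0 + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)
z3 = W3 @ a2 + b3; a3 = sigmoid(z3)        # a3 = y_hat, shape (1,)

# Backward pass: the "daisy chain" of Jacobians.
dz3 = a3 - y                               # dL/dz3 = a3 - y
v2  = (W3.T @ dz3) * dsigmoid(z2)          # times W3, then diag(g'(z2))
dW2 = np.outer(v2, a1)                     # outer product with a^[1]
v1  = (W2.T @ v2) * dsigmoid(z1)           # reuse v2 for the layer below
dW1 = np.outer(v1, a0)

# Numerical check of one entry of dW2.
def loss_with(W2_):
    a2_ = sigmoid(W2_ @ a1 + b2)
    a3_ = sigmoid(W3 @ a2_ + b3)
    return float(-(y * np.log(a3_) + (1 - y) * np.log(1 - a3_)))

eps = 1e-6
W2p = W2.copy(); W2p[0, 1] += eps
W2m = W2.copy(); W2m[0, 1] -= eps
print(dW2[0, 1], (loss_with(W2p) - loss_with(W2m)) / (2 * eps))  # should match
```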
All right, so then we'll break.
CS229, Lecture 18. The topic for today is continuing our study of unsupervised learning. We're going to wrap up factor analysis; we mostly finished it last time, except that we stopped right at the end, having finished the derivation without really having time to answer some of the pending questions and get a good feel for it. It almost felt like monstrous-looking expressions and that's it. So let's quickly revisit factor analysis to get a better understanding of what's happening under the covers; it's a lot easier than it looks. Then the plan is to finish up Principal Components Analysis and maybe start ICA, Independent Components Analysis. I'm not sure how much time we'll have, but hopefully we'll at least start it and, if possible, finish it, all right? So, a quick recap of the last lecture. One big proof we covered was the convergence of the expectation maximization algorithm, and the proof is pretty simple. We wanted to show that the EM algorithm is guaranteed to increase the likelihood of the observed data in every iteration: that the likelihood at theta^(t+1) is at least the likelihood at the previous theta^(t). The argument has three simple steps. First, the likelihood is at least the ELBO: Jensen's inequality tells us that, at the same parameter value, the ELBO for any choice of q is less than or equal to the likelihood. Second, consider what happens when we switch from theta^(t+1) back to theta^(t); this is about the M-step at time t. In the M-step we optimize theta to obtain theta^(t+1), the maximizer of the ELBO at q^(t); therefore at theta^(t) the ELBO value can only be lower. And third, by the corollary of Jensen's inequality, the ELBO at (q^(t), theta^(t)) equals the likelihood at theta^(t), because of how the E-step chose q^(t). So it was three simple steps, and we also saw the pictorial intuition. At theta^(t) (this pen is not good [NOISE]) we have the ELBO of (q^(t), theta^(t)), which touches the likelihood curve L(theta) at theta^(t), where it is tight. We maximize that ELBO to get theta^(t+1), and then construct a new ELBO that is tight at theta^(t+1). From the two ELBO curves we read off that the likelihood at theta^(t) is less than or equal to the likelihood at theta^(t+1). So that was the EM proof.
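The three-step argument in one chain, written out for reference:

```latex
\ell(\theta^{(t+1)})
  \;\ge\; \mathrm{ELBO}(q^{(t)};\, \theta^{(t+1)})  % likelihood >= ELBO (Jensen)
  \;\ge\; \mathrm{ELBO}(q^{(t)};\, \theta^{(t)})    % \theta^{(t+1)} maximizes the ELBO (M-step)
  \;=\;   \ell(\theta^{(t)})                        % tight at \theta^{(t)} (E-step choice of q)
```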
Then we moved on to factor analysis. In factor analysis we are interested in the case where we want to model x's that live in a d-dimensional space, but we have only n examples, where n is much smaller than d. In those cases, we saw that modeling the data as a simple Gaussian gives us singular covariance matrices. So instead, we want a latent variable z that lives in a low-dimensional space, and a matrix L that maps the low-dimensional space up to the d-dimensional space, with some extra noise added in the d-dimensional space. The model is defined like this: z comes from a lower-dimensional Gaussian with k dimensions. To get the relationships right: k is less than d, d is much bigger than n, and n is greater than k. That's the setting we are working in. So z lives in a k-dimensional subspace and is normally distributed with mean 0 and identity covariance; and x given z, we assume, is generated as x | z ~ N(mu + Lz, Psi), where mu, L, and Psi are parameters. mu is like an offset in the d-dimensional space: once we map z up to the high-dimensional space, Lz still has mean 0, so mu is an offset we add. And Psi is an extra diagonal covariance matrix, where each dimension of the d-dimensional space gets its own independent noise. At first these might seem like pretty arbitrary choices; however, we can visualize what's happening. From the model we see that x - mu, given z, is distributed with mean Lz and noise covariance Psi. So think of the set of z's, the latent variables, as a design matrix, just like in linear regression: each z_i is one example, and we have n such examples, each of dimension k. Now take L_1, the first row of L, and consider it to be like a theta, a parameter vector. When we multiply the design matrix by this theta, we get one column over here. So you can think of this as d independent linear regression problems. L is d by k, and k matches the dimension of the design matrix. So think of this as one design matrix, d theta vectors, and d label vectors: for each dimension in which x lives, we associate a parameter vector, and we get a noisy observation, which is one column of (x - mu). In linear regression, we had one design matrix, one parameter vector theta, and one column vector of y's we tried to fit; this was y, this was theta, and this was x. Here the terminology is different: we have a z design matrix and d different parameter vectors, and for each parameter vector we have a different label vector. So you can think of this as d different linear regressions happening simultaneously. Any questions here?
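A sketch of the generative process itself, with made-up sizes satisfying n much smaller than d, k < d, and n > k:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10, 50, 3                   # n << d, k < d, n > k

mu  = rng.standard_normal(d)          # offset in the d-dimensional space
L   = rng.standard_normal((d, k))     # maps the latent space up to R^d
Psi = np.diag(rng.uniform(0.1, 0.5, size=d))   # diagonal noise covariance

# Generative process: z ~ N(0, I_k), then x = mu + L z + eps, eps ~ N(0, Psi).
Z = rng.standard_normal((n, k))                        # n latent vectors
eps = rng.multivariate_normal(np.zeros(d), Psi, size=n)
X = mu + Z @ L.T + eps                                 # n x d data matrix
print(X.shape)                                         # (10, 50)
```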
Right. Yes, question. [inaudible] Yes, exactly: for every single dimension of the d-dimensional space, the column corresponding to that dimension is the label vector for that dimension's linear regression. We'll come to prediction soon. For now, this is the setting, and you can think of mu as the bias term: each component of mu is a bias added to one of the d problems, and subtracting it from x centers each problem. In linear regression we assumed y = theta^T x + epsilon, where epsilon is independent noise. Here, each coordinate j of x_i minus mu equals z_i^T l_j plus a noise term whose variance is Psi_jj, the corresponding diagonal entry of Psi. So this model is just the d different linear regression models written together in matrix notation, whereas previously we had matrix-vector notation. That's all that's happening here. From this we constructed the E-step and the M-step. The E-step is pretty straightforward: we set q_i to the posterior distribution of z_i given x_i at the current values of mu, L, and Psi. This is just the conditional of a Gaussian; we've seen this kind of computation with Gaussian processes as well. For every example i, q_i is a normal distribution with mean mu_{z_i|x_i} = L^T (L L^T + Psi)^{-1} (x_i - mu) and covariance Sigma_{z_i|x_i} = I - L^T (L L^T + Psi)^{-1} L. The covariance comes from the Schur complement, and the mean is the usual Gaussian conditioning formula: you take x_i minus its mean, scale by the inverse covariance, and map back through L^T, which is where the correlation structure enters. So in the E-step we estimate the latent variables, and in the M-step we estimate the parameters; that split is common to any such model, including the Gaussian mixture model and factor analysis. The parameters are L, mu, and Psi. For L we had that monstrous-looking expression. mu is just the mean of the x's, which should be intuitive: Lz has mean 0, so the maximum likelihood estimate of mu is simply the sample mean of the x's.
And in fact mu will not change from iteration to iteration, so you calculate it once and just reuse it in every subsequent M-step; nothing changes there. Psi had an even more horrible-looking expression than the one for L, and that's where we stopped last lecture. But even though these look pretty scary, we can dissect them a little more and get a better understanding of what's happening. Let's start with L, and to make this easier I'll introduce some new notation. Let mu_{z_i|x_i} be the mean of the posterior distribution obtained in the E-step; that's where the Gaussian q_i is concentrated. Loosely speaking, call it z-hat_i: it's our best guess, our best estimate, of z_i given what we know about the x's. Does that make sense? Now consider the term L mu_{z_i|x_i}, which is L times z-hat_i. Since x minus mu equals Lz plus noise, L times z-hat_i is our best estimate of x_i minus mu, so we'll call it x-hat_i minus mu. In other words, think of L z-hat_i plus mu as the reconstruction of x_i: z-hat_i is our best guess of the latent variable, and multiplying by L takes us from z back to x. With this notation, the update for L is the sum over i = 1 to n of (x_i - mu) times mu_{z_i|x_i} transpose, multiplied by the inverse of a sum of posterior second moments. Following the linear regression example, think of x_i - mu as y_i, and stack the posterior means mu_{z_i|x_i} as the rows of a design matrix Z. Then, after taking a transpose, the whole thing reads L^T = (Z^T Z + covariance term)^{-1} Z^T Y. So basically we are doing some kind of regularized linear regression; that's what's happening in the L expression over there. We make our best guesses about Z, and we have the x's that were given to us.
And using those best guesses for Z, we update our L, and for that we're just solving d different linear regressions in parallel. So that's L. Now, what about Psi? Psi is also a scary-looking expression, but we can use the same vocabulary there as well. What we end up with is a sum over i = 1 to n of x_i x_i^T, minus x_i x-hat_i^T (substituting x-hat_i for L mu_{z_i|x_i}), minus x-hat_i x_i^T, plus, expanding the matrix on the other side, x-hat_i x-hat_i^T plus L Sigma_{z_i|x_i} L^T. And that can be rewritten as the sum over i of (x_i - x-hat_i)(x_i - x-hat_i)^T plus L Sigma_{z_i|x_i} L^T. If you recognize what we're doing: we're taking the correct label minus our estimate of the correct label, so this is just like estimating the noise in linear regression. In linear regression, if y = theta^T x + epsilon, the way to estimate epsilon's variance from data is to first find theta-hat and then compute (1/n) times the sum over i of (y_i - theta-hat^T x_i)^2. Similarly, here we are estimating Psi, the noise that gets added to the x's. Because of the matrix notation this comes out as a full matrix, but we treat the d problems independently, so we keep only the diagonal: the (j, j) entry of this matrix becomes Psi_jj. That's like saying we don't care about the covariance of the noise between one linear regression problem and another; we just ignore the off-diagonal elements. Okay, does that make sense? Any questions? How did you get the L matrix [inaudible]? The L matrix is what we have in the update; or was it a different question? [inaudible] So, about x-hat: you can think of factor analysis as being given the y labels for d different problems, and from the labels alone you want to construct both the covariates and the parameters. It's a really hard problem: you're given x's and you want to estimate both the z's and L. L is a parameter that's part of the model, and the z_i's are latent variables specific to each example. So our goal is to estimate not only what the design matrix was, but also what the parameters were, given just the labels. That sounds kind of impossible, but given the modeling assumptions, that the z's come from a normal distribution and that they enter through a linear relation, you can apply the EM algorithm and estimate everything.
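Putting the pieces together, here is a compact sketch of one EM iteration for factor analysis; the variable names are mine, and the updates paraphrase the expressions above (with mu fixed at the sample mean), so treat this as an illustration rather than a verified transcription of the lecture notes:

```python
import numpy as np

def fa_em_step(X, mu, L, psi):
    """One EM iteration for factor analysis. X is (n, d); psi is the diagonal of Psi."""
    n, d = X.shape
    k = L.shape[1]
    Xc = X - mu                                    # mu stays fixed at X.mean(axis=0)

    # E-step: posterior q_i = N(mu_{z|x}, Sigma_{z|x}) via Gaussian conditioning.
    Sigma_x = L @ L.T + np.diag(psi)               # marginal covariance of x, (d, d)
    K = L.T @ np.linalg.inv(Sigma_x)               # (k, d)
    Mz = Xc @ K.T                                  # (n, k): row i is mu_{z_i | x_i}
    Sz = np.eye(k) - K @ L                         # shared posterior covariance Sigma_{z|x}

    # M-step for L: d ridge-like regressions solved at once.
    A = Xc.T @ Mz                                  # (d, k): sum_i (x_i - mu) mu_{z|x,i}^T
    B = Mz.T @ Mz + n * Sz                         # (k, k): sum_i E[z_i z_i^T]
    L_new = A @ np.linalg.solve(B, np.eye(k))

    # M-step for Psi: reconstruction residuals, diagonal entries only.
    R = Xc - Mz @ L_new.T                          # x_i - x_hat_i for every example
    psi_new = np.mean(R**2, axis=0) + np.diag(L_new @ Sz @ L_new.T)
    return L_new, psi_new
```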
Was there another question? Okay. So you can think of factor analysis as the problem where you're given just the labels for d different problems, you assume a linear model over hidden covariates, and you estimate both the design matrix and the set of parameters, with only the assumption that the inputs come from a normal distribution; mu just ends up being the bias term, one per linear regression problem. The updates look monstrous, but if you read each L mu_{z|x} as an x-hat, then the Psi update is just estimating the reconstruction noise between x and x-hat, keeping only the diagonal elements. All right, so that wraps up our coverage of factor analysis. Oh, and there was one more question a student asked: with all of this, how did we actually solve the singularity problem? If you remember, for factor analysis the marginal likelihood log p(x) has a closed-form expression: x turns out to be normal with mean mu and covariance L L^T + Psi. So the covariance matrix of the marginal of x is L L^T, a low-rank matrix, plus a diagonal matrix, and because we add a positive diagonal to the low-rank matrix, we are guaranteed that it is no longer singular.
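You can see the fix numerically. Here's a quick check, with arbitrary sizes and a constant diagonal standing in for Psi, that adding a positive diagonal makes a rank-k matrix full rank:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 50, 3
L = rng.normal(size=(d, k))

low_rank = L @ L.T                              # rank k << d: singular
print(np.linalg.matrix_rank(low_rank))          # 3
fixed = low_rank + np.diag(np.full(d, 0.1))     # LL^T + Psi, with Psi = 0.1 * I here
print(np.linalg.matrix_rank(fixed))             # 50: an invertible covariance
```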
All right, so let's move on to Principal Components Analysis, PCA. In factor analysis, our goal was to model x's as something that approximately lies in a low-dimensional space: for every x we had a corresponding z living in a low-dimensional space, and the relation between x and z was affine, which is another word for linear plus an offset. L takes us from z to the centered x, and then we add the offset mu. We had to go through the EM algorithm because we were modeling under the assumption that d was much larger than n. PCA is another approach, where you are given x_1 through x_n with each x_i in R^d, and we no longer assume d is much larger than n; we're back in the usual regime where we generally have many more data points than dimensions. And now we want to find out whether our data actually lives in a low-dimensional space. Here's one example, the one in the notes: suppose you fly helicopters, or toy helicopters, whatever, and for every pilot you record a skill level and how much they enjoy flying. If you look at the scatter plot of skill versus enjoyment, you wouldn't be surprised to see some kind of linear relationship, more or less because people who enjoy flying helicopters are likely to be good at it, and vice versa. So even though you have two different variables, both are reading off some underlying common trait of the pilot; call it piloting karma or whatever. The two are therefore quite correlated. In general, if your examples have d features, some of your features may be correlated in this way: one feature is the pilots' skill level, another is the enjoyment they report. So even though the data sits in a d-dimensional space, it can probably be represented in a lower-dimensional space of some dimension k. By this I don't mean element-wise equal; I just mean representing the high-dimensional data in a low-dimensional space. Yes, question. [inaudible] So the question is, why isn't this supervised learning? You could take one of the features to be y and regress it on the others, but the problem is in general quite hard: you don't know up front which feature to choose as your y label. You could try all of them, or a subset, but it's not clear how to go about it. What if d is 10,000? Are you going to perform 10,000 linear regressions? You could, but PCA is much more efficient. [inaudible] You could do something like that too, but PCA, as you will see, is much more straightforward: you don't have to pose it as a supervised learning problem at all. So in PCA our goal is to find a subspace of the input space where the data approximately lives. And the first thing we do is center our data and scale it to have variance 1; that is, the first step in PCA is to standardize the dataset. We set x_j^(i) to (x_j^(i) - mu_j) / sigma_j, where mu_j is the mean of the j-th column and sigma_j is the standard deviation of the j-th column.
So the first step is to independently center each column and scale it so that each column has variance one, where mu_j = (1/n) sum_{i=1}^n x_j^(i) and sigma_j^2 = (1/n) sum_{i=1}^n (x_j^(i) - mu_j)^2. We want to standardize the dataset to have mean 0 and standard deviation 1 in every coordinate. The reason is that the units in which your data is represented can be very different. For example, suppose you record the height and weight of a person in a dataset where each row is a person; the weight may be in kilograms while the height is in millimeters, so the values in the height column will generally be much, much bigger than the values in the weight column. The first thing you want to do is become agnostic to the units in which the data is represented, and for that we just standardize each column independently.
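In code, the standardization step is just this; a minimal sketch, which uses the population 1/n convention from the formulas above:

```python
import numpy as np

def standardize(X):
    """Column-wise standardization: subtract each column's mean, divide by its std."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)      # population (1/n) standard deviation, as in the lecture
    return (X - mu) / sigma
```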
Once we've done that, here is how we find the underlying subspace. Suppose we plot the data in coordinates x_1 and x_d. We could try to pick a subspace, a line through the origin, and project all our points onto it: each cross is an x living in the higher-dimensional space, and each projected point is its image on the subspace. (One technical point: the subspace must pass through the origin, which is another reason we centered the data first.) If you remember, projection means finding the closest point on the subspace to the point you're projecting, and a natural consequence is that the line connecting a point to its projection is perpendicular to the subspace itself. Now, there are infinitely many subspaces we could come up with: this is one possible subspace to project onto, and that is another. PCA tells us that the subspace onto which we project our data should be the one where the variance of the projected points is maximized. What do we mean by that? With one choice of subspace the projected points come out clustered close together, so their sample variance is small. With another choice the projected points are much more spread out: it's the same points, but a different choice of subspace, and the variance of the projections is much bigger. What PCA does is find the k-dimensional subspace of the d-dimensional space in which the projected points have maximum variance. It's a variance-maximizing procedure: we find a subspace such that the projections onto it have the highest variance, and we represent the subspace by unit-length vectors. The idea is that we want to retain as much of the variance in the data as possible even after projecting down to a low-dimensional space. We don't want to lose the variance; we just want to lose the number of dimensions. Retain the variance, lose the dimensions: that's the idea of PCA. So suppose we represent a one-dimensional subspace by a unit vector u in R^d. You might remember from one of the earlier lectures that if u is a basis vector, the projection of x onto the space spanned by u is the projection matrix of u times x, where the projection matrix is u u^T / (u^T u). Since u^T u = 1 for a unit vector, the projected point is simply (x_i^T u) u: the scalar x_i^T u times u. What we want is to find u such that the variance across these projected points is maximal. With n examples, that means maximizing the average squared norm of the projected points, where the norm of a projected point is its length from the origin. If we find u such that the sum of the squares of all the projected lengths is maximal, we will effectively have performed PCA. So we want u = argmax over unit u of (1/n) sum_{i=1}^n ||(x_i^T u) u||^2. Because each term is a scalar times a unit-length vector, its squared norm is just (x_i^T u)^2, so the objective is (1/n) sum_{i=1}^n (x_i^T u)^2. Continuing, that equals u^T [ (1/n) sum_{i=1}^n x_i x_i^T ] u.
And because our x's are centered, they have mean 0, so the matrix in the middle, (1/n) sum_i x_i x_i^T, is exactly the sample covariance matrix. You might remember this problem from an optimization class: u = argmax over unit-length u of u^T A u, where in our case A = (1/n) sum_i x_i x_i^T. Does anybody know the solution for u? If you solve this, u is the eigenvector corresponding to the largest eigenvalue of the matrix: argmax over unit u of u^T A u gives the top eigenvector of A. And the analysis extends: here we considered a single vector u, but if you maximize over a whole set of k orthonormal basis vectors, nothing in the argument changes, and you recover the top k eigenvectors, the ones corresponding to the k largest eigenvalues. Yes, question? Can you explain [inaudible]? Sure, how we went from one line to the next. (1/n) sum_i (x_i^T u)^2 is (1/n) sum_i (x_i^T u)(x_i^T u); writing one of the two copies of the dot product the other way around gives (1/n) sum_i (u^T x_i)(x_i^T u), and then taking u out of the summation gives u^T [ (1/n) sum_i x_i x_i^T ] u. So, by following this variance-maximizing argument, where the projected points on the subspace should be as spread out as possible, we end up with an eigenvalue problem: compute the eigenvectors of the sample covariance matrix of the x's. If X is our data matrix, PCA says: form X^T X, which is basically this same quantity written in matrix notation, and compute its eigenvectors and eigenvalues, its spectrum. X^T X gives a collection of pairs: lambda_1 and u_1, lambda_2 and u_2, up to lambda_d and u_d, the d eigenvalues and the d eigenvectors. Then, depending on the amount of variance we want to retain, a common practice is to keep as many pairs as needed. A more precise way to say it: find k such that (sum_{i=1}^k lambda_i) / (sum_{i=1}^d lambda_i) equals some chosen fraction, say 95%; a sketch of this recipe follows below.
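Here is that recipe as a minimal sketch: eigendecompose the sample covariance, sort, and pick k by retained variance. The 95% default is just the example threshold from above; X is assumed already standardized:

```python
import numpy as np

def pca(X, variance_to_retain=0.95):
    """PCA on a standardized (n, d) matrix X: eigendecompose the sample covariance."""
    n, d = X.shape
    S = (X.T @ X) / n                      # sample covariance (X is already centered)
    lam, U = np.linalg.eigh(S)             # eigh gives ascending eigenvalues, orthonormal U
    lam, U = lam[::-1], U[:, ::-1]         # re-sort into decreasing order
    frac = np.cumsum(lam) / lam.sum()      # fraction of variance kept by the top k
    k = int(np.searchsorted(frac, variance_to_retain)) + 1
    return X @ U[:, :k]                    # project onto the top-k eigenvectors
```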
It's common to choose k in PCA by first performing the eigendecomposition of X^T X, sorting the eigenvalues in decreasing order, and then finding k such that the sum of the top k eigenvalues divided by the sum of all the eigenvalues is some percentage of your choice. By choosing k at 95%, you can effectively say you've retained 95% of the variance in your data even though you went down from d dimensions to k dimensions. Yes, question? [inaudible] Yeah, I'm going to come to that right after this. All right, any questions on this? If you were to plot the fraction of variance retained against the number of pruned dimensions, d minus k, then pruning all d eigenvectors leaves zero variance remaining, pruning none leaves all of it, and the curve generally stays flat for quite a long time, which means that even if you prune a big fraction of your feature space, you will still retain something like 95% of your variance. Yes, question. Professor, so u is a matrix of eigenvectors you want to [inaudible]. Yes. Here we showed the intuition using a single eigenvector: the largest eigenvector is the direction that maximizes the variance when you project into one dimension, so that was the case k = 1. If you want k to be larger, just do the full eigendecomposition rather than this single-vector optimization, and hold on to the top k eigenvalue-eigenvector pairs. Okay? Any questions on this so far? For those of you who may be familiar with singular value decomposition, what we performed here is closely related to SVD. Maybe we covered this in one of the earlier lectures: if a matrix is square and symmetric, then it has an orthogonal eigenbasis and real eigenvalues; and if it is also positive semi-definite, then all its eigenvalues are nonnegative. In our case, the matrix we eigendecompose is X^T X, which is guaranteed to be square, symmetric, and positive semi-definite. And performing the eigendecomposition of X^T X is the same as performing the singular value decomposition of X.
The way you perform singular value decomposition on an arbitrary matrix X, which need not be PSD, symmetric, or even square, is exactly this: take X^T X, compute its eigenvectors, and those give you the (right) singular vectors of X, while the square roots of the eigenvalues of X^T X give you the singular values of X. So performing SVD on the X matrix and performing an eigendecomposition of X^T X are two ways of thinking about the same thing; see the check below. In the homework you will see a second motivation for PCA. Here we saw the variance-maximizing argument: find a hyperplane such that, when you take the projections, the spread of the projected points is maximized. In the homework you will show that this is equivalent to rephrasing the problem as finding a subspace where the projected points are as close as possible to the original points, that is, minimizing the residual distances. Maximizing the projected variance and minimizing the residuals are equivalent: if you phrase PCA as minimizing the reconstruction error between the projected points and the original points, you recover the same solution as maximizing the variance. So there are two ways to think of PCA. Find a subspace such that the projections are as close as possible to the original points: that's interpretation two, which you'll work through in the homework. Or find a subspace such that the projected points have maximum variance: that's the interpretation we saw in lecture. Any questions on PCA?
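A quick numerical check of that equivalence, on an arbitrary random matrix: the eigenvalues of X^T X are the squared singular values of X.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))

# Route 1: eigendecomposition of X^T X.
lam, U = np.linalg.eigh(X.T @ X)

# Route 2: SVD of X itself. The right singular vectors are the eigenvectors of
# X^T X, and the singular values are the square roots of its eigenvalues.
_, s, Vt = np.linalg.svd(X, full_matrices=False)

print(np.allclose(np.sort(s**2), np.sort(lam)))   # True, up to float error
```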
Okay, great. So let's move on to ICA; it looks like we're making good progress, so we might be able to finish ICA today. ICA is a somewhat different problem from the ones we've seen so far, and to understand the difference it might be useful to place together the algorithms we've covered. So far in unsupervised learning we've seen four algorithms. We started with k-means, then Gaussian mixture models, then factor analysis, and just now PCA. You can classify them in two different ways. K-means and Gaussian mixture models are clustering algorithms, while factor analysis and PCA are subspace algorithms. And Gaussian mixture models and factor analysis are probabilistic, while k-means and PCA are non-probabilistic. The probabilistic ones we solved using EM, expectation-maximization; in the non-probabilistic approach, no EM was needed. Loosely speaking, and this is a very loose analogy, you can think of the clustering problems as the counterpart of classification from supervised learning: classification where you're not given the labels. And you can think of the subspace-finding problems as the counterpart of regression. In both PCA and factor analysis we recovered a low-dimensional subspace. In factor analysis, the z's were the subspace. In PCA, we recover it by constructing the matrix U, which has k columns, each of dimension d: the top k eigenvectors of X^T X. Then multiplying X by U projects each example onto the eigenbasis, which works because the columns have unit length, and X U gives us the projected points in the lower-dimensional space. Now, you might ask what else is different between factor analysis and PCA. One difference: in factor analysis we assumed d was much bigger than n, but here we made no such assumption; in fact we typically assume n is bigger than d. Strictly speaking, PCA probably doesn't need n bigger than d, but it's commonly the case. Another difference: in factor analysis we recover the matrix L, and L takes us from z to x, whereas in PCA we recover U, and U takes us from x to z, if you call z the projected points. So the recovered parameters go in opposite directions between factor analysis and PCA. And one more difference: with PCA you can first perform the eigendecomposition, get the full set of eigenvectors and eigenvalues, and then decide after the fact what the appropriate k is for your problem. Whereas with EM, you need to decide k first and then run EM; if you're not satisfied, you change k to a different value and rerun factor analysis all the way from the start. Yes, question. [inaudible] So the question is: is it reasonable to first do PCA to find k, and then go to factor analysis?
But then what's the point of going on to factor analysis? You would have already recovered a lower-dimensional subspace anyway. Okay, so those are some ways to classify what we've covered so far, and now we're going to start ICA, independent component analysis, which is somewhat different in nature from what we've seen. Yes, question? Is there a benefit to doing factor analysis [inaudible]? So, is there a benefit to factor analysis? There are benefits. Here's the thing: in PCA we are not explicitly modeling the noise in the data; no probabilistic assumptions are being made. Whereas with factor analysis, even if you choose k to be some particular value, it generally works pretty well, because it will end up adjusting the design matrix accordingly. So you can think of them as two roughly equivalent approaches whose differences are the ones we just discussed: in some cases it makes sense to use factor analysis, in some cases PCA. PCA happens to be more commonly used because it's easy to implement: you just perform an eigendecomposition and you're done. Factor analysis also gave us a good example of applying EM: in one case the z's were discrete, but here we saw z's that were continuous. So as an example for understanding EM better, factor analysis is helpful for educational purposes, but PCA is probably used much more commonly in practice. Another thing about PCA that we skipped over: we just said, do an eigendecomposition of X^T X, but sometimes X^T X can be really huge in terms of the number of dimensions. If you hand a very large matrix to NumPy or MATLAB for eigendecomposition, it may simply fail; you may run out of memory. So when you run PCA in practice, a common technique for performing the eigendecomposition is something called power iteration. Again, this isn't something you're going to be tested on, but the idea of power iteration is to start with some randomly initialized u^(0), anything nonzero, and compute u^(1) = (X^T X) u^(0) divided by the norm of (X^T X) u^(0).
So, take a randomly initialized vector and multiply it by the matrix whose eigenvector you want to calculate; that gives u^(1). Then multiply again: (X^T X) u^(1), normalized, gives u^(2). If you keep multiplying the vector over and over by the same matrix, rescaling it to unit length after every multiplication, the vector you converge to is the largest eigenvector. The intuition: if you take the unit sphere in the input space and multiply it by a positive definite matrix, the image is an ellipse. So take u^(0) on the sphere, map it through X^T X, rescale back to unit length, multiply again, rescale again, and you will see that the iterates converge to the principal axis of the ellipse, which is the eigenvector. Depending on where you start, you may converge to the axis in one direction or the other, but either way it's the principal eigenvector. Once you have it, take X^T X and subtract lambda_1 u u^T from it; that gives you a matrix of one lower rank, and performing the same power iteration on that matrix recovers the next-largest eigenvector, and so on. In practice this is usually what you do when the number of dimensions is large. This is called power iteration, and it's a good term to know if you come across it. Basically, all that's happening is: take a vector, repeatedly multiply it by the same matrix, and rescale the length back to 1 each time.
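A minimal sketch of power iteration; the step count is arbitrary, and in practice you would stop once the iterate stops changing:

```python
import numpy as np

def power_iteration(A, num_steps=1000):
    """Top eigenvector of a PSD matrix A (e.g. X^T X) by repeated multiply-and-renormalize."""
    rng = np.random.default_rng(3)
    u = rng.normal(size=A.shape[0])        # any nonzero starting vector
    for _ in range(num_steps):
        u = A @ u
        u /= np.linalg.norm(u)             # rescale back to unit length each step
    return u

# For the next eigenvector, deflate and repeat:
#   lam = u @ A @ u                        # Rayleigh quotient = top eigenvalue
#   A2 = A - lam * np.outer(u, u)          # rank reduced by one
#   power_iteration(A2)                    # second-largest eigenvector
```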
Anyway: ICA. The motivation for ICA is probably best seen through the cocktail party problem. In the cocktail party problem, assume there are d speakers, people who are speaking, and d microphones, placed in different places at different distances, recording the d speakers simultaneously. Let s in R^d be one instantaneous snapshot of the speech happening at the sources, and let x in R^d be the corresponding snapshot of the recordings at the d microphones. We assume that x = A s, where A is commonly called the mixing matrix. In this notation, s^(i)_j is what the j-th speaker is pronouncing at time i (j is the speaker identity and i is the time; I may have said it the other way around at first), and similarly x^(i)_j is the recording in the j-th microphone at time i. So we have d different speakers saying whatever they're saying, and d different microphones recording them. Let's start with d = 2: speaker 1 over here, speaker 2 over there, and microphone 1 here and microphone 2 there, recording what they're saying, placed at arbitrary distances from the speakers; assume there is no lag in the recording. Picture the waveform of what speaker 1 is saying and, below it, the waveform of what speaker 2 is saying, with time progressing to the right. If we take a point in time and record a snapshot, that gives us s^(i) = (s^(i)_1, s^(i)_2); here d = 2, and s^(i)_1 and s^(i)_2 are the two amplitudes at that instant. So you can think of a snapshot in time as one example, and the amplitude of what each speaker is saying at that time as the coordinates of that example. Similarly, each of the two microphones records some linear combination of the two sources, and at time i we get x^(i) = (x^(i)_1, x^(i)_2), the values recorded by the microphones at time i. We assume we have the same number of microphones as speakers, and we observe only the x's. The problem is: given only these d microphone recordings, recover the original speech signal spoken by each speaker. That's the setting we're operating in. What it translates to is: we assume x = A s, so x^(i) = A s^(i), where A is a square matrix because the number of speakers equals the number of microphones. And what we want is to calculate W = A^{-1}, which is called the unmixing matrix: the matrix that can take x and return s, separating the speakers. This is also called the source separation problem. We're given x's, which are different mixtures of the sources, and using just these mixtures we want to construct the unmixing matrix W, which will recover the original speech.
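To make the setup concrete, here is a minimal sketch that generates mixed recordings under these assumptions; the Laplace sources, the particular mixing matrix, and the sizes are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 1000, 2                       # n time steps, d speakers = d microphones

S = rng.laplace(size=(n, d))         # sources: independent and non-Gaussian (Laplace here)
A = np.array([[1.0, 0.6],            # an arbitrary mixing matrix
              [0.4, 1.0]])
X = S @ A.T                          # x^(i) = A s^(i): what the microphones record

W_true = np.linalg.inv(A)            # the unmixing matrix ICA tries to recover from X alone
```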
And the key assumption will be about the sources themselves. So what are the assumptions we've made so far? The first assumption is that the number of speakers equals the number of microphones. The second is that the two are related linearly: s = W x, a linearity assumption. With just these two it is impossible to recover the sources, so we need more. The third assumption, the one that's going to make recovery possible, is that s_j is independent of s_k for j not equal to k. That means s_j has some probability distribution, s_k belongs to another, and the two random variables are independent. Essentially, we're assuming that what one speaker says is independent of what the other speaker says, which may not be a valid assumption in reality: if two people are having a conversation, most likely only one of them is speaking at a time, and so on. But it generally turns out that this is still a reasonable assumption to make. So: the independence assumption. And we need one more assumption to be able to solve the problem. The last assumption is that the s_j are not Gaussian, the non-Gaussian assumption. Why do we need it? Let's look at a few pictures, and hopefully they'll give some intuition. All right, these are basically the slides we saw in the intro lecture; let's hope the audio works. Here we're considering the case d = 2. These are the versions recorded at the two microphones, and these are what the original speakers said. [OVERLAPPING] [FOREIGN]. It's basically two speakers counting from 1 to 10, one in English and the other in Spanish, and this is the recording from the other microphone. [OVERLAPPING] [FOREIGN]. The audio is a little low. Once it's separated, it sounds like this; can people at the back hear it? Okay. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. And the other one, once it's separated: [FOREIGN]. All that was fed to the algorithm was these two mixed clips and nothing else, and by making the independence and linearity assumptions, the algorithm was able to recover the separated versions of the two audio clips. To understand these assumptions, I made a few slides. We assume that the s_j's must be independent, and that each s_j must not be Gaussian. In the figure, the first row corresponds to Gaussian distributions, the second row to Laplace distributions, and the third row to logistic distributions. In the Gaussian case, the x-axis and y-axis represent two independent Gaussians, and each point in the cluster is sampled from the joint distribution whose two components are independent. If we sample from two independent Gaussians, both with mean 0 and variance 1, the cloud of points has a circular shape.
Then we apply linear transforms: the first column of the figure corresponds to one linear transform and the second column to another, each applied to the points sampled on the left. For the Gaussian, the transform in the first column is more or less just a rotation, and the rotated version looks exactly like the original. Under other linear transforms the Gaussian cloud takes on an ellipse shape. If instead we start with two independent Laplace distributions, where source 1 and source 2 are sampled independently from Laplace distributions, the cloud has a diamond shape, and applying linear transforms produces correspondingly skewed versions of it. Similarly, two independent logistic distributions have their own shape and take different shapes under transformation. In ICA, we start with data that looks like the transformed clouds: the microphone recordings, if you plot them, look something like this. And given such data, we want to recover the original untransformed version. With Laplace and logistic sources we can make a good guess: corners correspond to corners, and so on. But in the Gaussian case it's hard to tell; in fact, it's impossible to tell whether the transformed data came from one orientation of the source or another, because the smooth ellipses of the Gaussian carry a rotational ambiguity, and you can't determine what the original shape was. The non-Gaussian distributions don't have that smooth ellipse shape; the intuition to hold onto is that they have corners, and each corner must correspond to one of the original corners. These non-Gaussian distributions do still have the ambiguity that the axes can be permuted: you can construct unmixing matrices that map back to unmixed versions with the axes swapped, because one corner can get mapped to a different corner. And there's also the ambiguity that the axes can be flipped in sign. However, in problems where those ambiguities are tolerable, ICA works very well. For example, if you apply ICA to audio source separation, then once you unmix the mixed clips you recover the original audio, but the identities of speaker 1 and speaker 2 may have shifted; that's okay, because we never knew what the original identities were. Once we run ICA and recover the unmixed versions, there's no guarantee about which source comes first, and there's also the ambiguity that the signs flip; in audio that's generally okay too.
If the sign of your wave file flips, you still get the same audio. So this gives some intuition for the ambiguity that arises only with Gaussians: the spherical shape is characteristic of the Gaussian distribution. For no other distribution does multiplying the PDFs of two independent components give a contour map that is spherical; the spherical shape comes only from a Gaussian. So as long as the distribution of our sources is assumed to be non-Gaussian, we have some hope of recovering the original signal. Yes, there was a question? In theory we seem to care about this last assumption, but in practice no one actually speaks according to such a distribution, so does it ever matter? Yeah, so the question is: in practice, nobody actually speaks according to a Gaussian or a Laplace, so does the assumption matter? It actually matters a lot; in fact, it's the main thing that matters. Even though your data may not follow any of these distributions exactly, look at the next slide: I took the waveforms of the two audio clips I played a few minutes ago and made a scatter plot of the corresponding amplitudes from the two sound clips. In the separated version, even though the data is not Gaussian, it is also not spherical: you can see some elongation along one axis and some along another, and it's not perfectly round. So when we start with the mixed version, which is the scatter plot of the mixed audio clips, the algorithm is able to recover the two independent components even though the data was not distributed exactly according to a Laplace or a logistic. So that's some intuition for why the non-Gaussian assumption is important. With these assumptions, we can now formulate ICA, and there's one final thing to take care of, which is how densities behave under linear maps. Suppose we have a random variable x uniformly distributed between 0 and 1: its PDF is equal to 1 between 0 and 1 and zero everywhere else. Now consider y = 2x. y is distributed between 0 and 2, with density 0 outside that range. What is p(y) between 0 and 2? One half. Why is it half?
Probability distributions require that the area under the PDF integrate to 1. So if we stretch the x-axis, we have to shrink the y-axis by a corresponding amount, because the area under the PDF must integrate to 1, right? Now, what happens in higher dimensions? Instead of a scalar x, let's assume x is a vector, and instead of multiplying by one scalar, we do y = Wx, where W is some matrix. So x is in R^d, W is in R^(d x d), and y is again in R^d. What can we say about p(y)? A tempting guess is p_y(y) = p_x(W^-1 y). Is this correct? It is not correct, and it's not correct for the same reason that, in the scalar case, p_y(y) had to be p_x(y/2) times 1/2. Similarly, here we need to divide by the determinant of W: p_y(y) = p_x(W^-1 y) times 1/|det W|. This determinant factor, which you might have seen in your probability theory course, is also called the Jacobian. Whenever you perform a change of variable on your random variable, the density gets transformed by this extra Jacobian factor, right? Yes, question? [inaudible] Sorry, this should be p_y. Yes, sorry: p_y(y) is p_x(y/2) times one half. Does that make sense? [BACKGROUND] And what did I write here? Similarly, p_y(y) will be p_x(W^-1 y) times 1 over |det W|. So whenever we transform a variable by a linear transform, the density gets this extra determinant factor. With these pieces, we are ready to finish ICA. In ICA, we define p(x) to be the product from j = 1 to d of p_s(w_j^T x), times |det W|. Here W is the unmixing matrix: its jth row w_j, applied to the mixed x, gives us back the jth source, so s_j = w_j^T x. We define the probability of x in terms of p(s), and we make the assumption that p(s) is the logistic distribution. The logistic distribution is the distribution whose CDF is the logistic function. If you remember, that CDF is 1/(1 + e^(-x)); call this sigma(x). So the PDF of the logistic, the derivative of the CDF, is sigma(x) times (1 - sigma(x)), where sigma is the sigmoid, the logistic function. So we assume the sources are distributed according to the logistic distribution, and that lets us write the log-likelihood l(W), where the parameter we are training is W, as the sum over i = 1 to n of [ the sum over j = 1 to d of log of the PDF evaluated at w_j^T x^(i), which is sigma(w_j^T x^(i)) times (1 - sigma(w_j^T x^(i))), plus log |det W| ]. So the only parameter we are interested in here is W.
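Written as code, this objective is short. A minimal sketch, assuming the logistic source model just described (the function names are mine, not from the course code):

```python
# A minimal sketch of the ICA log-likelihood under the logistic source
# assumption: l(W) = sum_i [ sum_j log g'(w_j^T x_i) + log|det W| ],
# where g is the sigmoid and g' = g(1 - g) is the logistic PDF.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ica_log_likelihood(W, X):
    """W: (d, d) unmixing matrix; X: (n, d) mixed observations, one per row."""
    S = X @ W.T                          # rows are the recovered sources s = W x
    g = sigmoid(S)
    log_pdf = np.log(g * (1.0 - g))      # log of the logistic density at each s_j
    _, logdet = np.linalg.slogdet(W)     # log|det W|, computed in a stable way
    return np.sum(log_pdf) + X.shape[0] * logdet
```

np.linalg.slogdet is used instead of taking det and then log, to avoid overflow or underflow when d gets larger.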
Previously, the parameters we were trying to recover happened to be the parameters of the probability distribution itself; for example, we would define the likelihood over mu or Sigma or something like that. Here, the parameter we're trying to learn is this transformation W that takes us back to something that is distributed according to the logistic distribution. And because of the linear transformation, we get this extra log-determinant term from the Jacobian, right? And that's all there is to it. You take this objective, take the derivative, and run gradient ascent until you converge. Taking the derivative gives an update step that looks like this: W := W + alpha ( [1 - 2g(w_1^T x^(i)); ... ; 1 - 2g(w_d^T x^(i))] x^(i)T + (W^T)^-1 ), where g is just the sigmoid and alpha is some learning rate. So this is the update rule; you can perform stochastic gradient ascent, and once it converges, take the converged W, multiply your x's by W, and you get the s's. If you write these s's into a wav file and play them, they'll be separated, okay? It's almost magical; it works. And in your homework you will do it. Here we derived the update rule assuming p(s) is logistic; you will make a small change and try it with a different distribution, assuming the sources are Laplace distributed. And in your homework you have five audio clips instead of two. Run it until it converges, take the resulting converged unmixing matrix, multiply your mixed audio clips through it, and you'll recover new audio clips; play them and they should sound well separated. One final tip for your homework: in the Laplace distribution you will encounter the absolute value of x, and you'll have to differentiate it. What's d/dx of |x|? [BACKGROUND] Yeah, it is split. |x| looks like this: it has slope -1 when x is negative and slope +1 when x is positive. So d/dx of |x| is just sign(x). You'll need that for your homework. You would probably have figured it out, but in case you're struggling, the derivative of |x| is sign(x). With this, you can replace the logistic PDF with the Laplace PDF, take the derivatives, derive a different update rule, run it on the audio clips, and you'll be able to recover them. And yeah, that basically covers PCA and ICA. Any other questions? If you have questions, walk up to the stage here; I'm happy to take them.
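A minimal sketch of that stochastic gradient-ascent loop, under the same logistic assumption; this is meant to illustrate the update rule, not to be the homework's reference implementation, and the names and hyperparameter defaults are mine:

```python
# A minimal sketch of ICA training with the logistic source assumption.
# Update rule: W += alpha * ( (1 - 2 g(W x)) x^T + (W^T)^{-1} ).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ica(X, alpha=0.01, n_epochs=10):
    n, d = X.shape
    W = np.eye(d)                          # start from the identity unmixing matrix
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            x = X[i].reshape(d, 1)         # one mixed example, as a column vector
            W += alpha * ((1.0 - 2.0 * sigmoid(W @ x)) @ x.T
                          + np.linalg.inv(W.T))
    return W

# Usage: S_hat = X @ ica(X).T gives the recovered (unmixed) sources, up to
# the permutation and sign ambiguities discussed above.
```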
Stanford CS229: Machine Learning, Summer 2019 (Anand Avati). Lecture 5: Perceptron and Logistic Regression.
Okay. Welcome back, everyone. Let's get started. We're in the fifth lecture today, and today we're going to switch over to classification. Before we switch over, let's do a quick recap of what we've covered. We covered the prerequisites and we started with supervised learning, which is basically learning a mapping from X to Y. We started with regression and looked at the very first regression algorithm, linear regression, where we're trying to learn a mapping from some example x, which lives in a d-dimensional space, to an output y, which is a real value. And we limited ourselves to hypotheses, or models, belonging to the family of the form theta^T x, where theta is a parameter that identifies the specific member of our hypothesis family. The desired outcome is that X theta, where X is the design matrix and y is the vector of all outputs, is as close as possible to y. But in general this will never hold exactly; X theta will almost surely never be exactly equal to y. The method we fall back on is to define a cost function: X theta minus y is a vector, and we take its squared norm. The squared norm is exactly the sum of the squares of the individual terms, the sum of the errors of each example, right? And we want to minimize this sum of squared errors by choosing an appropriate theta. Okay? We saw two different kinds of solutions for that: numerical solutions and a closed-form solution. Among numerical solutions we saw two methods, gradient descent and stochastic gradient descent. The key difference between the two is that with gradient descent we consider all examples for every step. These are iterative algorithms: we start with a random initialization of theta, and at each time step we update theta by some amount. In gradient descent, each update considers all the examples; in stochastic gradient descent, we consider one example at a time. The way we perform the update is that theta at the next time step is theta at the current time step minus alpha, some step size, times the gradient of the loss. And this happens to be the gradient for the squared loss (I probably forgot a half over here for this to be exactly the gradient). Another way of writing the same thing: in the first notation we have different variables for theta_t and theta_(t+1), but if you're implementing this in a computer program, you would have one variable for the parameters and keep updating it over and over. So when we use the := notation, it means you're performing an in-place update of the variable, which makes more sense if you think about a computer algorithm, whereas the other form makes more sense as a mathematical expression. So you update theta in place: theta := theta + alpha (y^(i) - theta^T x^(i)) x^(i). These two forms are exactly the same; I changed the minus to a plus and therefore swapped the order, and theta^T x is the hypothesis. This is an important equation; pin it in your memory.
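As a concrete version of that in-place update, here is a minimal SGD sketch for linear regression; the function name and hyperparameter defaults are arbitrary choices of mine:

```python
# A minimal sketch of the stochastic gradient descent update for linear
# regression: theta := theta + alpha * (y_i - theta^T x_i) * x_i.
import numpy as np

def sgd_linear_regression(X, y, alpha=0.01, n_epochs=50):
    n, d = X.shape                         # X is the design matrix, one example per row
    theta = np.zeros(d)
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            residual = y[i] - theta @ X[i]     # y_i - h_theta(x_i)
            theta += alpha * residual * X[i]   # in-place update, one example at a time
    return theta
```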
We're going to see this pattern over and over again. And then there is the other solution, the closed-form solution. It exists for linear regression, but it may not exist for other kinds of models or algorithms, in which case you'd use gradient descent or stochastic gradient descent. For linear regression alone, you can start with the cost function, do some matrix calculus (take the derivative, set it equal to 0), and you end up with the normal equations, which you can invert to get the closed-form expression for theta. This is not an iterative algorithm: you calculate this and, bam, you have the final solution. You don't have to iterate, whereas with numerical solutions you iterate until you hit some kind of convergence condition, which means your theta has stopped changing much or your cost has stopped decreasing. There are different kinds of convergence conditions, which we discussed in the last class, but the numerical methods iterate until convergence, while the closed form is a one-step algorithm: you calculate the final solution and you're done. Okay. And we also looked at a couple of interpretations of linear regression. The first: assume Gaussian noise for each example, with mean 0 and some constant variance sigma squared. The variance actually does not matter for our calculation because we assume it to be a constant; it could be big or small, as long as it's the same constant variance for all examples. And we assume that the observed y is some true value plus noise: think of theta^T x as the signal and the noise term as the noise, so you are making a noisy observation. The goal of linear regression is, given X's and y's (we don't know what the individual noise terms are), to recover theta. We want to weed out the noise and extract the signal; that's the general intuition. With this assumption, we get a likelihood expression: p(y | x; theta), the density of y given x and theta, is this Gaussian distribution. And when we take the log-likelihood, we saw that even though the Gaussian distribution has all these terms, the log and the exponent cancel, the remaining pieces are constants across all examples, and the log-likelihood essentially boils down to just the squared error between the prediction and the observed value. And we perform maximum likelihood estimation; we've seen maximum likelihood a couple of times so far. Maximum likelihood is: define the likelihood across all the examples, and because of the independence assumption the joint likelihood is just the product of this term across all examples. Then take the gradient of the log-likelihood with respect to theta and set it equal to 0, or solve it with gradient descent. Right?
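Both routes end at the same theta on a given dataset: the one-step closed-form solve and iterative gradient descent on the squared error. A minimal sketch of the closed-form route, assuming X^T X is invertible:

```python
# A minimal sketch of the closed-form (normal equations) solution,
# theta = (X^T X)^{-1} X^T y.
import numpy as np

def normal_equations(X, y):
    # np.linalg.solve is preferred over explicitly inverting X^T X,
    # for numerical stability.
    return np.linalg.solve(X.T @ X, X.T @ y)
```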
And what we saw was that performing maximum likelihood estimation was exactly the same as minimizing the squared-error loss. There's a negative sign here, so maximizing the likelihood is the same as minimizing the squared error, if you assume your noise to be Gaussian. In general, if you define your objective to be the maximum likelihood objective, it has a one-to-one correspondence with a loss equal to the negative log-likelihood: take the log-likelihood, flip the sign, and that is your loss function. In this case, take the likelihood, take the log, flip the sign, and you get exactly the squared error. And this holds for any probabilistic model, not just one that assumes Gaussian noise; whatever the model, define the likelihood, take the log, flip the sign, and that's the loss function. Okay? Now, we saw yet another interpretation, the projection interpretation. We would like X theta to equal y, but that's never the case, because y lives in the n-dimensional space of all outputs while X theta can only reach the column space of X, a subspace that goes through the origin. The observed y will almost surely never lie exactly on that subspace, because we are adding noise. If the noise were 0, you would expect y to be some linear combination of the columns of X; because there is noise, y will be outside the subspace. So instead we project y onto the subspace; call the projected point y-hat. Once you project it, you can actually solve X theta exactly equal to y-hat, because y-hat lies in the column space. The way you project is to multiply y by the projection matrix: y-hat = X (X^T X)^-1 X^T y. Now X theta will actually be exactly equal to this projected point, and because they're exactly equal, you can cancel X and get theta = (X^T X)^-1 X^T y, which is the normal equation we saw there. Right? So that's what we covered. Any questions before we move on to classification? Yes, question. How can we be sure that X^T X is invertible? So, how do we know X^T X is invertible? For now we will just assume it is, and we'll come up with remedies later in the course for when it's not. Yes, question. [inaudible] So the question is: if the observations are not independent, does this theory still hold? If the observations are not independent, none of this theory holds, right? All of this theory makes the IID assumption; only because of it can you multiply the likelihoods together, take the log, and decompose it into separate sums. If they're not independent, it becomes much more complex to analyze.
However, in practice, even if examples are related, these methods tend to work well. The intuition to have is that if your examples are not independent but somewhat correlated, your effective dataset size becomes smaller. Think about it: if you have a dataset of 100 examples where all 100 are exact copies of each other, then even though you have 100 examples, they are not independent, they are all exactly the same, and your effective dataset size is just 1, right? Similarly, if your examples are not independent, your effective dataset size is reduced, and yes, the theory does not hold; in practice, think of it as a smaller dataset. [NOISE] It is much harder, and most of the theory assumes you have independent observations. However, later in the course, for example in reinforcement learning, there is clearly dependence across time steps: playing a game of chess, what you do in the next move depends a lot on what you did in the previous move. In those cases, yes, we give up the IID assumption, and you get different kinds of theory there. Yes, question. [BACKGROUND] I'm sorry? So the question is: are there cases where you cannot use numerical solutions? Generally, numerical solutions always exist; it's the closed-form solutions that may or may not exist. That's generally the case. All right, let's move on. So the plan for today: we're going to cover the perceptron algorithm, which corresponds to section 6 in the notes; then logistic regression, which is section 5; and Newton's method, section 7. Time permitting, we will do a review of some ideas from functional analysis, basically as a way to prepare ourselves for some future topics. There is also one section in the notes on locally weighted regression, section 4; we might revisit that later, but for now we're going to keep moving. Right, so that's the plan for today, and we'll start with the perceptron algorithm. The perceptron is a very simple classification algorithm where you're given examples x in R^d, and the correct answer, the label or target, is either 0 or 1 for each example. We study this algorithm mostly for historical interest, and also because it's very easy to analyze and helps build good intuitions. In practice, I don't know anybody who uses the perceptron algorithm, but as I said, it's a very simple algorithm for building intuitions about how many classification algorithms work. In the perceptron algorithm, the hypothesis is h_theta(x) = g(theta^T x), where g(z) = 1 if z >= 0 and g(z) = 0 if z < 0. So g is a step function: think of it like this. This is z, this is 0; g takes the value 0 when the input is less than 0, and the value 1 otherwise.
Yeah, and you can actually define it with greater-than-or-equal-to or with strictly-greater-than; it doesn't matter much. Let's see how it is in the notes. I think in the notes it is greater than or equal to, so let's assume that, which means at 0 it is actually 1. So what's happening here? The hypothesis for the perceptron algorithm has some parameter theta, and given an input vector x, we compute theta^T x, just like in the case of linear regression. theta^T x can take on any real value between minus infinity and plus infinity; it's just the dot product of two vectors. Now we take this real-valued number and map it to either 0 or 1: if theta^T x is less than 0, we map it to 0; if it is greater than or equal to 0, we map it to 1. Okay? And with this hypothesis, the algorithm looks something like this. The perceptron algorithm is also called a streaming algorithm, which means it is designed to work in a setting where you encounter examples one after another. You don't have access to all the examples up front; you're given one example at a time, you update your model, then you wait for the next example, update your model again, and so on. The algorithm for training the perceptron looks like this: set theta to some initialization, generally just the zero vector, which works totally fine, and for i in 1, 2, and so on, over the stream of examples coming at you, perform theta := theta + alpha (y^(i) - h_theta(x^(i))) x^(i). Okay, what's happening here? First, we see that the update rule for the perceptron looks very similar to the update rule of linear regression, but they're not actually the same. What's different between the two? h_theta(x) is different. They look superficially similar, in that given the predicted value and the actual value the update rules look alike, but the hypotheses are different: in linear regression your hypothesis was just theta^T x, and here it is g(theta^T x). And there is theory to prove that if your dataset is actually linearly separable, then this algorithm will find a separating hyperplane, and your hypothesis will be able to classify all the points. Just looking at this form, it may not be entirely clear how or why this works, so let's visualize it; there is also a minimal code sketch of the loop right below. For the visualization, as in all these examples, I'm assuming your x includes the intercept term, so think of it as d+1 dimensions, or in general x in R^d where d already includes the intercept and your true data was (d-1)-dimensional; we don't stress about that too much. And let's assume your theta vector has some current value.
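Here is the promised minimal sketch of the streaming loop (the names are hypothetical, and it assumes each x already carries the intercept term as described above):

```python
# A minimal sketch of the streaming perceptron update.
import numpy as np

def g(z):
    return 1.0 if z >= 0 else 0.0      # the threshold function from above

def perceptron(stream, d, alpha=0.1):
    theta = np.zeros(d)                # start from the zero vector
    for x, y in stream:                # examples arrive one at a time
        h = g(theta @ x)
        theta += alpha * (y - h) * x   # no change when the example is already correct
    return theta
```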
Let me use a different color. Let's assume your theta vector is here; call it theta_t. Now, it's obvious from this formulation that what really matters, for a given new x, is whether theta^T x is greater than or equal to 0 or less than 0. The actual value doesn't matter: no matter how much bigger than 0 it is, it gets mapped to 1, and no matter how much smaller, it gets mapped to 0. So what really matters is the sign of theta^T x, which means theta^T x = 0 forms a hyperplane that is meant to separate your data points: you want all the points with y = 1 to satisfy theta^T x >= 0, and all the points with y = 0 to satisfy theta^T x < 0. [NOISE] Now, if theta is this vector, this hyperplane corresponds to theta^T x = 0, and because theta is oriented this way, all points on this side satisfy theta^T x > 0, and all x's on the other side satisfy theta^T x < 0. Is this clear? The idea is that any x lying on this line is perpendicular to theta, so theta^T x equals 0. If you have an x over here, the angle between theta and x is less than 90 degrees, so theta^T x is positive; if you have an x over there, the angle is more than 90 degrees, so theta^T x is negative. And the line exactly perpendicular to theta is the decision boundary. Is this clear? Okay. Right. Now, another intuition we want to have: assume you have two vectors, a and b. Here we go: this is vector a, and let's say this is vector b, and a^T b is some value; in this case, a value less than 0, because the angle between them is more than 90 degrees. [NOISE] And suppose a is our theta and b is some x whose label is 1. We want theta^T x to be positive, but right now it is negative. If the desired value of the dot product a^T b is bigger than its current value, is there a way we can modify one of the two vectors to make the dot product larger? That is basically the question. One simple answer: take the vector a (think of a as your theta) and add a small amount of the other vector to it, which means at the end of a we attach alpha times b.
So a + alpha b is this vector, right? [NOISE] And we notice that by adding some amount of b to a, the angle between them got smaller, so we can expect the dot product to grow: (a + alpha b)^T b = a^T b + alpha b^T b, and since b^T b is always positive, this is greater than or equal to a^T b. So if we have two vectors, theta and x, or a and b, and we want the dot product between them to be bigger than it currently is, one way to achieve that is to add a small amount of one vector to the other. Similarly, if the dot product between two vectors is bigger than you would like it to be, you can subtract a small amount of one vector from the other: you just flip the sign, and since b^T b is always positive, the subtracted term makes the updated dot product smaller. So what does that mean with respect to our update rule? Suppose y is 1 and the current value of h_theta(x) is also 1, which means that according to the current parameters we are correctly classifying the ith example. Then y - h_theta(x) is 0 and you don't modify theta at all: if you encounter an example that's already correctly classified, don't do anything. Similarly, if the current example has label 0 and h_theta(x) outputs 0, don't do anything. If you encounter an example that's already correctly classified by your half-trained model, do nothing. However, if you encounter an example whose correct label is 1 but we are outputting 0, then 1 - 0 = 1, so add some amount of x to theta. Similarly, if the correct label is 0 and we're outputting 1, a mistake in the other direction, then 0 - 1 = -1, so subtract some amount of that vector from the current parameter theta. If you start your theta vector at 0, then the theta you end up with is going to be some linear combination of your x's: in the update, alpha and (y - h_theta(x)) are scalars, while x is in R^d and theta is in R^d, so the basis vectors we are accumulating are just the examples themselves. What we're doing is adding small portions of our examples to construct theta, and we add them in such a way that if the current theta misclassifies an example, we either add or subtract it. For example, suppose we encounter some x over here which is labeled 0, [NOISE] so y = 0, and theta^T x is going to be greater than 0 because the angle is less than 90 degrees. Are we going to add? No, we're going to subtract, because the label is y = 0. We're going to subtract some amount of this vector from theta.
So what does that look like? This is the vector; take a small amount of it and subtract it from theta, and [NOISE] this gives theta_(t+1). And the decision boundary corresponding to theta_(t+1) will be this, right? So the example that was classified as positive under the blue hypothesis is, after performing that update, correctly classified as negative under the updated hypothesis. Yes, question? [inaudible] So, why is the separating hyperplane always orthogonal to theta? Theta is the current parameter vector, [NOISE] and if you look over here, theta^T x = 0 is the decision boundary: if theta^T x is bigger than zero the example is considered positive, and if it's less than zero it's considered negative. So you can think of theta^T x = 0 as the separating boundary, and if theta points in a particular direction, the set of all x's that give theta^T x = 0 is exactly the set of points perpendicular to theta, because the dot product of two perpendicular vectors is zero. Does that make sense? Yes, question? [inaudible] So, in practice you never use the perceptron algorithm; it is hardly used in practice. However, it is very useful to analyze this algorithm and to get some intuitions about how the update rule works, because this update-rule pattern is going to come up again and again. Okay. So this is the perceptron algorithm. There is theory which says that if you're given a dataset where the x's can actually be linearly separated according to their classes, then this algorithm, which looks very simple, will actually find a separating hyperplane. Yes, question. [inaudible] So the question is: will we need to revisit some examples again? If you have a very small dataset, you may want to iterate over it a few times. But if you have a large enough dataset, generally just making one pass over the dataset should be sufficient. Yes, question. [inaudible] So the question is: aren't there an infinite number of correct solutions for theta for a given dataset? The answer is yes, there are an infinite number of solutions, and this algorithm will find one of them, some solution. Any other questions? Yes, question. [inaudible] Will the solution change according to the order of the examples? Yes, it can change according to the order of the examples, and that's fine, because we are generally interested in some solution that works. Yes, question. [inaudible] That's correct; the question is whether we use one example at a time rather than all of them. Yes: think of this as similar to SGD. Take one example, perform an update, pick another example, perform an update. Cool. So that's the perceptron algorithm. Now let's move on to logistic regression. [NOISE] Yes? Does the order of the examples matter? So the question is: does the order matter?
The theory of perceptrons tells you that if you have a large enough separable dataset, you can go through it in any order and you will find some solution. So the order does not prevent you from finding a solution. Will you get a different answer? Yeah, you may get a different answer, but you'll still find some solution. [NOISE] Why don't we use this in practice? Why don't we use this in practice? We don't, because there are many other algorithms that work better. And, if you don't mind, remember that question and ask it again after logistic regression, so that we can draw some direct comparisons. [NOISE] But the key takeaway from the perceptron algorithm is this idea: if you have two vectors and their dot product is smaller than you would desire it to be, take a small amount of one of the vectors and add it to the other; that makes the dot product bigger. And similarly, if the dot product is larger than you would like it to be, take a small amount of one vector and subtract it from the other. And how do we determine alpha? How do we determine alpha? Just the way you determine alpha for gradient descent: take a small value and it'll be fine. [NOISE] All right. Logistic regression. [NOISE] To start off with logistic regression: it would not be an overstatement to say that logistic regression is the workhorse of machine learning. It is probably one of the most commonly used algorithms in practice, in production. Go to any company in Silicon Valley: chances are, if they have some kind of classification problem, it's far more likely they're using logistic regression than any other algorithm. It's surprisingly effective, and it should probably be one of the first algorithms you try for a classification problem you're trying to solve. Always start with logistic regression, and more likely than not it's just going to work pretty well. In logistic regression, the setting is very similar to the perceptron algorithm: we have x's in R^d and y in {0, 1}. When y^(i) = 1, we call it a positive example, and when y^(i) = 0, we call it a negative example. Okay? And this is a description of the data: positive and negative do not mean the model is classifying the example correctly or incorrectly; it's just what we call the examples. If the label is 1, we call it a positive example; if the label is 0, a negative example. Right. And the hypothesis for logistic regression looks like this: h_theta(x) = g(theta^T x), again similar to the perceptron's hypothesis, where h_theta(x) was also g(theta^T x). The difference is that g in this case is defined as follows: g(z), where you can think of z as the value you get from theta^T x, is defined as 1/(1 + e^(-z)). This function, g(z) = 1/(1 + e^(-z)), is called the logistic function, which is basically why this algorithm is called logistic regression.
And it actually looks somewhat like the perceptron's g function. g(z) looks something like this: suppose this is z, this is 0, and this is 1. The limit of g(z) as z tends to minus infinity is 0, and the limit of g(z) as z tends to plus infinity is 1. And this curve over here is g(z) = 1/(1 + e^(-z)). If you compare this with the perceptron's g, which was exactly 0 for negative z and exactly 1 when z was greater than or equal to 0, you can think of the logistic function as a soft version of the perceptron's g. And now, just as in the case of linear regression, we define a likelihood. We set p(y^(i) = 1 | x^(i); theta) = h_theta(x^(i)), and p(y^(i) = 0 | x^(i); theta) = 1 - h_theta(x^(i)). What this means is: for any given example x, we take the theta vector we have, compute the dot product to get some z value, and run it through g(z). That gives a value between 0 and 1, and the claim is that because it's a value between 0 and 1, we treat it as a probability, because probabilities are between 0 and 1. So the probability that y equals 1 given x and theta is defined to be h_theta(x). Suppose we have an x whose theta^T x is plus 10: z is going to be plus 10, and g(z) is going to be something very close to 1. Similarly, if you have an example x whose theta^T x is, say, minus 10, it's going to be over here, and g of that is going to be a value close to 0. And if you have an example x whose theta^T x equals 0, then g(0) = 0.5. So the interpretation is that the output of h_theta(x) will be closer to 1 if the model thinks it is a positive example, and closer to 0 if the model thinks it is a negative example, because if it thinks it's a positive example, you expect theta^T x to be much bigger than 0, and if theta^T x is much smaller than 0, the model gives it a very low probability. And the probability that y equals 0 is always going to be 1 minus this: if this height is p(y = 1 | x), the remaining height is p(y = 0 | x), and because this is a binary setting, the two probabilities must always sum to 1. The height of this function is the probability the model assigns to the label of that example being 1. So the correct way to think of logistic regression is that it is a probability machine; it's not actually a classifier. For any given example, it outputs a probability, and then it is up to us to choose some kind of threshold, and a common threshold is 0.5. Right?
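A minimal sketch of that probability-machine-plus-threshold view (the names are mine):

```python
# Logistic regression outputs a probability; a chosen threshold (commonly
# 0.5) turns it into a classifier.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, x):
    return sigmoid(theta @ x)          # p(y = 1 | x; theta)

def predict(theta, x, threshold=0.5):
    return 1 if predict_proba(theta, x) >= threshold else 0
```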
It is up to us to choose some kind of threshold, check whether the probability that was output is higher or lower than that threshold, and convert it into a classifier. Right? So logistic regression plus some chosen threshold gives you a classifier; logistic regression by itself just outputs probabilities. But it's common to call logistic regression a classification algorithm, because we usually just assume the threshold is 0.5. Right. So this is our hypothesis. Now, based on these two expressions, we're going to define the likelihood function. Instead of writing a different expression for different values of y, we write it in a compact way: p(y | x; theta) = h_theta(x)^y times (1 - h_theta(x))^(1 - y). What's happening here? If y = 1, the exponent of the second factor is 1 - 1 = 0, and anything raised to the power 0 is 1, so that factor contributes nothing: the expression evaluates to h_theta(x). In the cases when y = 0, the exponent of the first factor is 0, so it contributes 1, and you get 1 - h_theta(x). Think of this as how you would write an if-else in a programming language; this is the mathematical version of writing an if-else. If y = 1, it evaluates to the first expression; if y = 0, to the second. It's basically the if-else written compactly. Any questions? Yes? [inaudible] Is this another way to write an indicator function? Yes, this is another way to write an indicator function, in a more compact way. Yeah, think of it as an indicator function. Yes, question? [inaudible] [NOISE] Okay, can you please repeat the question? [inaudible] So, if I understand the question, you're asking whether there is a version of this algorithm where we take the predicted h_theta(x) as a probability and sample y from it, and act based on that. You can do that. But it's far more common in practice for people to choose some kind of threshold and have a deterministic output. The version of the algorithm you describe would behave randomly, and people don't generally like algorithms that behave randomly; they want their algorithms to be deterministic. So yes, you can absolutely have a randomized classifier where you take the predicted probability and sample a y from it. Theoretically that's perfectly fine, but in practice people just don't do that. Yeah, good question. So this is how we define p(y | x). In the case of linear regression, we came up with a p(y | x) which had a Gaussian form. In the case of [NOISE] logistic regression, this is actually a Bernoulli distribution: think of h_theta(x) as the Bernoulli parameter p, and y as the outcome. And given this, we can now define the likelihood function for the full dataset: L(theta) is the product over i = 1 to n of p(y^(i) | x^(i); theta). Right?
And we're able to break down the likelihood of the full dataset into a product over individual examples because of the IID assumption. Right? What do we do next? Just as before, we take the log-likelihood, log L(theta), which is also commonly written l(theta). The product turns into a sum over i = 1 to n, and within each term, the product of the two factors breaks into a sum of two logs: log of the first factor plus log of the second factor. Right? Yes, question. [inaudible] Sorry, can you please repeat your question? [inaudible] So you're asking: should we also have p(x), that is, p(y | x) times p(x), which would be the joint p(y, x), parameterized by theta everywhere? That's a very good question: why is our likelihood function p(y | x) and not p(y, x)? Why aren't we maximizing the joint probability of x and y? What we will see later in the course is that for many algorithms we do maximize the joint as our objective, and those algorithms are called generative algorithms, or generative models. The algorithms which limit themselves to modeling y given x are called discriminative algorithms. [NOISE] For now, the intuition to have is that when you deploy this model, somebody is going to give you x; you don't have a choice of deciding what x is, x gets thrown at you at production time, and you want to make a prediction of y given that x. That is why we are mostly interested in p(y | x): making sure we're able to output a meaningful value of y when x is simply given to us. Okay? There are algorithms where we do model the joint; for now, let's just focus on discriminative algorithms this week. But good question. [NOISE] All right, back to the log-likelihood. The log broke this product down into a summation, and expanding each term gives l(theta) = sum over i of [ y^(i) log h_theta(x^(i)) + (1 - y^(i)) log(1 - h_theta(x^(i))) ]. This is our log-likelihood, which we want to maximize, because we want to do maximum likelihood. You can also, optionally, formulate a loss function by taking the negative of this and thinking of it as a cost function you want to minimize, which is exactly the same as maximizing this. And then we apply gradient ascent (equivalently, gradient descent on the negative log-likelihood). Once you define the log-likelihood of the parameters given the data, once you're able to write it out in this form, it's just rote calculus: follow the same template, calculate the gradient, start your theta at some random initialization, and update it in the direction based on the gradient. Right?
And you do that over and over until you converge. It's the same algorithm once you define the likelihood function, okay? So let's calculate the gradient. In order to calculate the gradient, we first observe a fact about g(z) = 1/(1 + e^(-z)); in practice, z will be theta^T x. What is the derivative of this? You can look this up in the notes, but anyway, let's just do it. The derivative of 1 over something is minus 1 over that thing squared, times the derivative of the thing, and the derivative of 1 + e^(-z) is -e^(-z). So g'(z) = -(1 + e^(-z))^(-2) times (-e^(-z)) = e^(-z) / (1 + e^(-z))^2. We can split this as [1/(1 + e^(-z))] times [e^(-z)/(1 + e^(-z))], and by adding and subtracting 1 in the numerator of the second factor, e^(-z)/(1 + e^(-z)) = (1 + e^(-z) - 1)/(1 + e^(-z)) = 1 - 1/(1 + e^(-z)). So we see that g'(z) = g(z)(1 - g(z)). The derivative of the logistic function is equal to the logistic function times one minus the logistic function. All right. Is there a question? No question. Using this, let's calculate the gradient of the log-likelihood. [NOISE] Let's just look at one example for now, with a single x and y; across the whole dataset it's just a summation over examples, so I'm going to use x and y without indices. l(theta) = y log g(theta^T x) + (1 - y) log(1 - g(theta^T x)), since h_theta(x) is just g(theta^T x). That's just the log-likelihood written for one example. Okay. And now [NOISE] the gradient of this will be y times 1/g(theta^T x), because the derivative of log u is 1/u, times g'(theta^T x) times x, plus (1 - y) times 1/(1 - g(theta^T x)) times (-1) times g'(theta^T x) times x. Just the chain rule. Any questions on this? Yes? [inaudible] How does the x appear in the derivative? That's from the fact that the derivative, with respect to theta, of theta^T x is equal to x; we reviewed this in class 2 or so, in matrix calculus. [NOISE] Right. Now substitute g'(theta^T x) = g(theta^T x)(1 - g(theta^T x)): the first term is y times 1/g(theta^T x) times g(theta^T x)(1 - g(theta^T x)) times x, and the second term is (1 - y) times 1/(1 - g(theta^T x)) times (-1) times g(theta^T x)(1 - g(theta^T x)) times x. Here, g(theta^T x) cancels in the first term, and 1 - g(theta^T x) cancels in the second. So what are we left with? y times (1 - g(theta^T x)), plus (1 - y) times (-1) times g(theta^T x); I think I forgot a "times x" in a couple of places, so: the whole thing times x. x is the only vector-valued quantity here; everything before it evaluates to a scalar. Whenever you're doing matrix calculus, always keep track of the dimensions.
The left-hand side, the gradient, is a vector, and it's a vector because x is a vector; everything else here evaluates to a scalar. [NOISE] So this becomes [y(1 - g(theta^T x)) - (1 - y) g(theta^T x)] times x. Expanding, you get y - y g(theta^T x) - g(theta^T x) + y g(theta^T x), the whole thing times x, and the two y g(theta^T x) terms cancel, so the gradient is (y - g(theta^T x)) x, and g(theta^T x) is just h_theta(x). So the gradient of the log-likelihood with respect to theta is (y - h_theta(x)) x, which means the update rule will look like theta := theta + alpha times the gradient, that is, theta := theta + alpha (y - h_theta(x)) x. Right. So we basically got the same update rule as the perceptron algorithm, and the same update rule as linear regression, where the only thing that changes is the definition of h_theta(x). And again, this is not a coincidence; you will see why it's not a coincidence, but they do look very similar. Any questions on this? Yes, question? [inaudible] Good question: was there a reason why we chose the logistic function? Could we have chosen something else? The answer is that the logistic function is a natural choice, and you're going to see why it's natural in the Friday lecture, when we study generalized linear models. You can in general choose another function instead of 1/(1 + e^(-z)), any function that maps to values between 0 and 1, define a likelihood based on that version of g, take the derivatives, and the update rule might look different. By choosing the logistic function of this particular form, we get an update rule that looks similar to the other update rules, and we'll see the connection in the next lecture. [inaudible] Will we deviate from linearity? I'm intentionally not going to go into more detail there; I suggest you wait. All right. So this is logistic regression. We got an update rule, and based on this update rule we can perform gradient descent or stochastic gradient descent. If it is stochastic gradient descent, we sample one example randomly, plug in the values of that example, and update our theta. If it is full gradient descent, there's a summation across all your examples. Yes, any questions? Yes. [inaudible] So the question is: do we look at all examples at once, or one by one like the perceptron? That depends on whether we choose to optimize this with gradient descent or stochastic gradient descent. With gradient descent, we sum over all the examples. With stochastic gradient descent, we pick one random example at a time, apply this update rule, update theta, then pick another example, update theta again, and so on. Yes, the question is: are we looking at binary classification? Yes, for now we're just looking at binary classification, true or false. Right?
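A minimal sketch of full-batch gradient ascent on this log-likelihood, using the gradient we just derived; the hyperparameter defaults are arbitrary choices of mine:

```python
# Full-batch gradient ascent on the logistic regression log-likelihood,
# whose gradient is sum_i (y_i - h_theta(x_i)) x_i.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression(X, y, alpha=0.1, n_iters=1000):
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iters):
        h = sigmoid(X @ theta)             # predicted probabilities for all examples
        theta += alpha * X.T @ (y - h)     # ascend the log-likelihood
    return theta
```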
And again, the thing to note here, if you want to compare it to the perceptron, is this: in the perceptron algorithm, assuming we start theta from a vector of 0s, we were basically adding or subtracting small fractions of our examples to theta to make progress, each multiplied by some amount alpha, and we only performed an update if we got the example wrong. But in logistic regression, h_theta(x) takes some value strictly between 0 and 1; it looks something like this and will never be exactly equal to 1 or 0. Which means at every step we always take some amount of x, based on the difference between y and h_theta(x), since h_theta(x) is never exactly 0 or 1. Okay. So you can think of this as a soft version of the perceptron algorithm: instead of deciding that we got an example correct and just discarding it, we learn from every example, by an amount depending on the degree to which we got it right. Yes, question? [inaudible] Yeah, good question. So the question is: we calculated theta during the training phase, and at test time we get a new example x; how do we classify it as 0 or 1, positive or negative? The answer is: first go through the math, compute theta^T x, run it through the logistic function, and we get a value between 0 and 1. Think of that as the probability the model assigns to the correct answer being 1. Then in practice we use some kind of threshold, commonly 0.5: if the predicted probability is greater than that threshold, we treat the example as positive. And as another student suggested some time ago, you could also sample a label based on that probability: if the predicted probability was 0.99, then most of the time you would sample y = 1, and that can be your prediction, in which case your model is a randomized model; with a threshold, it's a deterministic model. Good question. Yes? [BACKGROUND] Yes. So the question is: in the case of the perceptron, we can think of the resulting theta as a linear combination of just the misclassified examples from the training phase, whereas with logistic regression it's going to be some linear combination of all your examples. Exactly, and the weights of the combination depend on how correct or wrong we got each example during the training phase. Yes? [BACKGROUND] Yeah. Yes. So the question is: during training, when we encounter an example and get it really wrong, say we predicted a very low probability when the correct answer was 1, then the difference is going to be big. Exactly. Right.
So that's logistic regression, and you'll be implementing this in your homework. But in the homework, if I remember correctly, you'll be implementing it with Newton's method. So here's another optimization algorithm that you may use in place of gradient descent. In gradient descent, we took the gradients and adjusted our theta values in the direction the gradient pointed us. Instead, there is another algorithm called Newton's method; think of it as an alternative to gradient descent or stochastic gradient descent. The intuition for Newton's method is something like this. Let's draw some pictures. Assume we're trying to minimize some kind of loss function, and say the loss function is a curve with a minimum; as before, assume a one-dimensional input. What does the derivative (not the gradient, the derivative), call it l'(θ), look like? To the left of the minimum the derivative is negative, so in that region it sits below zero, and to the right of that point it is positive. Does that make sense? So one curve is our loss function, and the other is the first derivative of our loss function. What we want to do is find the point where the loss function is minimum, which means find the point where the derivative crosses 0. Now, Newton's method is a way in which, for any given function f(x), you can find those values of x for which f(x) = 0. Those input values at which a function evaluates to zero are called the roots of the function, so Newton's method is a root-finding method: for any given function f, it finds the input values at which the function evaluates to 0. And what we're going to do is apply Newton's method to the first derivative of our loss function. The way Newton's method works is this. Start with some random initialization, just like gradient descent; call it θ₀. Newton's method approximates the function by a linear function that is equal to the given function at that point. Maybe I'll use a different color: the black line is the curvy function and the blue line is the straight line, the linear approximation of the first derivative at θ₀. And once you have a linear function, finding the root is very easy. So in Newton's method, what we do is approximate the function by its linear approximation at the current point, solve for the root of that linear function, and take that root to be your θ₁. And from that point, repeat the process.
Construct a new approximation; I'm going to mark iteration 2 with red. Over here the linear approximation might look like this; solve it for the second approximation, and that's θ₂, and so on. And once you get close, Newton's method converges extremely fast to the true answer. Compare with what we did in gradient descent: there, we started over here, the gradient pointed us in the rightward direction, we took a small step, saw the gradient was still nonzero, took another small step, and kept taking small steps until we reached the minimum. In Newton's method, at each step we approximate the first derivative by a linear function, solve that linear function equal to 0, and jump to that point: from θ₀ we jumped to θ₁, and from θ₁ we jumped to θ₂. And what we see is that the rate at which the sequence of thetas converges to the true θ*, the global minimum, is much, much faster for Newton's method than for regular gradient descent. Yes, question? [inaudible] So the diagram was a little small; let me draw a better version. Assume this curve to be just the first derivative. This is the derivative; there is a corresponding true loss function, but this curve is the derivative, not the loss. Start at some θ₀. The curve's value there is somewhere over here; we approximate it by a linear function (the blue line is the first linear approximation), and where that line crosses zero is θ₁. Similarly, this is l'(θ₁); construct yet another linear approximation at the new point, and where the second one crosses zero is θ₂. Does that make sense? At every point, construct a linear approximation, solve it, and the solution is the next location for our theta; that's where we move to. Yes, there's a question. [inaudible] So the question is, what happens if there is a non-convex function? I'm going to come to that. All right. And the update rule for Newton's method is: supposing we call this function f(θ), the update rule is θ_(t+1) = θ_t - f(θ_t)/f'(θ_t). That is the update rule for Newton's method. And our f was the first derivative of our loss function, so if we plug in our loss function here, what we get is θ_(t+1) = θ_t - l'(θ_t)/l''(θ_t). Does that make sense? Yes, everything is evaluated at the value of θ_t. A small sketch of this scalar update follows below.
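Here is a minimal sketch of that scalar update rule (illustrative code, not from the homework). To minimize a loss l, you pass in f = l' and f_prime = l'':

def newton_scalar(f, f_prime, theta0, n_iters=20):
    theta = theta0
    for _ in range(n_iters):
        theta -= f(theta) / f_prime(theta)  # jump to the root of the tangent line
    return theta

# Minimize l(theta) = (theta - 3)^2, so f = l' = 2(theta - 3) and f_prime = l'' = 2.
print(newton_scalar(lambda t: 2 * (t - 3), lambda t: 2.0, theta0=0.0))
# Prints 3.0: since l' is already linear here, a single Newton step is exact.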
Yes, question? When you're finding the new theta, is there some kind of step size here? In practice, yes, you will use a step size; I'm coming to that. So this is Newton's method, but so far it was using scalar-valued inputs. In practice our loss function has a vector-valued input, because theta is a vector. So that update rule was for the scalar case, a scalar theta. For vector theta the update rule looks like this: θ_(t+1) = θ_t - H⁻¹ ∇_θ l(θ_t), where H is the Hessian. In the scalar case we would divide the first derivative by the second derivative, but in the vector case the first derivative is a vector and the second derivative is the Hessian matrix, and we cannot divide a vector by a matrix; that's not a meaningful operation. So instead we have this variant of Newton's method, called the Newton-Raphson method, which is the vector version of Newton's method for vector-valued inputs. H here is the Hessian, and inverting the Hessian is like taking the reciprocal of the second derivative: you invert the Hessian, multiply it by the gradient, and that's the direction in which we're going to head. In practice it is very common to have a learning rate here as well, though in many cases you don't need an extra learning rate. Compare it to gradient descent: there, the update was pretty much the same except without the Hessian. By multiplying by the inverse Hessian, you're trying to account for the curvature of the loss function; that's the intuition to have. In gradient descent we only look at the direction of steepest descent, but the curvature of the function can be pretty unusual, and Newton's method accounts for that second-order curvature. This algorithm generally converges in far fewer steps, but there's a catch. The catch is that calculating the gradient is O(d) in time complexity, and that is all we need for gradient descent. However, the Hessian is d by d, so calculating it is O(d²), and you further need to invert the Hessian, which is about O(d³). So if your d is high-dimensional, then even though Newton's method requires fewer steps, the cost of each step can be prohibitively high, and that is the trade-off you need to weigh when deciding whether to use gradient descent or Newton's method. A sketch of the vector update, applied to our logistic regression likelihood, follows below.
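This is an illustrative sketch of the Newton-Raphson update for the logistic regression log-likelihood (not the staff solution; it assumes the sigmoid helper from earlier). For the log-likelihood above, the gradient is Xᵀ(y - h) and the Hessian is -Xᵀ diag(h(1 - h)) X; the comments mark where the costs discussed above show up:

import numpy as np

def newton_logistic(X, y, n_iters=10):
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (y - h)               # O(n d)
        H = -(X.T * (h * (1 - h))) @ X     # Hessian: O(n d^2) to form
        theta -= np.linalg.solve(H, grad)  # solving the d-by-d system: O(d^3)
    return theta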
Another point to note: Newton's method is a root-finding method. So if your l'(θ) has two roots, which means your cost function l(θ) went up and came down again, Newton's method is only going to find you one of them. It could find either the root where the first derivative is going up, over here, or the root where the first derivative is going down, over here, and that depends on initialization: did you initialize close to the first stationary point or to the second? Newton's method just takes you to the nearest stationary point of the function. A stationary point is a point where the gradient is 0; it could be a minimum, a maximum, or a saddle point, and Newton's method just takes you to the nearest one. Which also means: throw a function at Newton's method and it takes you to the nearest stationary point. If it's a convex function, it's automatically minimizing it; if it's a concave function, it's automatically maximizing it. So it's kind of plug-and-play: you can even flip the sign of your function and throw it at Newton's method, and it's still going to recover the same point, the nearest stationary point, which could be a maximum or a minimum. So flipping signs is not a problem. That's Newton's... yes, question. It's going to find the nearest stationary point. If the nearest stationary point is the only stationary point and it's a local minimum, then it's the global minimum. But if your function has lots of local minima or lots of local maxima, [inaudible] it takes you to the nearest stationary point. And that's the reason Newton's method is most commonly used only with convex functions or concave functions, whereas gradient descent does not have that limitation: gradient descent will always take you down, down, and down. If the nearest stationary point is a local maximum, gradient descent will not take you to it; it only goes down, whereas Newton's method goes to the nearest stationary point. Yes, question. [inaudible] Yeah, so for SGD: Newton's method generally does not work well with SGD. The reason is that if you take just a few examples, the Hessian may not be invertible; you need a lot of examples to make the Hessian invertible. So Newton's method generally does not work well in an SGD kind of setting. Or maybe I misunderstood your question? [inaudible] Oh, it's not analogous to SGD. The scalar version of Newton's method is analogous to having a model with only one parameter; that's when you can use the scalar update. [inaudible] We take one sample at a time, not one parameter at a time, okay? Yes, question. [inaudible] Yes. [inaudible] Yes, gradient descent can be caught in a local minimum, that's right, whereas with Newton's method you get caught in the nearest stationary point, which could be a maximum or a minimum. And it works really well when your function is convex, where you know the nearest stationary point is the global optimum. That's right. Yes, question. [inaudible] Could you end up in a maximum with gradient descent? Was that the question? [inaudible] So with gradient descent, if you... [inaudible]
Yeah, with gradient descent, if you have a function that has maxima and minima and you started exactly at a local maximum, then yes, gradient descent will not make progress, because the gradient there is already zero, that's right. Okay, cool. So that's Newton's method. And in the last few remaining minutes, let's get some intuitions about functional analysis. What we're going to cover for the rest of this lecture you can think of as purely optional, in the sense that there are not going to be any questions on your homeworks or exam from this material. But the intuitions you get from a functional-analysis standpoint can be very useful for understanding many algorithms that we're going to cover later this quarter. So what's functional analysis? Think of this as a five-minute overview. Functional analysis is an advanced mathematical topic that you could easily spend a year studying, but we're just going to look at some intuitions. You can think of functional analysis as the study of functions, but a more useful interpretation is: think of it as linear algebra in infinite dimensions. So for these last few minutes, if you haven't seen this material before, think of it as a mental exercise to force yourself to think about vectors and functions in a different way. It's good to learn new things, but I think it's even better to learn new perspectives, and this perspective can be very useful when we study things like Gaussian processes and kernels and so forth. So assume this is the vector space R³; call the axes x₁, x₂, x₃, and you have some point v. The claim we're going to make is that functions and vectors are the same: think of a function as an infinite-dimensional vector. That might sound surprising initially, because we think of vectors as something straight and linear, while a function like sin x can be wiggly. So what's the connection? One way to see it is to think of this vector v as a function defined over the domain {1, 2, 3}: the index of the space serves as the input to the function. So if the components of the vector are v₁, v₂, v₃, you can write this vector as the function v(1) = v₁, v(2) = v₂, v(3) = v₃. Think of a vector as a function, where the input you're feeding in is the axis index.
And the number of axes the space has is basically the domain of that function, of that vector. Now we can also visualize this vector as a function over the domain 1, 2, 3: v(1) is some value, v(2) is some value, v(3) is some value. So think of it as a function whose domain is just three integers and whose values are real. Now look at the more common case, x in R, and consider a function, say f(x) = x² + 1. The same analogy holds; it's just that for this function we have an infinite number of inputs we can pass, whereas for the vector we had just three possible inputs. So pretend this is one long vector, an infinitely long vector, where the value of each coordinate is the height of the function at that input; the values the function evaluates to are what you fill into that vector. A vector in three dimensions we would write as (v₁, v₂, v₃); a function, similarly, you can write as (f(x₁), f(x₂), ...), with an infinite number of entries. So a function can be explicitly written out by the values it evaluates to at different points, and that looks very much like a vector, just infinitely long. This analogy takes some imagination, because written out this way it only works for countably infinite kinds of infinities; but imagine you write an entry for every possible real value, an uncountably infinite, infinitely long vector. That's the kind of intuition you want to have. Now, just the way a function defined over three elements maps into a three-dimensional space, in the same way you can map this function into an infinite-dimensional space. Imagine you have an infinite number of axes (I cannot draw that on the board, I'm sorry), and there is a point, call it f, with axes x₁, x₂, x₃ and infinitely many more, where each axis corresponds to one possible input. The value of the function at a given input is the same as the coordinate of the vector along that axis, which is the same analogy as before, just extended to infinite dimensions. And this view, of functions being points in an infinite-dimensional space, will be super useful for some of the algorithms we are going to study. To formalize this a little more, or to strengthen your intuitions, let's draw comparisons between different concepts: on one side the finite case, on the other the infinite case. In the finite case you have a vector, generally called v.
In the infinite case you have a function, call it f(t). In the finite case you have an index: think of the subscripts in v₁, v₂, v₃, which run over, say, 1 to d if you're in a d-dimensional space. For functions, the analogue of the index is the domain over which the function is defined; here it is R. For a vector you have components; in the infinite case you have the values of the function at different inputs. Does that make sense? Vectors are generally given an explicit representation: you write v = (v₁, v₂, ..., v_d), the actual values of the vector. Functions generally have a symbolic representation, some kind of shortcut, a rule that, given the index, tells you what the value should be. But you can just as well think of an explicit representation of a function, where you write out its value at every possible input as one long vector; that's the same intuition. A few more correspondences. For vectors you have the dot product: ⟨u, v⟩ = Σᵢ uᵢ vᵢ. Similarly, you can take an inner product between two functions, and it looks like this: ⟨f, g⟩ = ∫ f(t) g(t) dt. This is a perfectly valid inner product between two functions; it's just the extension of the inner product to the infinite-dimensional setting. In fact, this form may look familiar: if p is a probability density function and g is a function of a random variable, then the expectation of g(x), where x is drawn according to the distribution p, is just the inner product between p and g. A quick numerical illustration of this correspondence follows below.
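This is a small numerical illustration (not from the lecture) of the correspondence above: discretize two functions on a fine grid, and the function inner product ∫ f(t) g(t) dt is approximated by an ordinary dot product of the resulting long vectors, scaled by the grid spacing.

import numpy as np

t = np.linspace(0.0, 1.0, 100_001)   # fine grid over the domain [0, 1]
dt = t[1] - t[0]
f = t**2 + 1.0                       # f(t) = t^2 + 1 as one long vector
g = np.sin(t)                        # g(t) = sin(t) as one long vector

approx = np.dot(f, g) * dt           # Riemann-sum version of <f, g>
exact = 2 * np.sin(1.0) - 1.0        # closed form of the integral on [0, 1]
print(approx, exact)                 # agree to several decimal places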
Next: you have a matrix A, and the equivalent thing in the infinite case is some kind of two-input function, call it k(s, t). In a matrix, A_ij is the element at the i-th row and j-th column; in the infinite-dimensional setting, you just evaluate the function at the two given inputs. And so on. Each matrix can be thought of as a linear operator, which means you have linear operators on function spaces as well. For example, y = Ax is a linear operation on a vector; similarly, you can have a linear operation in function space, and one such example is f' = Df. You have a function f, and differentiation is a linear operator: you get an output function. You can think of differentiation D as one big infinite-by-infinite-dimensional matrix. And you can draw further analogies. For example, a matrix A has eigenvectors such that Ax = λx; that's the finite case. Similarly, in the infinite case, you can have eigenfunctions of an operator: if f' = Df = λf, then f is an eigenfunction and λ is the eigenvalue. For differentiation, can you guess what an eigenfunction might be? [inaudible] Exactly. The derivative of e^(kt), where t is the variable, equals k·e^(kt), so you can think of the exponential as an eigenfunction of the differentiation operator. And consider how we represent matrix multiplication: if u = Av, we write uᵢ = Σⱼ A_ij vⱼ. The equivalent in the infinite case is g(s) = ∫ k(s, t) f(t) dt. So this is like a matrix multiplication, just happening in infinite-dimensional space. These are also called integral transforms, and you may have come across many of them already. For example, when k(s, t) = e^(-st), this becomes the Laplace transform, and when k(s, t) = e^(-ist), this becomes the Fourier transform. So you can think of Laplace transforms and Fourier transforms as linear transformations: if this were finite-dimensional it would be a matrix multiplication, and in the infinite-dimensional case you run the function through an operator, just like multiplying by a matrix, and the index changes from t to s; from one axis, the input changed to the other. The big picture to take away from all of this is that functions are points in an infinite-dimensional space, and a lot of operations you may already be familiar with, all kinds of transformations like the Laplace transform and Fourier transform, and even differentiation, are all just linear operations on points in this infinite-dimensional space; we call these points functions, just the way we call points in a finite-dimensional space vectors. We normally think of a function as some kind of curve along an axis; instead, think of it as a point in an infinite-dimensional space, where the coordinate along each axis is the height of the function at that particular input. This kind of mental dexterity of switching between the two views can be extremely useful when we study things like Gaussian processes, kernel methods, or even gradient boosting later in the course. Anyway, none of this will be on your homeworks or exams. It's just a new perspective where you start thinking of functions as points, not as curves or surfaces: just points that live in an infinite-dimensional abstract space.
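As one last illustrative check before leaving this perspective (not from the lecture): the eigenfunction claim above is easy to verify numerically by differentiating e^(kt) on a grid and comparing against k·e^(kt).

import numpy as np

k = 2.5
t = np.linspace(0.0, 1.0, 10_001)
f = np.exp(k * t)
f_prime = np.gradient(f, t)             # numerical derivative of f
err = np.abs(f_prime - k * f)[1:-1]     # skip the less-accurate one-sided endpoints
print(np.max(err))                      # tiny: e^{kt} just gets scaled by k under d/dt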
[Stanford CS229 Machine Learning, Summer 2019. Lecture 2: Matrix Calculus and Probability Theory]
Okay. Welcome back, everyone. Welcome to the second lecture of CS229 Machine Learning. So where were we? In lecture 1 we were going through some of the review material on linear algebra. Today the plan is to finish up the review of linear algebra, some of the topics we could not cover on Monday, review some matrix calculus, or multivariable calculus, and then also review some probability theory. With that, we will be finishing all the review, and from the next class we are going to start with actual machine learning, with linear regression. Okay, a quick recap. We went over what a vector is and what a matrix is, and we saw some of the applications of linear algebra, why we need to study it: for example, to represent your data, covariance matrices, kernels (you're not expected to know what a kernel is at this stage; this is just to name a few concepts we'll be using later), and in general multivariate calculus. Then we went over some operations, such as vector-vector operations like the inner product, or dot product, and the outer product. For the inner product we take two vectors of the same dimension; for an outer product they need not be the same. We also went over the matrix-vector product, where we saw two interpretations: the dot-product interpretation and the scaling interpretation. Yes, question? [inaudible] Is that better? Awesome, thank you. So: the matrix-vector operation, both the dot-product view and the scaling-of-the-columns interpretation. Then we also covered matrix-matrix multiplication and saw a couple of interpretations of that, the inner-product interpretation and the outer-product interpretation. Then we reinterpreted matrices as functions, or operators, rather than thinking of a matrix as a grid of numbers: think of it as an operator that acts on the vector you multiply it with; the vector you feed in is the input, and the vector you get out is the output. With that function interpretation of matrices, we saw some geometrical interpretations of the subspaces, which are closely related to rank and inverses. And we also covered projections. Projecting a vector onto a subspace essentially means finding the point on the subspace that is nearest to the vector we are trying to project. If you're trying to project a vector v onto the columns of a matrix X, the projection matrix is X(XᵀX)⁻¹Xᵀ. This is a matrix: X is a matrix, Xᵀ is a matrix, XᵀX is a matrix and so is its inverse; multiply by X from the left and Xᵀ from the right, and you still get a matrix, the projection matrix. And any vector you multiply it with gives you a new vector, call it u (there's a small numerical sketch of this just below).
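Here is a tiny numerical sketch of that projection matrix (illustrative, not from the lecture): the columns of X span the x-y plane inside R³, and P = X(XᵀX)⁻¹Xᵀ sends any vector to its nearest point in that plane.

import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # columns span the x-y plane in R^3
P = X @ np.linalg.inv(X.T @ X) @ X.T

v = np.array([2.0, 3.0, 5.0])
u = P @ v
print(u)       # [2. 3. 0.]: the nearest point to v inside the plane
print(P @ u)   # unchanged: projecting twice is the same as once (P @ P equals P)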
And that u will be the projection of v onto the subspace spanned by the columns of X. This is an important concept that we'll come back to when we talk about linear regression. So that's projection. We also started talking about eigenvectors and eigenvalues; that's where we left off. Eigenvectors and eigenvalues are those special vectors, for square matrices, where the operation of the matrix on the vector does not change the direction; it only rescales it by some amount. The eigenvalue is the ratio by which the vector gets scaled along that specific direction, so each eigenvector comes with a corresponding eigenvalue. The remaining vectors, which are not eigenvectors, can change their direction. Is there a question? Yeah. [inaudible] So the question is: do the columns of X span the subspace? To clarify: the columns of X define the subspace onto which we want to project any given vector. The v here is any given vector, and we want to project it onto the subspace spanned by the columns of X. From X we construct this matrix, XᵀX inverted, multiplied from the left by X and from the right by Xᵀ, and we get another matrix. Now take any vector v and multiply it by this matrix, and you get an output vector u. That u will be in the subspace spanned by the columns of X, and it will be the point in the subspace nearest to v. Thank you, thanks for asking that question. Yes, question? [inaudible] I'm not sure I followed your question completely. Do you mean you want to consider different vectors to project onto the subspace? [inaudible] Okay, so let's call them x₁, x₂, x₃. [inaudible] So the question is: if I want to project v onto the subspace spanned by the columns of X, would that be different from projecting v onto the first column of X separately, onto the second column separately, and so on, and then summing those projections? The answer is yes, they will not be the same. Here's an example. Consider this to be x₁, the first column of X, and let this be the second column, x₂. Now suppose there is a vector v, ideally out of the subspace, so imagine a vector over here, and I want to project it onto the subspace of X. If I were to project it separately onto x₂, this might be the projection; separately onto x₁, this might be the projection; and summing them up gives something over here. But the nearest point in the subspace is actually this. If you project separately and add up, you don't necessarily reach the nearest point. Does that make sense? Yes, question. Will it be the same for [inaudible]? If they are orthonormal to each other, maybe, but in general x₁ and x₂ will not be orthonormal; they could be any vectors. I'm not sure about the orthonormal case; I need to think about that.
But in general, it doesn't hold that you can project them separately. Any more questions? All right, moving on. Next we're going to focus more on eigenvalues and see some of their properties. Here's the picture we started with: imagine an input space R³, with an x-axis, y-axis, and z-axis. The picture we used to build our intuitions was a unit ball, like a soccer ball, centered at the origin with radius one. You have your square symmetric matrix A, also acting on R³, and you take the shape and run it through A. What does that mean? We know how to take a vector, run it through A, and get an output vector; what does it mean to take a shape and run it through A and get another shape? It just means: take every point on the surface separately, run it through A, and reconstruct the resulting shape on the other side. So this is our input, a three-dimensional ball (I'm drawing a circle, but it's a ball), we run it through A, and we get an ellipsoid. At this point we're still talking about square symmetric matrices. Now, the principal axes of this ellipsoid are the eigenvectors of A. The longest axis is the eigenvector corresponding to the largest eigenvalue, because points along that axis on the input sphere get mapped to points in the same direction but at a different distance from the origin, and the ratio of the output distance to the input distance is the eigenvalue corresponding to that eigenvector. Now, the product of all the eigenvalues of a matrix is the determinant. So the determinant of a matrix depends only on the eigenvalues; the eigenvectors don't matter. If you have another matrix which results in a shape similar to this ellipsoid, it will have the same determinant, even though it may be oriented differently. And another thing becomes obvious here: the ratio of one output half-axis to the input radius is one eigenvalue, the ratio along the next axis is another eigenvalue, and so on, and it so happens that the product of the lengths of the semi-axes of an ellipsoid determines its volume (it is the ellipsoid's volume divided by the volume of the unit ball). So another interpretation of the determinant is: it is the volume of the output shape over the volume of the input shape (strictly speaking, the absolute value of the determinant, since the determinant can be negative when the map includes a reflection). And this holds for any input shape with non-zero volume, not just a sphere: it could be a cube, it could be any arbitrary shape with some volume. Run that shape through the matrix A, you get an output shape with some volume, and the ratio of the output volume to the input volume is the determinant of the matrix.
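A quick numerical check of that claim (illustrative, not from the lecture): the determinant equals the product of the eigenvalues, which is the factor by which the matrix scales volume.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])   # square symmetric, so the eigenvalues are real
lam = np.linalg.eigvalsh(A)      # eigenvalues of a symmetric matrix
print(np.prod(lam))              # product of the eigenvalues ...
print(np.linalg.det(A))          # ... equals the determinant (4.5 here)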
Now, it should be obvious that if the matrix is not full rank, one of the eigenvalues is 0, which means the sphere collapses into a flat ellipse rather than retaining its ellipsoid shape, and the volume of a flat ellipse in three-dimensional space is 0, exactly. So for non-full-rank matrices, the volume of the output shape will be 0 over some non-zero input volume. And again, this is also directly clear because the determinant is the product of eigenvalues: if one of the eigenvalues is 0, the determinant is 0. Any questions about this? The reason we focus on this volume interpretation is that it's going to help us further down in the course, especially when we talk about operations we perform on random variables. This concept of the determinant will become very important, because when we perform a linear operation on a random variable, we then need to adjust the resulting random variable's density by dividing by the determinant, so that the PDF still integrates to one; you divide the output by the determinant to maintain constant total volume under the linear operation. We're going to go over that again in a lot more detail later in the course, but it's important to have this understanding of the determinant: it tells you how much the matrix expands or contracts space. If the resulting volume is bigger, it's expanding; if it's smaller, it's contracting the input space. Which also means that if one of the eigenvalues is 0, the determinant is 0, and therefore the matrix does not have an inverse. That should also be obvious geometrically: take a soccer ball and transform it into an ellipsoid, and you can still map every point on the ellipsoid back to the soccer ball; but if you squash it into a flat ellipse, there's no way to map a two-dimensional shape back onto the three-dimensional ball. So there's no inverse if any one of the eigenvalues is zero. Okay, that's all about determinants. Any questions about determinants? I'll take that as a no. Moving on. There's a technical term for the collection of all eigenvalues of a matrix; what is it called? Anybody? The spectrum. Yes: the spectrum is the collection of eigenvalues of a matrix, and you generally sort them in descending order, writing the largest eigenvalue first.
Now, for most operations, the spectrum of a matrix contains pretty much all the information we care about. Essentially, it tells you what the resulting shape of the ellipsoid will be for a given unit-ball input. The orientation may be different, but you can adjust that by doing a change of basis. The core of what a matrix does is captured by the spectrum, the collection of eigenvalues. And there is a theorem called the spectral theorem, which we're not going to prove, that says: every square matrix, say d by d, that is also symmetric has real-valued eigenvalues and orthonormal eigenvectors. This covers a whole lot of different matrices we are interested in. For example, Hessians are square and symmetric, which means all their eigenvalues are real; some could be 0, some could be negative, but they are all real-valued, not complex, and the eigenvectors are orthonormal. Hessians, covariance matrices, kernel matrices (again, we'll see what kernels are later, but just so you know why this is relevant for the course): all of these are square and symmetric, and therefore have real eigenvalues and orthonormal eigenvectors. All right. With this, next we're going to look at something called the quadratic form. Quadratic forms are closely connected to definiteness, positive semidefinite and negative semidefinite, which is going to pop out of this. So what's a quadratic form? Given a matrix A, assume it's square, and some vector x, the quadratic form is written xᵀAx. So far we have seen Ax, but xᵀAx is called the quadratic form. The quadratic form is defined for any square matrix, but in general, whenever we work with a quadratic form, we just assume A is symmetric as well. That's because for any quadratic-form expression with a matrix A that may or may not be symmetric, there always exists a symmetric B such that xᵀBx = xᵀAx for all x. It's pretty easy to show why this is the case; it's in the notes, and I don't want to spend too much time proving it. But whenever you're dealing with quadratic forms, it is safe to assume the matrix is symmetric, because you can always replace it with a symmetric matrix B where xᵀBx = xᵀAx for all values of x. And B actually happens to be just ½(A + Aᵀ); it's a very simple proof. Yes, question. What is the equation xᵀBx [inaudible]?
[inaudible] Oh, it just shows that for any square matrix A, there is a corresponding square matrix B which is also symmetric (calculated, by the way, as B = ½(A + Aᵀ)) such that xᵀAx = xᵀBx for all x. Which is why you can just assume A is symmetric: even if it's not, you can use the corresponding B in its place, because the values are the same at every x. Right. Now, this quadratic form is used to... oh, can you pull down the whiteboard? Thank you; I just didn't want that to be distracting. All right. So we use the quadratic form to define what's called definiteness. For a square symmetric matrix A: if xᵀAx > 0 for all x ≠ 0 (we exclude x = 0 because at x = 0 the quadratic form is always 0), then we say A is positive definite. Similarly, if xᵀAx ≥ 0 for all x ≠ 0, then A is positive semidefinite. If xᵀAx < 0 for all x ≠ 0, then A is negative definite. If xᵀAx ≤ 0 for all x ≠ 0, then A is negative semidefinite. And if you cannot make any such statement, because for some values of x it is greater than zero and for some values it is less than zero, then we say it's indefinite. So this is the definition of positive definite, positive semidefinite, and so on: take the square matrix A (you can assume it to be symmetric), and for every vector x not equal to 0, calculate xᵀAx. If the result is always greater than 0, A is positive definite; if it's always greater than or equal to 0, it's positive semidefinite; and so on. Now let's try to understand this geometrically. What does xᵀAx actually mean, and why do we care about its value so much? First, recall the dot product aᵀb, where a and b are any two vectors. If the angle between a and b is less than 90 degrees, then aᵀb is positive. If a and b are perpendicular, aᵀb is 0. And if the angle is more than 90 degrees, aᵀb is less than 0. Now, what the quadratic form is actually doing is, instead of looking at two arbitrary vectors a and b, it's looking at x dotted with Ax. We're asking: take x, run it through A to get Ax, and compute the dot product between the input and the output of A for that x. That's essentially what the quadratic form is doing.
And we saw this in the picture before; let me draw it again, this time in two dimensions. Assume this is the unit ball in R², and this is the ellipse that A maps it to. This is the input ball we run through A, and that's the output we get. What this is essentially saying is that for a positive definite (or positive semidefinite) matrix, for any x on the input, run it through A and you get a corresponding output, and the angle between the two is less than 90 degrees. Does that make sense? xᵀAx > 0: here x was the input and Ax was the output, and as long as the angle between them is less than 90 degrees, the quadratic form is positive. And this is how we relate the definiteness statement to eigenvectors and eigenvalues. If a matrix has all positive eigenvalues, then each eigenvector only gets scaled in the positive direction, and xᵀAx for an eigenvector is always positive, because the dot product between two vectors oriented the same way is positive. You can think of the eigenvectors as acting as pivots: anything between them gets mapped to a vector in the same quadrant. If one of the eigenvalues were negative, say this eigenvalue positive and that one negative, then a vector over here could get mapped into the opposite quadrant, meaning its angle with the input could exceed 90 degrees, which means xᵀAx could be negative. With all positive eigenvalues, the eigenvectors act as pivots, and vectors inside one quadrant remain in the same quadrant in the output space. So how does this connect to the definitions of definiteness? For a positive definite matrix, all eigenvalues are greater than 0. For a positive semidefinite matrix, they are all greater than or equal to 0. For a negative definite matrix, all the eigenvalues are less than 0, and for a negative semidefinite matrix, less than or equal to 0. For an indefinite matrix, some are greater than and some less than 0. So the definiteness of a matrix, which is defined by the quadratic form, has a one-to-one relation with the spectrum of the matrix. Any questions about that? Yes. [inaudible] Yeah. So when I say indefinite: for a few x's the quadratic form is going to be greater than 0, and for a few other x's it is going to be less than 0. If you construct any symmetric matrix where one eigenvalue is positive and one is negative, there will be vectors, along the corresponding eigenvectors, for which the quadratic form is greater than 0 and others for which it is less than 0. Okay? Any questions on this? Okay.
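That one-to-one relation gives a practical test. This is an illustrative sketch (not from the lecture) that reads the definiteness of a symmetric matrix off the signs of its eigenvalues, matching the quadratic-form definitions above:

import numpy as np

def definiteness(A, tol=1e-10):
    lam = np.linalg.eigvalsh(A)   # real, since A is symmetric
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semidefinite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semidefinite"
    return "indefinite"

print(definiteness(np.array([[2.0, 0.0], [0.0, 1.0]])))    # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))   # indefinite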
So next we're going to move on to decomposing matrices. Given a matrix, there are many ways to decompose it; and what do we mean by decomposition? We'll look at it right away. The two decompositions we're going to look at today are the singular value decomposition, also called the SVD, and the eigenvalue decomposition. These two are probably the most interesting from a theoretical point of view. There's another decomposition that's used very frequently, the Cholesky decomposition; I'll just mention it in passing. But the interesting ones to analyze are the singular value decomposition and the eigenvalue decomposition. For the SVD, A can be any matrix whatsoever; for the eigenvalue decomposition, you need square matrices. That's one main difference. The decompositions themselves are defined like this: the singular value decomposition is A = USVᵀ, and the eigenvalue decomposition is A = UDU⁻¹. What are we saying here? Let's again think of matrices as functions; that's always a useful approach. What this says is that Ax can also be written as USVᵀx, which means: first multiply x by Vᵀ and you get some output vector; then take that vector as input to S and you get some output; then take the output of multiplying by S and feed it as input to U. The operation performed by A can be decomposed into three sub-matrices applied like a pipeline: first apply Vᵀ to x, take the resulting vector, feed it to S, take that output, and feed it to U. And similarly, the eigendecomposition of A can be written the same way, as the pipeline U(D(U⁻¹x)). So what the singular value decomposition and the eigenvalue decomposition are saying is that you can decompose a matrix A into three sub-matrices, such that the operation performed by A on any vector x splits into three sub-steps. Before we move on to the steps, let's talk a little more about these individual matrices. U and V are orthonormal matrices. S is a diagonal matrix whose diagonal entries are called the singular values, and D is also diagonal, with the eigenvalues along its diagonal.
So the eigenvalue decomposition decomposes A into UDU⁻¹, where the columns of U are the eigenvectors and D holds the eigenvalues, and there is a one-to-one mapping between each eigenvector (a column of U) and the corresponding eigenvalue (the matching diagonal entry of D). The action performed by a square matrix A is completely defined by its set of eigenvectors and eigenvalues. And for an arbitrary matrix A, you have a collection of singular values in place of eigenvalues, and you have U and another matrix Vᵀ. For the singular value decomposition, U and V are in general different matrices; they just need to be orthonormal, meaning their columns have unit length and are all orthogonal to each other. Whereas for the eigenvalue decomposition, the third matrix has to be the inverse of U itself: it is exactly U⁻¹. Now, we can break any matrix down into the singular value decomposition, and the cool thing about the SVD is that you are guaranteed the singular values are real-valued. No matter what matrix A you feed in (it need not be full rank; it could be anything whatsoever), as long as the entries are real-valued, you can always come up with a decomposition of this form, where the U and V matrices are orthonormal and the singular values are all real. Yes, question? No question. However, with the eigenvalue decomposition, things are a little different, in that it is limited to square matrices only. Yes, question? [inaudible] So the question is: isn't the eigenvalue decomposition the same as diagonalization? If by diagonalizable you mean diagonalizable into real-valued eigenvalues, then that requires more; in general you can take any square matrix, and the eigenvalues may be complex, but you can still break it down. [inaudible] If it's not diagonalizable over the reals, then there are going to be complex values in there, but if you're okay with complex numbers, you can represent it like this. (Strictly, this needs a full set of linearly independent eigenvectors; defective matrices require the more general Jordan form, but that's beyond what we need here.) Okay. Question? Yes, question? Are S and D both diagonal matrices with the eigenvalues? So: in the eigenvalue decomposition, D is diagonal with the eigenvalues. In the singular value decomposition, S is a diagonal matrix with what are called singular values, not eigenvalues; eigenvalues are only defined for square matrices. Okay. [inaudible] So, when V happens to be U⁻¹ transposed, is that when the singular value decomposition [inaudible]? Well, I'm coming to that. So this is the way you can break down a matrix into subcomponents.
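Numerically, both decompositions are one call each in NumPy. This is an illustrative sketch (not lecture code) verifying that each triple of factors multiplies back to A, here for a square but non-symmetric matrix:

import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])                  # square, not symmetric

U, s, Vt = np.linalg.svd(A)                 # A = U diag(s) V^T, s real and >= 0
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True

lam, Q = np.linalg.eig(A)                   # A = Q diag(lam) Q^{-1}
print(np.allclose(A, Q @ np.diag(lam) @ np.linalg.inv(Q)))  # True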
Now, it's kind of interesting to see how these are related. Let's call this A. For the SVD, I'm gonna call these Step 1, Step 2, and Step 3, where Step 1 corresponds to what happens when you take the first piece and look at its action on x, Step 2 takes the output of Step 1 and runs it through the next matrix, and Step 3 takes the output of Step 2 and runs it through the third matrix. Now, U and V are orthonormal (and for the square symmetric matrices we mostly care about, the eigenvector matrix U is orthonormal as well, so U^{-1} is too). Which means Step 1 for both the singular value decomposition and the eigenvalue decomposition, V^T in one case and U^{-1} in the other, acts as a rotation, possibly combined with a mirroring. An orthonormal matrix preserves lengths and angles, so the most it can do is rotate and possibly reflect; think of Step 1 as just some kind of rotation. Then Step 2 is multiplying by a diagonal matrix, which means you're just scaling along the axes. That's what multiplying by a diagonal matrix does: take any vector, and each component gets scaled along the corresponding axis by the value in the corresponding diagonal entry. So this is just scaling. And note it is scaling along the coordinate axes, the x and y axes, not along the eigenvectors. Now, the scaling for the singular value decomposition is always real-valued, because S is always real for the SVD. So this is a real-valued scaling. But for the eigenvalue decomposition, some of your eigenvalues may be complex, and scaling by a complex value essentially means there is some amount of rotation involved. So if an eigenvalue is complex, there could be a rotation hiding inside the scaling step. And then Step 3 is another rotation, this time by U. So in the SVD you have Rotation 1, then scaling along the axes, then Rotation 2, and the second rotation can be by a completely different amount, corresponding to U. Whereas with the eigenvalue decomposition, first we rotate by U^{-1}, then scale (where a complex eigenvalue may sneak in some implicit rotation), and then we rotate by U, which is exactly the inverse of the first step: you undo the rotation that was done in Step 1. Yes, question? Is the scaling along the eigenvector axes? So, I would say ignore the complex case for now; it's not crucial for us, and I'm happy to go deeper into that after the lecture. But essentially the way you want to think of this is: you give some shape as input.
You can characterize what a matrix A is doing by thinking of it as first rotating the input by some amount, then scaling the rotated version by different amounts along the x and y axes, and then rotating by some other amount. For the SVD the final rotation can be arbitrarily different from the first, but for the eigenvalue decomposition you just undo the initial rotation. And what you see is that the directions that end up aligned with the axes after the first rotation are the directions of your eigenvectors: the points along the eigenvectors, after the rotation, line up with the axes, and when you scale them along the axes they do not change their direction once you undo the rotation. Does that make sense? So what this means is that for square symmetric matrices, where we don't have to worry about complex eigenvalues, the action of A can be summarized as: rotate so that the eigenvectors align with the axes, scale by different amounts (the scaling could be negative, which means you're mirroring), and then undo the rotation you did in the first step. Which means the eigenvectors and the axes of the resulting ellipsoid are the same. Now, for the singular value decomposition there are no eigenvectors; there are singular vectors, but the interpretation is not as clean, because you're rotating by some amount, scaling along the axes, and then not undoing that rotation but rotating by something different, U instead of V. The benefit of the SVD is that you always get real singular values, whereas with eigenvalues you need the matrix to be square, and symmetric if you want a guarantee of real eigenvalues; the singular values are always real. And it so happens that for a square symmetric matrix, the singular value decomposition and the eigenvalue decomposition give you the same three components. (Strictly speaking, they coincide exactly when the eigenvalues are also non-negative, that is, for positive semi-definite matrices; if some eigenvalues are negative, the singular values are their absolute values and the signs get absorbed into U or V.) So SVD and eigendecomposition are the same for square symmetric, positive semi-definite matrices. For an arbitrary A, which need not be square and need not be symmetric, SVD is the only thing that works. SVD is like a sledgehammer: throw any matrix at it, and it will decompose it into three parts. If A is square, you can still do an SVD, but you can also do an eigenvalue decomposition; some eigenvalues may be complex, whereas the SVD of that same non-symmetric square matrix still gives you real-valued singular values. And when A is square and symmetric, SVD and eigenvalue decomposition are the same: U will be the same, S and D will be the same, V^T and U^{-1} will be the same. Professor? Yes, question. You said if A is square, then both of those exist? So when A is square, you can perform both the eigenvalue decomposition and the singular value decomposition, but the decomposed components may not be identical. If A is square and symmetric, you can perform both decompositions and the components will also be the same; it's the same decomposition if the matrix is square symmetric.
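Here is a quick numerical sanity check of that claim, a sketch using a made-up positive semi-definite matrix (my example, not the lecturer's): for a symmetric PSD matrix, the SVD and the eigendecomposition produce the same factors up to ordering.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T                    # symmetric positive semi-definite by construction

# Eigendecomposition of a symmetric matrix: real eigenvalues, orthonormal U.
eigvals, eigvecs = np.linalg.eigh(A)   # eigh returns eigenvalues in ascending order

# SVD of the same matrix (singular values come out in descending order).
U, s, Vt = np.linalg.svd(A)

# For symmetric PSD A, the singular values equal the eigenvalues.
print(np.allclose(np.sort(s), np.sort(eigvals)))   # True
# And the third factor undoes the first: V equals U (up to numerical error).
print(np.allclose(U, Vt.T))                        # True
```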
And if it's non-square, then the eigenvalue decomposition does not exist? Yeah, it does not even exist; it's not even defined. The eigenvalue decomposition is defined only for square matrices. Algebraically, an eigenvalue and eigenvector are a solution to Ax = lambda x. What it means is: take a matrix A, multiply it by a vector x, and you get an output vector which points along the direction of x, just scaled by some eigenvalue. All the solutions of this equation are your eigenvalues and eigenvectors. And in order for this to hold, A has to be square: for the output to be a scaled version of the input, the input and output have to live in the same space; otherwise the output is just a vector of a different dimension. So the eigenvalue decomposition holds only for square matrices. Okay, next we're gonna move on to matrix calculus. Any questions about this? Yes, question. [inaudible] Yeah, what you say is true. In terms of the intuition, this is what you want to have in mind. Yes, some matrices are not diagonalizable, but for the most part we're gonna be dealing only with square symmetric matrices, and mostly with quadratic forms, in which case you have a corresponding symmetric matrix; and a square symmetric matrix can always be decomposed this way. Professor? Yeah. On the definition of definiteness: for all those categories, how do we actually test it? Because we can't try all x's. So the question is, how do we test whether a given matrix belongs to one of those definiteness categories, since we cannot plug in all possible values of x and check. That's a great question, and the answer is: we saw a one-to-one correspondence between definiteness and the spectrum, which means you can always calculate the spectrum of a matrix by doing an eigendecomposition. We haven't discussed how you actually compute this decomposition; we'll probably see that later in the course. But there are lots of methods to decompose a given matrix into its eigenvalue decomposition or singular value decomposition, and then you can just inspect the eigenvalues. Are they all positive? Are they all negative? And you read off the corresponding definiteness; there's a small sketch of this right below. One more thing on the eigenvalue decomposition: can you reorganize it, swapping around the columns of U along with the corresponding eigenvalues? Yes, you can, as long as you swap a column of U together with the corresponding diagonal entry of D. But as a convention, you just write the eigenvalues in descending order. It's just a convention. Okay. So moving on to matrix calculus. I should have probably swapped, right?
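As promised, here is a minimal sketch of that definiteness test (the helper name and the example matrices are mine): compute the spectrum with an eigendecomposition and inspect the signs.

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eigvals = np.linalg.eigvalsh(A)   # real eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "positive definite"
    if np.all(eigvals >= -tol):
        return "positive semi-definite"
    if np.all(eigvals < -tol):
        return "negative definite"
    if np.all(eigvals <= tol):
        return "negative semi-definite"
    return "indefinite"               # mixed signs: saddle-like quadratic form

print(definiteness(np.array([[2.0, 0.0], [0.0, 3.0]])))    # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))   # indefinite
```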
All right, so matrix calculus. I'm assuming all of you are familiar with calculus, with the concept of differentiation, and as a prerequisite you should already have seen multivariate calculus: differentiating, say, a scalar-valued function that has a vector-valued input, and so on. In fact, everything I'm covering today should hopefully not be your introduction to these subjects; you should have already seen them, and this should be a refresher, just to recollect things if you've forgotten them. I'll be skipping over a lot of precise, rigorous statements and just giving you the intuition, so that you remember things you've already studied. All right, so matrix calculus. Let's look at a few functions. We write a function f: R -> R. Is this notation familiar to everyone? What this means is we have a function that we're going to call f; f is the name we assign, and this is its type signature. It takes as input a real-valued number and produces as output a real-valued number. An example of this is f(x) = x^2: take a number x as the input, square it, and that's the output. Real-valued input, real-valued output, and the value of this function is real-valued. The first derivative of this function also takes values in R, and so does the second derivative. So for a function with real input and real output, for example x^2, its value is real, its first derivative, in this case 2x, is also real, and its second derivative, in this case the constant 2, is also real. Now consider a function f: R^d -> R. What does this mean? It's a function whose input is a vector, a d-dimensional vector, and whose output is a real-valued number. Vector input, scalar output. We're going to encounter a lot of functions of this kind in this course, and the most common one is what we call a loss function. We haven't covered what a loss function is yet, but this is just to give you a sense of the kinds of things you'll be looking at: the loss is a scalar value. And what's the first derivative of a function like this? What's its type? [BACKGROUND] The gradient. It's called the gradient, and the gradient is going to be in R^d. And the second derivative of a function like this is called the [BACKGROUND] Hessian. The Hessian lives in R^(d x d), and in this case not only is it a square matrix, it is also guaranteed to be symmetric. Symmetric matrices are also written as S^d, with just a single d, because it's implied that the matrix is d along both dimensions, since it's symmetric.
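As a concrete illustration of those types, here is a tiny sketch (the quadratic test function is my own choice) showing that for f: R^d -> R the gradient is a vector in R^d and the Hessian is a symmetric d-by-d matrix:

```python
import numpy as np

d = 4
A = np.eye(d) + 0.1 * np.ones((d, d))    # a symmetric d x d matrix

def f(x):
    return x @ A @ x                      # f: R^d -> R, a scalar-valued quadratic

def grad_f(x):
    return (A + A.T) @ x                  # gradient of x^T A x: a vector in R^d

def hess_f(x):
    return A + A.T                        # Hessian: constant, symmetric, d x d

x = np.arange(1.0, d + 1.0)
print(np.ndim(f(x)))                      # 0: scalar output
print(grad_f(x).shape)                    # (4,): the gradient lives in R^d
print(hess_f(x).shape)                    # (4, 4): the Hessian lives in S^d
```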
And later in the course we're also going to encounter functions of the type f: R^d -> R^p: vector input, vector output. The dimensions could be the same, but in general they're different. Can anybody think of functions like this, or why you might use such functions in machine learning? [BACKGROUND] So, a function that takes a d-dimensional vector and outputs a p-dimensional vector: projections are one example, and another commonly used component is a neural network layer. The way neural networks work is you transform one vector into another in some way, and one layer of a neural network can look exactly like this: it takes a d-dimensional vector as input and produces a vector of some other dimension as output. And, is there a question? I was going to say classification, mapping an image to classes. Yeah, classification: you feed an image as input and output a vector of probabilities. That totally falls into this category as well; that's a good answer. So the output here is in R^p. And what is the first derivative of this kind of function going to look like? What's its type? It's called the Jacobian, exactly, and it lives in R^(p x d), or R^(d x p) depending on the convention; one of the two is called the Jacobian. If some of you are already familiar with neural networks, you're going to encounter Jacobians a lot there, because that's how you train them, by evaluating Jacobians. And the second derivative of such a function is going to be some p-by-d-by-d object, some kind of higher-order tensor; not very interesting for us here. All right. So we see right away that vectors and matrices are going to show up when you do multivariate calculus. And the tools we saw above, for example the eigenvalue decomposition, can be applied to Hessians, and that tells you a lot about the nature of the function. For example, if the Hessian of a function is positive semi-definite at all input points x, then the function is bowl-shaped: it is convex. If you calculate the Hessian and it is negative definite or negative semi-definite everywhere, then the function is an inverted bowl shape. And if the Hessian of a function is indefinite, meaning it is neither positive semi-definite nor negative semi-definite, then the function has [BACKGROUND] exactly: saddle points. What does a saddle point mean? It means the function curves down along one axis and curves up along another, just like the shape of a saddle you place on a horse: it goes down along one axis and goes up along another. So the connections between multivariable calculus and linear algebra are very deep, and you're going to be analyzing the Hessians of different loss functions to characterize their convexity. If they are convex, that is good news.
It means that when you minimize the loss function, you reach a unique global minimum, because any bowl-shaped function has a unique global minimum; whereas if a function is not convex, or has saddle points, then optimization is a little harder: trying to minimize the loss function is going to be harder. Now, some examples of how we actually go about calculating gradients. If you have a function f of x, the gradient of the function with respect to x is written with the inverted triangle; that's the notation we generally use for the gradient symbol. If you're writing your homeworks in LaTeX, you get the symbol with backslash nabla, and the subscript on the nabla indicates the variable with respect to which you're taking the gradient. For gradients, this is going to be vector-valued, obviously, and the definition is just this: take the partial derivative of f with respect to x_1 and evaluate it at x, then with respect to x_2, and so on down to x_d. I'm going to use a red x here to indicate that this is the value at which we're evaluating, and these are the variables with respect to which we're differentiating. This is just the definition of a gradient. Because the input is a vector, you can think of it as x = (x_1, x_2, ..., x_d), and the gradient is just short notation for the vector of all the partials: the function has multiple inputs because you're feeding in a vector of values, its output is a scalar, and the gradient is defined as the partial derivative of f with respect to every input, evaluated at the vector you feed in. This collection of partial derivatives is called the gradient. And the gradient also has an interpretation: it is the direction of steepest ascent. What does that mean? Suppose this axis is x_1, this is x_2, and this is f(x). A scalar-valued, vector-input function is going to be some surface in such a graph. I'm not good at drawing surfaces, but imagine there is some kind of surface here, and the height from any point up to the surface is the value of the function at that point. It's similar to a plot of f(x), except now there are multiple input dimensions. Now, the surface has some kind of shape, and the gradient of the function at a given input tells you the direction in which you want to move to increase the value of the function the most. Imagine this is some kind of surface. If you are at this point and you want to adjust your input to a different value such that the function evaluates to a larger value, you calculate the gradient at this point. It's a vector, so it points you in some direction. And if you take a very small step along that direction, and from the new point you evaluate the function, the value should be larger.
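Here is a small sketch of that interpretation (the test function, step size, and helper name are my own): approximate the gradient by finite differences and check that a tiny step along the gradient increases the function more than a step in a random direction.

```python
import numpy as np

def f(x):
    return np.sin(x[0]) + x[0] * x[1] ** 2    # a smooth function from R^2 to R

def numerical_grad(f, x, h=1e-6):
    """Approximate the gradient with central finite differences."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([0.5, 1.0])
g = numerical_grad(f, x)
print(g)    # close to the analytic gradient [cos(x0) + x1^2, 2*x0*x1]

rng = np.random.default_rng(0)
step = 1e-3
rand_dir = rng.standard_normal(2)
rand_dir /= np.linalg.norm(rand_dir)
# A step along the normalized gradient gives the largest local increase.
print(f(x + step * g / np.linalg.norm(g)) - f(x))   # biggest increase
print(f(x + step * rand_dir) - f(x))                # smaller, possibly negative
```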
The gradient gives you the direction in which to move your inputs so that the output value goes up. Similarly, you can also calculate the gradient of a function that takes a matrix as input, with respect to that matrix. This is just a generalization of the same idea, and it is again a whole collection of partial derivatives. Even though A is two-dimensional, just pretend it's one long vector: serialize it into one long vector, calculate the partial derivatives, and then reshape the result back into a matrix. Any question? Yes? F is a function? Yes, f here is a function that takes a matrix A as input and produces a scalar as output; in the earlier case, f took a vector as input and produced a scalar. Yes? So you reshape the output into the shape of the input? Yeah, exactly. A is going to have entries a_11 through a_1n in its first row, down to a_m1 through a_mn, assuming it is m by n. So you're feeding that many different inputs to f, the output is a scalar, and you take the partial derivative of the output of f with respect to a_11, and so on for every entry. Any questions about this? All right. So that was f from R^(m x n) to R, and before that f was from R^d to R. Now, what does the Hessian look like, where again f is from R^d to R? Instead of taking first partial derivatives, we take second partial derivatives with respect to every pair of inputs. The Hessian is the d-by-d matrix whose (i, j) entry is the second partial derivative of f with respect to x_i and x_j, evaluated at x: the first row runs from d^2 f / (dx_1 dx_1) through d^2 f / (dx_1 dx_d), and the last row runs from d^2 f / (dx_d dx_1) through d^2 f / (dx_d dx_d). Yes, question? [BACKGROUND] Oh yeah, you are right, thank you. [The indices on the board are corrected.] The reason I'm highlighting things in red is to tell you that by taking the partial derivatives, once or twice, you end up with a function, and that function is evaluated at whatever arbitrary x you feed in as the input. Is this clear? Good. Now a few examples; a few things you're going to use very liberally throughout. What is the gradient with respect to x of b^T x, where b is some constant vector? You take a dot product of b with x, so b^T x is a scalar, and we want to calculate the gradient of b^T x with respect to x, where b is constant. And how do we do that? We apply the definition directly. By definition, the i-th element is the partial derivative with respect to x_i of b^T x, which is the partial derivative with respect to x_i of b_1 x_1 + b_2 x_2 + ... + b_d x_d. And when you calculate that partial derivative, all the terms except b_i x_i drop out, because the rest of them are constants with respect to x_i, leaving just b_i.
So entry by entry this gives you b_i, and stacking them up, the gradient is just b. Is this clear? The function is simple, but the methodology is the same for any arbitrarily complex function: the gradient is just defined as the collection of partial derivatives with respect to every input; you expand out the function, calculate the partial derivative with respect to the corresponding input, and then see whether the result is a pattern you recognize. In this case it happens to be just b; in general, your gradient is just some expression, and it may not have a simple form like this. So that's how we calculate gradients for simple functions. Okay? And, just like in regular calculus, you can apply the product rule. We're going to see one example of the product rule: what is the gradient with respect to x of x^T A x? We apply the product rule as we know it. Treat x^T A x as a product of two pieces, and let me use two colors to differentiate each piece in turn while holding the other fixed. Differentiating through one factor contributes A x, and differentiating through the other (careful with the transpose) contributes A^T x. So the gradient is (A + A^T) x. And if A is symmetric, this is just 2 A x. There's a quick numerical check of these identities in the sketch right below. Yeah? [BACKGROUND] I'm sorry, what's the question? [BACKGROUND] Oh, this is the product rule. For example, d/dx of f(x) g(x) equals (d/dx f(x)) times g(x), plus f(x) times (d/dx g(x)). That's the product rule, and this is the multivariate version of it: you think of the two pieces as f(x) and g(x). [BACKGROUND] Yeah. [BACKGROUND] Exactly, exactly. When I write it as the gradient, it's not clear which parts it applies to, so the red parts are the parts getting differentiated, and the black parts you treat as constants, just like in the scalar rule. [BACKGROUND] I'm sorry? [BACKGROUND] You could do that too, but generally... yeah, you could do that too. [BACKGROUND] Yeah, you could. Right, so that's the product rule.
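A quick numerical check of those two identities, as a sketch (the randomly generated b, A, and x are mine; finite differences stand in for the derivation):

```python
import numpy as np

def numerical_grad(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar-valued function at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
d = 4
b = rng.standard_normal(d)
A = rng.standard_normal((d, d))
x = rng.standard_normal(d)

# grad_x (b^T x) = b
print(np.allclose(numerical_grad(lambda v: b @ v, x), b))                  # True
# grad_x (x^T A x) = (A + A^T) x
print(np.allclose(numerical_grad(lambda v: v @ A @ v, x), (A + A.T) @ x))  # True
```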
One more matrix derivative that's going to be very useful: the gradient with respect to A of the log of the determinant of A. This looks pretty nasty, so what's happening here? A is a matrix; take the determinant of the matrix, then take the log of the determinant, and differentiate that with respect to A. Why would you ever want to do that? I mean, [LAUGHTER] it turns out this is going to be a recurring pattern. It shows up in multiple places, especially when you're dealing with Gaussian distributions and such; remember the Gaussian has a determinant in the denominator. It's going to show up many times in your homeworks, possibly in your exams. And the answer is just A inverse; more precisely, the gradient of log det A is the transpose of A inverse, which equals A inverse whenever A is symmetric, and the symmetric case is the one we'll meet with Gaussians. The intuition is that d/dx of log x is 1/x; think of it like that. All right, we have another 20 minutes remaining. I'm going to do a quick review of probability in the meantime. Do you have any other questions? Yes, question. [BACKGROUND] I'm sorry. [BACKGROUND] This one? [BACKGROUND] The first thing? [BACKGROUND] Above it? [BACKGROUND] Well, do you agree with this? This is just the same thing in a multivariate setting. [BACKGROUND] It is pretty straightforward; the steps are in the review notes posted online, and you can go through them there. All right, so we're going to switch gears now and briefly review probability theory. That was it in terms of the review of matrix calculus and linear algebra. Probability theory is the last topic we're going to review, and from next class we start machine learning, with linear regression. Again, treat this as a review and not as your introduction to the subject; we're not going to teach it the way it would be taught to a student seeing it for the first time, we're just going to review things to refresh your memory. Okay, so first of all, what are the basic elements of probability theory? Probability is basically the study of uncertainty, of things that can happen randomly. And whenever we talk about probability, there is always an implicit sample space. The sample space is the set of all outcomes that can happen, all the random outcomes. For example, your sample space could be the sequence of coin tosses you get if you flip a coin twice: is it going to be heads heads, heads tails, tails heads, or tails tails? If the experiment you're performing is two coin tosses, then any of these is a possible outcome, and the set of all possible outcomes is called the sample space. Now, the things we assign probabilities to are called events. An event is a subset of the sample space. For example, if A is the event that includes heads heads and heads tails, we're basically saying we're interested in the event that the first coin toss turns out to be heads. The entire sample space is also an event, the event that anything at all happens, though that one is not very interesting. So that's the event space: the set of all possible subsets of your sample space. And if your sample space is finite, the event space is basically the power set, the set of all subsets. And then we assign something called a probability measure.
Now, the probability measure takes as input an event, not an outcome, an event, and assigns it a value between 0 and 1. That's something you want to keep in mind: we don't assign probabilities to outcomes, we assign probabilities to events. In the case of a finite sample space the distinction is moot, because you can always create a subset that contains only one outcome and take the probability of that subset; but when we move on to continuous-valued sample spaces, the distinction becomes more relevant. Now, the three axioms of probability are these. First, the probability assigned to an event is always greater than or equal to 0, for all events; again, we're not talking about outcomes but events, which are sets of outcomes, subsets of the sample space. Second, the probability assigned to the entire sample space is always 1, which means some outcome or other from the sample space will always occur. And third, if you take disjoint events, where the intersection between any two of them is the null set, then the probability assigned to the union of those disjoint events equals the sum of the individual probabilities. All very intuitive; there's nothing fancy happening here. Then, conditional probability. Let B be an event such that the probability of B is not zero; again, B is an event, not an outcome, which means we're talking about a set of outcomes. The probability of A given B is, just by definition, the probability assigned to the intersection of A and B, divided by the probability of B. The intersection of A and B is also a set, and therefore also an event. Now, A and B are independent (the upside-down T symbol means independent) if the probability of A intersection B equals the probability of A times the probability of B. And this should make sense: A intersection B is always a subset of A and a subset of B, the probabilities of A and of B are values between 0 and 1, and when you multiply two numbers that are at most one, you get a value that is even smaller. Equivalently, A and B are independent events if and only if the probability of A given B equals the probability of A, meaning the probability of A does not change whether B occurred or not. Now, random variables. Consider an experiment of, say, 10 coin tosses, and one particular sequence of heads and tails as an outcome. A random variable is a function that maps outcomes to real values. Now we're not talking about events, we're talking about outcomes, the outcomes of what happens in a random occurrence. A random variable is a mapping from outcomes to real values. Here's an example of a random variable: given an outcome, that sequence of heads and tails, the function just counts the number of heads in it.
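Here is a tiny sketch of these definitions for the two-coin-toss example (the code and the enumeration style are my own): outcomes, events as subsets, a probability measure on events, and a random variable that counts heads.

```python
from itertools import product

# Sample space: all outcomes of two coin tosses.
omega = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

def prob(event):
    """Probability measure: defined on events, i.e. subsets of the sample space."""
    return len(event) / len(omega)      # uniform, assuming a fair coin

first_is_heads = {o for o in omega if o[0] == "H"}   # the event {HH, HT}
print(prob(first_is_heads))             # 0.5
print(prob(set(omega)))                 # 1.0: the whole sample space

# A random variable: a function mapping outcomes to real values.
def num_heads(outcome):
    return outcome.count("H")

print({o: num_heads(o) for o in omega}) # each outcome mapped onto the real line
```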
That's just a function, and its output is a real value. It's useful to picture it like this: I'm going to call this blob the outcome space, and this line the real line. The outcome space contains lots of different actual outcomes, and on top of those outcomes we define events, and events can be overlapping. A random variable is a function that maps outcomes to real values. The real line is a straight line, and I intentionally draw the outcome space with a wiggly boundary, because the outcome space is just a bag of things that can happen; there is no natural order among them. If you take a die whose faces are colored instead of numbered, and you roll the die, some color is going to show up. You can think of the colors as the different outcomes, and colors are not ordered. But then you can define a random variable that maps each face of the die to a number, say you number them 1, 2, 3, and so on; essentially you're defining a random variable that maps the random outcomes onto the real line. So that could be a random variable. A random variable is a function; it's a very poorly chosen name, because a random variable is neither random nor a variable. It's just a function that maps outcomes to real values. It's just like 'computer science': it's not about computers and it's not a science, but we still call it computer science. So a random variable is neither random nor a variable; it's just a function that maps outcomes to the real line. However, we assign probabilities to events; we don't assign probabilities to outcomes. And the reason we're interested in random variables is that by defining one, we bring every experiment onto a level playing field: you map the outcomes onto the real line and carry out the rest of your analysis starting from the real line. You abstract away the specific details of the random experiment, map everything onto the real line, and then you can have a unified theory of random things happening on the real line. By Val(X), the values of X, we mean the set of all values that X can take. In this example, the random variable can take any of these values. If it's a discrete random variable, the function maps each outcome to one of a discrete set of numbers, and Val(X) is the set of different values the random variable can take. And then we have something called the cumulative distribution function. As I said earlier, probabilities are defined on events; probability lives in the event space. The cumulative distribution function is trying to map those probabilities over to probabilities along the real line.
The definition of the cdf is F_X(t) = P(X <= t). And what does this actually mean? It means the probability assigned to the set of all outcomes omega such that X(omega) <= t. So look at your random variable, choose some t, and gather the set of all omega such that X(omega) <= t; in this picture, it's this one, this one, and this one. Create a set out of those outcomes, that is, create an event, and measure the probability of that event, because we always measure probability in the sample space, or rather the event space. We pretend that the cdf is measuring probabilities on the real line, but what it actually does is map back: find the pre-image corresponding to the set we're interested in, and measure its probability in the event space. And a cdf looks like this; you have to flip the picture over, because here we're mapping from the outcomes to the real line, but once we're comfortable with this arrangement, moving forward we start with the real line as our basis and define probabilities there. Yep? [inaudible] Yeah, so omega is the sequence of heads and tails; that long sequence is omega, just a string of heads and tails. Given that as input, you can count the number of heads in it, and the function that takes the string as input and returns the number as output is the random variable. Okay? Then you have discrete versus continuous random variables; we went through this already, so I'm just going to skip over it. So the cdf is defined this way, and it essentially decouples you from the event space and allows you to think only about the real line. Once we define the random variable, we forget what the event space and the outcome space were, and deal only with the real line. The cdf tells you the probability that your random variable is less than or equal to some value t, and the height of the cdf gives you that value. The cdf is always between 0 and 1: at minus infinity the cdf is 0, and at plus infinity the cdf reaches 1. The probability mass function is defined for discrete random variables, where a probability is assigned to each individual value. A probability mass function looks like this: this is the real line, and at 0, say, there's a spike of some height, where the height gives the probability assigned to that value. All of these heights are between 0 and 1, and the sum of all the heights adds up to 1. That's a probability mass function. Similarly, if the random variable is continuous-valued, then you have a probability density function, and the probability density function is basically the derivative of the cumulative distribution function, the cdf. Expected value. This is probably the most important concept we're going to encounter, and I'm just going to finish expectation and then we can break for the day. So what's the expected value?
The definition of expected value goes like this. Let g be a function from R to R; so g takes a real input and gives a real output. Now, if the input we feed to g is a random variable, meaning the actual value that gets fed in is random, then the expectation of g(X) is defined like this: for a discrete X, sum over every possible x that can be fed as input, evaluating g at that input, multiplied by the probability that x occurs under the random variable. And if X is continuous, then it is the corresponding integral. One way to think of it is this: if the x you're feeding to g is random, then X has some probability density. So this axis is x, this curve is the pdf of X, the probability density function, and this is g(x). What the expectation is telling you is: if you were to sample x's according to this density, and for each of those samples evaluate g(x), and then average them, you'd get some value, and that is the expectation of g(X). In other words, if you're sampling x's according to the random variable X and evaluating g on those sampled inputs, what is the average value of g you would get if you repeated this experiment indefinitely long? That's the expected value of the output of g when the inputs are sampled according to X. So the analytical definition is the integral of g(x) f(x) dx. The other interpretation is to take the average: (1/n) times the sum from i = 1 to n of g(x_i), where the x_i are random samples of X, inputs drawn according to the density, evaluated under g and averaged. And what we basically know is that the limit of this as n tends to infinity, that is, as you perform this procedure with a larger and larger number of samples, equals the integral of g(x) p(x) dx, where p(x) is the probability density and g(x) is the function whose expectation we're computing. And this statement is also called the... anybody? The law of large numbers. Super important. And this estimate of the expectation, the approximate estimate that gets better and better as you increase n, is also called the... anybody? The Monte Carlo estimate. [BACKGROUND] The Monte Carlo estimate, yes. And as you increase n to infinity, your Monte Carlo estimate becomes the true expectation; that's basically a consequence of the law of large numbers. We're going to be seeing Monte Carlo estimates used for various purposes in machine learning, and in this course as well.
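A minimal sketch of a Monte Carlo estimate (the choice of g and the standard normal input are my own): as n grows, the sample average of g(x_i) approaches the true expectation, as the law of large numbers promises.

```python
import numpy as np

rng = np.random.default_rng(0)

g = lambda x: x ** 2   # E[g(X)] = E[X^2] = 1 when X ~ N(0, 1)

for n in [10, 1_000, 100_000]:
    samples = rng.standard_normal(n)   # x_i drawn according to the density of X
    mc_estimate = np.mean(g(samples))  # (1/n) * sum of g(x_i)
    print(n, mc_estimate)              # approaches the true value 1.0
```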
[CS229 Machine Learning, Summer 2019. Lecture 13: Statistical Learning and Uniform Convergence]
So today the plan is to cover the bias-variance tradeoff, and the roles played by regularization and by the hypothesis class in that tradeoff. In the last class we spoke about the concepts of bias and variance, and we briefly mentioned that there is some kind of tradeoff between the two; today we will see two more concrete ways in which this tradeoff comes into the picture in practice. The second part, the role played by the hypothesis class, is actually, if you're interested in machine learning research, a good entry point into statistical learning theory. But the big picture we want to take away at the end of the class is extremely important even if you just want to be a machine learning practitioner. In order to build good predictive models, it's very important that you understand the roles of bias and variance and how you handle the two evils, so to speak; both bias and variance are bad, and balancing between the two is at the heart of successfully building good machine learning models in practice. So even though some of this is theoretical, there are very good takeaway messages even if you only want to apply machine learning in practice. [NOISE] Before we go into today's topics, a quick recap of what we covered in the last lecture, or at least the parts that are relevant for today. We saw the concepts of bias and variance in classical statistics. In classical statistics, we are interested in constructing an estimator for a parameter theta, and we call the estimator theta hat n, where n is the number of examples used in your training set, so to speak. And what we saw was that theta hat n is a random variable, because it depends on the noise in the sampled data from which the model was fit. Because it's a random variable, it has an associated probability distribution, and that distribution is called the sampling distribution. The bias and variance are directly related to the sampling distribution of the estimator theta hat n: the gap between the mean of the sampling distribution and the true parameter is called the bias, and the variance (or covariance) of the estimator is called the variance. Suppose that in the parameter space, theta star, indicated by this black square, is the true parameter from which data is generated in the world. We collect a random sample of size n, run it through our estimator, get a point estimate for theta, and plot it. And we repeat that over and over, resampling data from the true data-generating distribution; in practice that is hard, but as a mental model, imagine you're gathering a new set of data each time. You collect new data, run it through your estimator, get another point estimate, plot it, and you will see that the estimates end up being samples from the sampling distribution. And the sampling distribution will be centered around some point here, which I'll indicate with a triangle.
And this triangle is the mean of the sampling distribution, and the gap between it and the true parameter is called the bias of the estimator. The variance of the estimator is how wide the sampling distribution is: the variance of the sampling distribution is the variance of the estimator. There is a closely related concept of bias and variance in prediction problems in machine learning. In statistics we are interested in statistical inference over the parameters; in machine learning we are interested in making predictions on unseen data. In the prediction setting, this axis is x and this is y, and we get samples (x, y) from the data-generating distribution. Now suppose the true relation between x and y, which we define as the conditional expectation of y given x, is this dotted line; so the dotted line here is f = E[y | x]. Because of this definition, we get y = f(x) + epsilon, where epsilon is some zero-mean noise. And here is what we see: even though this is the true function, we take some training data of size n and construct an estimator, which we call f hat n, and using f hat n we make a prediction on a new test example x star. In principle, the value of f hat n at x star could be anywhere on this vertical line. And if, just like before, we repeat the sampling process, collecting a new set of n examples, constructing a new model from the new set, and making a prediction at the same x star using the new estimator, we get a different prediction each time, and the set of all those predictions follows some distribution. The distribution drawn with a thick line here is the analogue of the sampling distribution from before, except in the prediction setting. Again, this distribution has a mean, which, just like before, I'll mark with a triangle: the mean of f hat n at x star. In general, this triangle need not coincide with the point at which the true function makes its prediction, and this difference is called the bias of f hat n. Just as the earlier gap was the bias of the estimator theta hat, the gap between the square and the triangle is the bias of f hat n: the gap between the expected value of f hat n at x star and f at x star. And similarly, the variance of f hat n is just the variance of this distribution. However, the bias-variance decomposition of the squared error loss has a third component, which we call irreducible error. [NOISE] The dotted distribution indicated here is the distribution of y given x, and it is from this distribution that we make the observations y. If we sample an example from the data-generating process at x = x star, then the corresponding y comes from this distribution. And the expected error we would make using f hat n on a new example (x star, y star) is written as: the mean squared error of f hat n equals the expectation of (y star minus f hat n(x star)) squared.
So: we used some training set, constructed our model, and made a prediction at x star, which is a sample from the thick-line distribution. It's a random sample because there is noise in the training set we used; if you use the same training set, you always make the same prediction, obviously. Suppose the predicted value f hat n(x star) landed over here. Similarly, the true label also has some noise in it, because it's sampled from the data-generating distribution; say the observed y star is here. And now this gap, from y star to f hat n(x star), is what we are trying to decompose, and it can be broken down into three parts. One part is the gap from the black square, the true function value, to y star: that we call irreducible error. There is just no way we can fight that; no matter how well we construct f hat n, there is going to be noise in the observation, and there is nothing we can do about it. So the expected squared error between y star and our prediction f hat n(x star) can be broken down into three parts: the irreducible error, the bias, and the variance. The irreducible error is noise in the test data, not in the training data; it's the noise in the data you're going to encounter when you deploy the model. The bias and variance are properties of the training data and the model class. If the data is noisy and we don't have a lot of data, then the variance of the thick-line distribution is going to be large, more spread out. If you collect a lot of data, the distribution of f hat n becomes more concentrated; but even when it's concentrated, there can still be a bias, a gap between the mean of f hat n and the true value of f. And the remaining piece is the variance, which you can think of as being due to the noise in the training data. So there are three parts: irreducible error due to noise in the test data, variance due to noise in the training data, and bias. We still haven't spoken of any trade-offs between them, but this is the mental model to keep in mind for decomposing the test error into three parts; there's a small simulation of this decomposition sketched below. Now, we also covered regularization, and we will see shortly why regularization plays a role here. Regularization is a way in which we penalize our estimated parameters for taking very large values: we want the estimated theta values to be small. The intuition is that if you have large values in your theta vector, then your fitted hypothesis can be very squiggly; it can make very sharp turns and it can sway a lot.
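Here is a small simulation in that spirit, entirely my own construction rather than the lecturer's: it estimates the bias and variance of a fitted model at a single test point x star by repeatedly redrawing training sets from a known data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * x)     # the true function: E[y | x]
sigma = 0.3                     # label noise; irreducible error = sigma^2
x_star, n, trials = 0.8, 20, 2000

preds = []
for _ in range(trials):
    # Draw a fresh training set from the data-generating process.
    x = rng.uniform(0, 2, n)
    y = f(x) + sigma * rng.standard_normal(n)
    # Fit a deliberately misspecified straight line: y ~ a*x + b.
    a, b = np.polyfit(x, y, deg=1)
    preds.append(a * x_star + b)        # the prediction f_hat_n(x_star)

preds = np.array(preds)
bias = preds.mean() - f(x_star)         # gap: E[f_hat_n(x_star)] - f(x_star)
variance = preds.var()                  # spread of f_hat_n(x_star) over trials
print(bias ** 2, variance, sigma ** 2)  # bias^2, variance, irreducible error
```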
Coming back to regularization: the intuition is that we want to penalize large values of theta, and therefore we add an extra term to our cost function. So J(theta), which previously was the squared norm of (X theta minus y), the standard linear regression cost function, gets an extra term lambda times the squared norm of theta. We add this regularization term to the cost function with the hope that when we minimize the cost, not only are we trying to fit the data well, we're also, to some extent, making sure our theta values are not too large; and the balance between the two is decided by the regularization coefficient lambda that weights the regularization term. We also saw that while the motivation we just went through sounds a little arbitrary, a little hacky, there is in fact a more principled interpretation from a Bayesian point of view. Remember, in Bayesian statistics we construct a posterior distribution over our parameters given the data. Instead of holding on to the full posterior distribution to construct a posterior predictive distribution, what we can do is just calculate the mode of the posterior and use that mode as the output of our estimator, holding on to just that single point estimate, which we call theta MAP, the maximum a posteriori estimate. And we saw how theta hat MAP can be written as the argmax of the product of two terms. One is the likelihood term; in the case of linear regression, the likelihood term turns out to be exactly equal to this (or rather, some scalar times this), so effectively it corresponds to the data-fit term. And then there is a regularizer term due to the prior we assign to the parameters. If the prior happens to be a Gaussian prior, it takes the form of a squared penalty on theta. That's something to keep in mind in general: whenever you assign something a Gaussian distribution or a Gaussian prior, it shows up as a squared error somewhere, because the exponent of the Gaussian contains a squared term, and when you take the log-likelihood, the log and the exponential cancel and you're left with just the squared term from the exponent. So with a Gaussian prior, this corresponds to the regularizer term, and the variance we assign to the Gaussian prior is directly related to lambda: the stronger the prior, the larger lambda is. In your homework, you work out the exact relation between lambda and the prior variance. So in general the picture is: the x-axis is the parameter space, this is zero, the origin, and we have a prior distribution over theta centered around the origin. And we have a likelihood function that is maximized at the maximum likelihood estimate: this is our theta hat MLE, and this is 0. And we are now trying to maximize the product of these two.
In general, the picture looks like this: the x-axis is the parameter space, with zero at the origin. We have a prior distribution centered around the origin, and a likelihood function that is maximized at the maximum likelihood estimate; this is Theta hat MLE, this is 0, and we are trying to maximize the product of the two. The product of the two is this dotted function, and the way to think of it is: since you're multiplying two functions, in any region where either of the two is close to zero, the product will be close to zero. It is only non-zero in regions where both the prior and the likelihood are non-zero. So, assuming this curve is the product of the two functions, Theta hat MAP is the value that maximizes it. This product is like the posterior distribution; it might not be normalized (you would need to divide by the probability of the data to normalize it), but think of it as the unnormalized posterior. The value where the unnormalized posterior is maximized is the same value where the normalized posterior is maximized, and that value is Theta hat MAP. Because the posterior is the product of the two, it lies roughly between the likelihood and the prior, and therefore the value that maximizes the MAP objective is closer to zero than the MLE. The estimated Theta values will be smaller in magnitude because they are pulled toward zero. So that was one way to think of regularization using the Bayesian framework. In fact, in your homework you also have a question about what happens if p of Theta is a Laplace distribution instead of a Gaussian. The intuition to have there: if these axes are Theta 1 and Theta d, the prior formed from a product of independent Laplaces has contours shaped like a star, whereas a Gaussian's contours are circles. If this is Theta hat MLE, these are the contour plots of the likelihood, and this is the contour plot of the prior, then you can see that the product of the two functions achieves high values generally along the axes. Therefore the point where the MAP objective is maximized tends to lie exactly along an axis, and in high-dimensional spaces it turns out this results in sparse parameters: most of the Theta i values will be exactly 0. The way the prior is structured, if you want to stretch out as far as possible toward Theta hat MLE while still keeping a high prior value, you have to move along the axes. So applying a Laplace prior results in sparse parameters; that is called the LASSO estimator, and you explore it in your homework as well.
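To see the sparsity effect concretely, here is a minimal sketch, assuming scikit-learn is available; the data, dimensions, and regularization strengths are all illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
theta_star = np.zeros(d)
theta_star[:3] = [2.0, -1.5, 1.0]    # only 3 truly nonzero coefficients
y = X @ theta_star + 0.5 * rng.normal(size=n)

ridge = Ridge(alpha=1.0).fit(X, y)   # Gaussian prior: shrinks, rarely zeros out
lasso = Lasso(alpha=0.1).fit(X, y)   # Laplace prior: many coefficients exactly 0

print("ridge nonzeros:", np.sum(np.abs(ridge.coef_) > 1e-8))
print("lasso nonzeros:", np.sum(np.abs(lasso.coef_) > 1e-8))
```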
Any other questions before we move on to today's topics? Yes, question. [inaudible] Yeah. So the question is: how do the definitions of bias and variance in statistical inference relate to bias and variance in the prediction setting? The short answer is that f hat n is generally parametrized by Theta hat n; Theta hat n is embedded inside f hat n, and therefore in simple cases the bias and variance transfer directly, which we will go over today. In general, assuming you're using some kind of parametric model and not a non-parametric one, the bias in f hat n is related to the bias of Theta hat n, the parameter of f. [inaudible] They are related in general, no matter what the parametric family is, but it is very hard to come up with an analytical relation between the two. In general, though, the noise in f hat n in the prediction setting is due to the noise in the corresponding Theta. Yes, question. Can you explain that part where [inaudible] So the question is, how does this translate into this? [inaudible] This is exactly what you need to show in your homework. When you take the argmax of this product, you can instead take the argmax of the log of the two terms, which separates into two additive terms. If the prior is Gaussian, the log and the exponential of the Gaussian cancel, and the prior term ends up being this squared penalty. It is pretty straightforward math. [inaudible] Yes: if the prior is not Gaussian, this will not be a squared error. If you apply a Laplace prior, you end up with the 1-norm of Theta instead of the squared 2-norm. The squared penalty arises only with a Gaussian prior. All right, moving on to today's topics. Today we're going to see that there is a trade-off between bias and variance, using two case studies. In the first, we will see the role played by regularization, how it affects bias and variance, using the case of L2-regularized linear regression. I posted notes over the weekend that give the detailed derivation of all the steps we'll do today; here we will go straight to the results and discuss the intuitions, but the step-by-step derivations are in the notes. For L2-regularized linear regression, we first talk about the bias and variance of Theta hat itself. Remember, L2-regularized linear regression has the cost function: the sum from i equals 1 to n of (y i minus Theta transpose x i) squared, plus Lambda times the norm of Theta squared. If you work out the optimal value of Theta for this cost, there is a closed-form solution: Theta hat n equals (X transpose X plus Lambda I) inverse, times X transpose y. This is very similar to the normal equations we derived earlier, except there was no extra Lambda I term in that case: the normal equations gave us Theta hat equals (X transpose X) inverse X transpose y.
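Here is a minimal numpy sketch of that closed form (the function name and setup are illustrative; np.linalg.solve is used rather than forming the inverse explicitly):

```python
import numpy as np

def ridge_closed_form(X, y, lam):
    """Solve (X^T X + lam * I) theta = X^T y, the L2-regularized normal equations."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Setting lam to 0 recovers the ordinary normal equations, provided X transpose X is invertible.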
Back then we had at least briefly mentioned that X transpose X may sometimes not be invertible. In L2-regularized linear regression, we add this diagonal matrix, Lambda times the identity, to X transpose X. What does that give us? Suppose X transpose X has the eigendecomposition U times diag(sigma 1 squared, ..., sigma d squared) times U transpose. By adding Lambda I to this matrix, you are just adding Lambda to every eigenvalue: adding a multiple of the identity to a symmetric matrix shifts all of its eigenvalues by that amount, so X transpose X plus Lambda I equals U times diag(sigma 1 squared plus Lambda, ..., sigma d squared plus Lambda) times U transpose. And when we invert this matrix, we just invert the corresponding eigenvalues. Yes, question. [inaudible] I will not go into the details of that. It is a standard linear algebra result: if A equals U Sigma U transpose, then A plus Lambda I equals U times (Sigma plus Lambda I) times U transpose, because U U transpose is the identity, so every eigenvalue shifts up by Lambda. It is very easy to show; if you post on Piazza I can reply with the steps, but take it as given for now. Can I ask a question? Yes, question. So are U and U transpose [inaudible] U here is orthogonal: U U transpose equals U transpose U equals the identity, so U transpose is the inverse of U. That is the eigendecomposition of a symmetric matrix. So now we see that because we are adding a non-zero value to all the eigenvalues, the inverse always exists; we are never dividing by zero. In the original X transpose X, some of the sigma i squared could have been 0, but now that we have added a small Lambda value, the matrix is always invertible and you always get a unique solution.
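A quick numerical check of that eigenvalue-shift fact (a sketch; the matrix here is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
A = X.T @ X                          # a symmetric positive semidefinite matrix
lam = 0.7

eig_A = np.sort(np.linalg.eigvalsh(A))
eig_shifted = np.sort(np.linalg.eigvalsh(A + lam * np.eye(4)))
print(np.allclose(eig_shifted, eig_A + lam))   # True: every eigenvalue shifts by lam
```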
So (X transpose X plus Lambda I) inverse is now this matrix, and Theta hat n uses it. We can now see that the expectation of Theta hat n is equal to (again, I'm not going to go over all the steps; I'll directly write the final equation): U times diag(sigma 1 squared over (sigma 1 squared plus Lambda), ..., sigma d squared over (sigma d squared plus Lambda)) times U transpose, times Theta star, with zeros everywhere off the diagonal. Does Lambda take on only positive values? The question is whether Lambda necessarily takes only positive values; yes, in the case of regularization Lambda is always greater than 0. Good question. So the expectation of Theta hat n, where Theta hat n is the solution of L2-regularized linear regression, is this matrix times the vector Theta star. We can make a few observations. If Lambda were 0, this whole matrix reduces to the identity: each diagonal entry becomes 1, and U times U transpose is the identity. Yes, question. Can you please explain how you get this expression? It is pretty straightforward; it is on page four of the notes. The way you do it: write Theta hat n as (X transpose X plus Lambda I) inverse times X transpose, and in place of y write X Theta star plus epsilon, because y equals X Theta star plus epsilon by assumption. When you expand this you get two terms, and the term involving epsilon vanishes in expectation, because a fixed matrix times epsilon has expectation 0. To the first term you apply the eigendecomposition, and you get this expression. The detailed steps are in the notes. The idea is that the mean of Theta hat n is this matrix times Theta star. So if Lambda is 0, standard linear regression, then the expected value of Theta hat n equals Theta star: the standard linear regression estimator is unbiased. However, if we add a regularization term, we are multiplying the true Theta star by a matrix all of whose eigenvalues are less than 1. Remember that the eigenvalues decide how much the matrix shrinks or expands its input: if you feed any vector into a matrix whose entire spectrum, its set of all eigenvalues, is small, the output is shrunk. So when Lambda is greater than 0, the norm of this entire expression is smaller than the norm of Theta star itself, which is how you see the shrinkage effect come into the picture. You are multiplying the true parameter by a matrix whose eigenvalues are all less than 1, so the result is a shrunken version of that vector. That is how we see that the regularization term introduces a bias: when Lambda is non-zero, the expectation is never exactly equal to Theta star. So that is the bias. What about the variance? For reference I'll just continue it down here so you can compare it with what's above. By a very similar argument, if you expand it out, the covariance matrix of Theta hat n will be U times diag(tau squared sigma 1 squared over (sigma 1 squared plus Lambda) squared, ..., tau squared sigma d squared over (sigma d squared plus Lambda) squared) times U transpose, where tau squared is the variance of the noise in our data: y equals Theta star transpose x plus epsilon, and tau squared is the variance of the error term epsilon. So the covariance of Theta hat n, the covariance of our regularized estimate, is this matrix. You can observe that this matrix is symmetric, and it is positive definite because all of its eigenvalues are greater than 0, so it is a valid covariance matrix; just a sanity check. What we also see is that as we increase Lambda, the covariance matrix has a smaller spectrum: the set of eigenvalues of the covariance matrix shrinks as we increase Lambda, because each of these diagonal terms is exactly an eigenvalue.
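Both effects are easy to see numerically. Here is a small Monte Carlo sketch (the dimensions, Lambda values, and noise level are illustrative): for each Lambda, we refit the ridge estimator on many freshly sampled noise vectors and track the bias and total variance of the estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, tau = 200, 3, 1.0
X = rng.normal(size=(n, d))                 # fixed design, as in the lecture
theta_star = np.array([1.0, -2.0, 0.5])

def ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in [0.0, 10.0, 100.0]:
    ests = np.array([ridge(X, X @ theta_star + tau * rng.normal(size=n), lam)
                     for _ in range(2000)])
    bias = np.linalg.norm(ests.mean(axis=0) - theta_star)
    total_var = ests.var(axis=0).sum()
    print(f"lambda={lam:6.1f}  bias={bias:.3f}  total variance={total_var:.4f}")
# Expected pattern: bias grows with lambda while the variance shrinks.
```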
So if you have a larger Lambda, the eigenvalues of the covariance matrix are reduced, and the intuition to have is that the variance of your estimator is reduced as well. Remember, if you have a Gaussian distribution with some mean and some covariance, the covariance implicitly defines an ellipsoid around the mean, and the smaller the covariance, the tighter that ellipsoid is. That is the intuition here: the smaller the eigenvalues, the tighter the ellipsoid around the mean. So here is what we see: as we increase Lambda, the bias of our estimator increases, which is bad, but the variance of our estimator decreases, which is good. We want our estimator to have low variance, but we also want it to have low bias. As we increase Lambda, the bias increases (bad) but the variance decreases (good); similarly, if we reduce Lambda, the bias decreases (good) but the variance increases (bad). This is one example of what's called the bias-variance trade-off, where some action we take improves one of the two, bias or variance, but at the same time hurts the other. Yes, question. Can you clarify what you mean by the bias increasing or reducing? The bias increases when we increase Lambda because we are multiplying the true value Theta star by a matrix that shrinks it, so in expectation our estimate gets farther and farther from the true value; that is bad. At the same time, there is also a variance term involved, which is our sensitivity to the data: the larger the value of Lambda, the less sensitive we become to the noise in the training data, and that is good. So by choosing the value of Lambda, you are doing a trade-off between bias and variance. And these two terms translate directly into prediction as well. So that was all about Theta hat itself. Now, for prediction, let's recall the bias-variance decomposition. The mean squared error of f hat n at x star is tau squared, plus (the expectation of f hat n of x star, minus f of x star) squared, plus the variance of f hat n of x star. This is the standard bias-variance decomposition we saw last time as well: the first term is the irreducible error, the second is the bias squared, and the third is the variance. Page 7 of the notes has the detailed steps of the derivation; let's just look at the result and try to get some intuition. So here is the bias. If we define f of x to be Theta star transpose x, the true function, then the bias of the predictor relates to the bias of the estimator in this way: the bias of f hat n of x star equals the expectation of f hat n of x star minus f of x star, which equals the expectation of Theta hat n transpose x star, minus Theta star transpose x star.
This equals the expectation of (Theta hat n minus Theta star) transpose x star, which is the bias of Theta hat n, transposed, times x star. So the bias of the predictor is related to the bias of the estimator by this relation: you take the inner product of the bias of the estimator with the prediction input, and that gives you the bias of the predictor. Yes, question. Since you have closed-form expressions for the expectation of Theta hat n and its covariance, can you use them to solve for Lambda in terms of Sigma and Tau and minimize them? So the question is, can we come up with a closed-form expression for Lambda in terms of Sigma and Tau, and minimize what? If you want the eigenvalues to be as big as possible so you stay close to the data, but you also want the covariance matrix to be small: some relationship where you minimize the variance and [inaudible]? If I understand you correctly, you are asking whether there is a way to use this analysis to compute the optimal value of Lambda that handles both simultaneously. We'll come to that shortly; we are going to answer the question of how to choose Lambda, and the closed-form approach you suggest has some challenges, which we'll get to. So this is the relation between the bias of the predictor and the bias of the estimator; the relation is defined by the point at which you are making a prediction. And this kind of closed-form relation is possible only for simple models such as linear regression with L2 regularization. For more complex models, if f hat n is a neural network, it is very hard to come up with a simple relation between the bias of the network's parameters and the bias of the predictions it makes. But for simple models, we can see the relation between the two. Similarly, the variance of f hat n at x star is equal to x star transpose, times the covariance of Theta hat n, times x star: a quadratic form in the covariance matrix of Theta hat n, evaluated at the prediction input. And because there is a straightforward relation from the estimator to the predictor for both the bias and the variance, the effect of Lambda is also the same: if we increase Lambda to a large value, the variance of the estimator comes down and the variance of the predictor comes down; if we reduce Lambda, the bias of the estimator comes down and the bias of the predictor also comes down. Any questions on this?
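Here is a sketch of how the estimator-level quantities transfer to a prediction point. The closed forms for the mean and covariance follow the lecture's derivation; the function name and arguments are illustrative.

```python
import numpy as np

def predictor_bias_variance(X, theta_star, tau, lam, x_star):
    """Bias and variance of the ridge predictor at x_star, using
    E[theta_hat] = M X^T X theta_star and Cov(theta_hat) = tau^2 M X^T X M,
    where M = (X^T X + lam I)^{-1}."""
    d = X.shape[1]
    M = np.linalg.inv(X.T @ X + lam * np.eye(d))
    bias_theta = M @ X.T @ X @ theta_star - theta_star
    cov_theta = tau**2 * M @ X.T @ X @ M
    bias_pred = bias_theta @ x_star            # bias(theta_hat)^T x_star
    var_pred = x_star @ cov_theta @ x_star     # x_star^T Cov(theta_hat) x_star
    return bias_pred, var_pred
```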
In general, what we care about is minimizing the squared error on future predictions. In machine learning, that is our goal: we want a predictor with low generalization error, where generalization error is how well we perform on unseen examples. And for that, what we see is that the generalization error has three different components we can go after. All of them are non-negative, so reducing any one of them reduces the overall generalization error. These three components, as you saw before, are the irreducible error, the square of the bias, and the variance, which add up to form the mean squared error. Now, to reduce any one of these, let's see what happens. The irreducible error is something we cannot do anything about; that is just a fact of life. We have to live with it: no matter what kind of predictor you come up with, you can never do better than the irreducible error. That is the best we can hope for. Then we have the two extra terms, bias and variance. If we want to reduce generalization error, we need to reduce either the bias or the variance or both. Unfortunately, what we saw is that a step such as regularization cuts both ways: if we tune the regularization to reduce the bias, it increases the variance, and if we tune it to reduce the variance, it increases the bias. That is the fundamental trade-off we need to work with, to find some kind of optimal minimizer of the sum of these two terms. And there was a question asked earlier: why not try to minimize this analytically? To minimize it analytically, we need to be able to compute f of x star. Without being able to compute f of x star, we cannot find the right balance to minimize the sum of the two. Therefore, what we are left with is cross-validation. What we do is take a validation set (we discussed cross-validation in the last lecture) and keep it away when fitting the model. Once you fit your model with some specific value of Lambda, you see how well the fitted, regularized model performs on the validation set. When you measure the error made by the model on the validation set, you are measuring the overall mean squared error, not the individual components. Then we tune the Lambda parameter to different values: increase it a little, decrease it a little, and with each tuned value, measure the performance on the validation set again, until the overall mean squared error on the validation set reaches some kind of good minimum. By doing that process, you are automatically optimizing the overall mean squared error and implicitly finding a trade-off between bias and variance. It is very hard to get an exact breakdown of the irreducible error component, the bias component, and the variance component, but the theory suggests that by tuning Lambda we trade off bias against variance, and there probably is some optimal Lambda that finds a suitable trade-off between the two. The way we go about it in practice is to use a validation set, where we directly minimize the overall mean squared error without ever computing the exact breakdown.
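A minimal hold-out tuning loop might look like the following (a sketch; in practice you would typically use k-fold cross-validation, for example sklearn.linear_model.RidgeCV, and the grid of Lambda values is illustrative):

```python
import numpy as np

def tune_lambda(X_train, y_train, X_val, y_val, lambdas):
    """Pick the lambda whose ridge fit has the lowest validation MSE."""
    best_lam, best_mse = None, np.inf
    for lam in lambdas:
        d = X_train.shape[1]
        theta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d),
                                X_train.T @ y_train)
        mse = np.mean((X_val @ theta - y_val) ** 2)  # overall error, not a breakdown
        if mse < best_mse:
            best_lam, best_mse = lam, mse
    return best_lam, best_mse
```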
The other, larger takeaway is that even beyond regularization, in machine learning, as I said before, what we care about is reducing the mean squared error. And if we want to reduce the mean squared error, we want to take some action that addresses either the bias or the variance. In order to judge whether to go after bias or after variance, we need some heuristic estimate of what the current bias value is and what the current variance value is. But as I said already, it is unfortunately not possible to get a closed-form expression, because f of x star is unknown; that is the heart of the problem. If we knew f, we could calculate the bias component and the variance component exactly, but we do not have access to f. So instead we fall back on heuristics: treat the training error as an estimate of the bias, and the gap between the cross-validation error and the training error as an estimate of the variance. This heuristic is extremely useful in practice. If you change the approximate equalities to exact equalities, it becomes very wrong, because the training error is not the bias and the cross-validation error minus the training error is not the variance, but as heuristics they are extremely useful. So when you are trying to build a model with better generalization error (and again, our goal is to minimize generalization error; we go after bias or variance only in order to minimize it), you always want a rough sense of the breakdown between the bias and variance components contributing to it. The rough heuristic, again, is to consider the training error as the bias and the gap between the cross-validation error and the training error as the variance. When we want to improve our generalization error, we first find this breakdown and then go after the component that is larger. We never want to take an arbitrary step, such as getting more data, adding more regularization, or reducing regularization, just because we can. There are a whole number of things you can do to improve generalization error, but it would be a serious mistake to try them arbitrarily. That is probably the biggest takeaway from this lecture, and probably the biggest takeaway from this course. In machine learning, we want to reduce generalization error; the way to do that is to construct heuristic estimates of its components, namely the training error and the cross-validation error, and only once we have those estimates do we make a conscious decision about whether the next step should reduce bias or reduce variance. The way to make that decision is to see which of the two leaves the larger leeway: if your training error is large and the gap between your cross-validation error and training error is small, you had better go after the bias first.
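In code, those diagnostics are just two numbers, computed as below; the names are illustrative, and these are proxies, not exact identities:

```python
import numpy as np

def bias_variance_proxies(theta, X_train, y_train, X_val, y_val):
    """Heuristic (approximate, not exact) breakdown of generalization error."""
    train_err = np.mean((X_train @ theta - y_train) ** 2)
    val_err = np.mean((X_val @ theta - y_val) ** 2)
    return train_err, val_err - train_err  # (bias proxy, variance proxy)
```

A large first number suggests underfitting (fight bias); a large second number suggests overfitting (fight variance).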
To fight bias, we do things like make the model larger (if it is a neural network, add more layers or add more neurons in each layer), reduce regularization, and so on. And to fight variance, we do things like increase regularization, or collect more data. I am repeating this because it is at the heart of this entire lecture: our goal is to minimize the mean squared error, and in theory there are many possible actions you can take to do that. But before you take any action, you want to come up with a heuristic estimate of the breakdown: an estimate of the bias in our generalization error, and an estimate of the variance. Once we have this breakdown, we make a conscious decision about whether we want to fight bias or fight variance in the next step, and to fight that particular one of the two evils, we choose one of the actions that helps fight that component. Yes, question. Do we not consider reducing the flexibility of the model to fight variance [inaudible]? Yeah, absolutely. The question is whether we should also consider reducing the flexibility of the model to fight variance. Absolutely: a simpler model. This is not an exhaustive list, just some examples; a simpler model to fight variance, a more complex model to fight bias. Yes, question. I can see how we can never estimate the bias exactly. But we know that f star is a function of Theta star, and the expectation of our estimated Theta is a function of Theta star through that matrix. We could plot the diagonal values of the two matrices as we keep changing Lambda, trace out a curve, find the minimum of that curve, and optimize without ever needing Theta star. So the question, if I understand you correctly, is whether there are ways to use this analysis to come up with an optimal value of Lambda that minimizes the sum of both error components. The first problem is that such an analysis is possible only for simple models. If you want to reduce the bias of your model, you may want to switch from linear regression to, say, a neural network or a Gaussian process, and once you do that, this analysis breaks down. The exercise of seeing the breakdown is useful for building a mathematical intuition of what is happening underneath, but in order to take real-world action, you want to follow recipes like the ones on the board. If you see that your model is not fitting your data well, then to fight bias you could, say, add more features; and once you do that, your entire covariance matrix is no longer of use, because it is now a larger covariance matrix with an extra dimension. The actions you take in practice are, in that sense, beyond this simple analysis.
Also, in the first equation we have a Theta star; is our decision going to be influenced by what Theta star actually is? Suppose we already knew Theta star; would our decisions be influenced by it? So the question is whether our decisions are influenced by Theta star. Theta star is unknown, because if it were known, we would not even go through this exercise of taking data and fitting models. It is precisely because it is unknown that we come up with this kind of analysis to see how bias and variance can impact us. All right. Any questions on this before we move on to the next part, uniform convergence? All right, let's move on. What follows next in this lecture is a quick, high-level introduction to statistical learning theory: the general flavor of it, the common framework of analysis, and the kinds of things we can do with it. This will be especially interesting to those of you with an interest in moving into machine learning research, but the results we see will also strengthen our intuitions about bias and variance. For statistical learning theory, we make an assumption that is made widely in statistical learning theory, and indeed in most theory: our training and test data come from the same distribution. If this assumption does not hold, it is very hard to say anything at all in general. So the underlying assumptions we are going to make are that our training and test data come from the same distribution, and that our examples are sampled IID: each example is sampled independently of every other example, and at test and train time we are working with data coming from the same distribution. With these assumptions, we define the following terms. Let's call the distribution D. The risk of a hypothesis is defined as the expectation, over (x, y) sampled from D, of the loss between y and h of x. Here h is our hypothesis: a function that takes some x as input and produces a prediction as output. This is also called a risk functional, because it takes a function as input and returns a real value as output. So the risk of a hypothesis is the expected loss made by that hypothesis under some loss function, generally accuracy or squared error or something along those lines, where the data we feed in as x and compare against y are sampled from the data distribution. That is the definition of the risk of a hypothesis. Now for a related term: if S is a training set of (x^i, y^i) pairs, for i equals 1 to n, then we define the empirical risk as Epsilon hat of h equals 1 over the size of the dataset, times the sum over (x, y) in S of the loss between y and h of x. It is very similar to the definition of the risk, except that instead of taking a full expectation over the entire data-generating distribution, we limit ourselves to an average over the training set we have. This is called the empirical risk.
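In code, the two quantities differ only in what they average over. A minimal sketch with squared loss, where h is any callable and sample_from_D is a hypothetical sampler for the data-generating distribution:

```python
import numpy as np

def empirical_risk(h, S):
    """Average loss over a finite set S of (x, y) pairs."""
    return np.mean([(y - h(x)) ** 2 for x, y in S])

def approx_true_risk(h, sample_from_D, n=100_000):
    """The true risk is an expectation over D; with only sample access,
    we can approximate it by averaging over a large fresh sample."""
    return empirical_risk(h, [sample_from_D() for _ in range(n)])
```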
Now, what we ideally want, the best we could hope for, is to find some hypothesis that minimizes the risk. That is our desire: to find an h that minimizes the true expected risk. But what we actually do in practice is find a hypothesis that minimizes the empirical risk, because we have some training data: we construct a loss function over it and minimize the empirical risk instead. And you have seen this many times; you have done it in your homework with linear regression, where we take the design matrix and the labels and use the normal equations to minimize the empirical risk. So the question that comes to mind is: for a given h, what is the relation between Epsilon hat of h and Epsilon of h? How does the training error relate to the generalization error for any given hypothesis? That is an interesting question. And the other question is: how does our generalization error compare to the best possible generalization error? Our training error is something we can measure; the generalization error is something we cannot. But we are going to use some theory to make statements, to give bounds, on what our generalization error will be, given that our training error takes some value. Similarly, what matters at the end of the day is how our generalization performance compares to the best possible generalization performance. And here we are comparing two things, neither of which we can measure: we have no idea what our generalization error is, and we have no idea what the best possible generalization error is either. But we still want to make probabilistic statements, bounds, on the gap between the two. If we know that the generalization error of our model is within some small range of the best possible generalization error, then we are happy; or we may not be happy with the result itself, but we know we are pretty close to what is theoretically possible. To get a better understanding of each of these two questions, let's define some terminology. On the x-axis, we have the set of all possible hypotheses. By that I literally mean the set of all possible hypotheses: the set of all possible neural networks, the set of all possible SVMs, the set of all possible Gaussian processes, the set of all possible models that are not yet invented today; every possible hypothesis goes on the x-axis. We are going to linearize it to make the visualization simple, but imagine you have all possible hypotheses on the x-axis. And on the y-axis, we are going to have risk. Okay? Now, was there a question?
Yes, so what I mean by hypothesis is the set of all possible predictive models you can think of: random forests, neural networks, everything comes on this axis. Now, on the y-axis, we have risk. And suppose we indeed had access to the true data-generating distribution. Hypothetically, we could take an infinite number of samples, the entire infinite set of samples from the distribution, pick one of these hypotheses, and measure the expected loss of that hypothesis against that entire infinite set of examples. That gives one point: this is the hypothesis we feed in, and this is the risk it evaluates to. Similarly, we can repeat that for every possible hypothesis, across all models, and that gives us some kind of risk function. So this is not the empirical risk; we are talking about the true risk. Assume you get the entire infinite sample, evaluate the loss, and this is the kind of curve you get: this is the risk. However, we do not have access to it. We wish we did, and then we could just minimize it, but we do not. What we instead have is a finite sample S, and from S we can construct the empirical risk, which might look something like this: some kind of approximation. The gap between the true risk and the empirical risk is, intuitively, a function of your dataset size: as n increases, the empirical risk and the true risk become tighter and tighter; the empirical risk gets closer to the true risk. Now, this particular empirical risk function is specific to one specific training set. If we take another training set of the same size, that might give us a different risk function. Think of these two curves as two examples of empirical risk functions obtained from two different training sets, both sampled from the same data-generating distribution, and both of the same size. And if you keep repeating this again and again, we get more and more such dotted lines, more and more empirical risk curves spread around the true risk with some variance. You can imagine that if we increase the dataset size, the spread around the true risk will be much tighter; we will formalize that shortly. Now, the true risk, the thick line, is minimized at some point, and the gap between that minimal point and the x-axis is the irreducible error. No matter what model you get, even a model invented ten years from now, you can never do better than the irreducible error, simply because your data is noisy. Yes, question?
How do we know that the irreducible error can be reached by at least one [inaudible] So the question is: how do we know the irreducible error can be reached by some model? By definition, if you use a non-parametric model and take h of x equal to the expectation of Y given X equals x, this function achieves it. You may not be able to approximate this function well with some particular family, but there exists a function that minimizes your true risk down to the absolute minimum, and that minimum is the irreducible error. This is under the assumption that we are using the squared loss, which is minimized by the conditional mean; if you use the absolute loss, then instead of the expectation you take the median, and so on. But there exists some function that minimizes it. So this is the irreducible error. Now, however, we are going to limit our analysis to a class of models. We are going to limit ourselves to, say, the set of all neural networks, or random forests, or SVMs, or logistic regression. This class of models, this interval on the x-axis, I am going to call script capital H: the class of hypotheses. And within this class of models, the best possible risk we can hope for is found here: the point within the class that minimizes the risk the most. Here I am talking about the true risk, not the empirical risk. Let's call it h star, the best-in-class hypothesis. And the difference between the two, this gap, is called the approximation error. It is called that because it is the penalty we incur by limiting ourselves to a particular class of models. If we did not limit ourselves to this class but used a wider one, our best-in-class risk could have been the same as the irreducible error; because we are approximating the hypothesis space by some class of hypotheses, this is the extra penalty we pay for limiting ourselves. That is the approximation error. However, the model we actually end up using is the one we get by minimizing the empirical error. It is the empirical risk that we minimize in order to come up with a model. So, let me erase the other dotted line: in practice, what we do is take a training set, get this curve as our loss function, and minimize it. The point where it is minimized gives us the hypothesis; let's call it h hat, the estimated hypothesis. In the case of linear regression, this corresponds to the Theta hat vector we get. Now, let me move this a little to the right so it is easier to visualize; assume this is where we get h hat. This is where the empirical risk is minimized, and the corresponding generalization error is over here. And this gap is called the excess risk, or the estimation error; you can use either name.
Let us call it the excess risk. So there is one part, the irreducible error: we cannot do better than that no matter what. There is another part, the approximation error, which is just the penalty we pay for limiting ourselves to a class of hypotheses. And there is yet another component, the excess risk, also called the estimation error, which is the penalty we pay for having a limited, finite dataset rather than the full infinite data. Because if we had the full infinite data, our empirical risk would sit exactly on the thick line, and the excess risk would reduce to 0, since empirical risk minimization would then return the best-in-class hypothesis. Yes, question? Are we actually doing better than... on the actual x-axis, that should be less, right? Yes: we want to find points such that the risk is minimized, so going higher up is bad. But if we compare it to the irreducible error, are we doing better? We are doing worse than the irreducible error, because the irreducible error is even lower. But in that class, can the excess risk get closer to the irreducible error? I would not read too much into the horizontal distance between anything here. What we care about is the expected error incurred; we want the error minimized. The fact that I have drawn the hypotheses along a line at all is a big simplification, so do not read too much into the horizontal ordering of things. So, what we ideally want now is to minimize this excess risk, or at least get a bound on the excess risk. The way we go about doing that breaks down into two sub-steps. Step 1: uniform convergence. Step 2: the excess risk bound. We are going to follow these two steps. The second step is a pretty straightforward step and applies to all models, but the class of model we use and the loss function we use give us different techniques for doing step 1. Roughly speaking, the intuition behind uniform convergence is that we want the empirical risk function to converge to the true risk function in a uniform way, and we want to come up with some kind of probabilistic bound on how far apart the empirical risk function is from the true risk function. If you have taken some advanced mathematical classes before, you might be familiar with the concept of functions converging to other functions, and that is exactly what happens here. The empirical risk is one function, and the true risk is another; we are going to see how, as we increase the number of data points, the empirical risk functions converge probabilistically to the true risk function, uniformly across all points. That is the intuition to have: we want to bound the gap uniformly across all hypotheses. So, the uniform convergence statement looks something like this, where WP means with probability.
So: with probability greater than or equal to 1 minus delta. You generally want to think of delta as a small value. If you want to make a statement with 99% probability, then delta is 0.01; if you want 80% probability, then delta is 0.2. So think of delta as the gap between 1 and the probability level of the statement you want to make. With probability 1 minus delta, for all h in script H (we limit ourselves to a particular class of hypotheses; the analysis is done in the context of that class), the gap between the generalization error Epsilon of h and the empirical risk Epsilon hat of h is less than or equal to some term gamma, where gamma is a term that involves n, delta, and the class of hypotheses. This is the standard template of all uniform convergence results. The result is different for different classes of models and different loss functions: depending on the class of model you have, you will obtain an expression for gamma that uses these quantities in some particular way; if you are using, say, SVMs, the term will be different. But in general, gamma is a function of n, delta, and the hypothesis class, and the statement says: with probability 1 minus delta (think of it as with high probability), for all hypotheses on the x-axis, the gap between the true risk and the empirical risk of that hypothesis is less than gamma. It is basically telling you: give me the training set size n, the degree of confidence you want in the statement, and the class of hypotheses, and I will return you some margin gamma such that, with probability say 99%, the gap between the true risk and the empirical risk is less than gamma for every h simultaneously. Think about it carefully, because it can be a little confusing. What we mean is: fix the number of examples at n, and consider any given h, say this one. We repeatedly collect a fresh set of n examples and evaluate the empirical risk of that h on each fresh sample, and each experiment gives us a point. The statement says that, with probability greater than 1 minus delta, for every hypothesis at once, the empirical risk we get lies within gamma of the true risk at that point: there is an upper band at plus gamma and a lower band at minus gamma, a band of total width 2 gamma around the true risk.
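Written out, the template is the following (a sketch in the lecture's notation):

```latex
\Pr\!\Big[\; \forall\, h \in \mathcal{H} :\;
   \big|\varepsilon(h) - \hat{\varepsilon}(h)\big|
   \;\le\; \gamma(n, \delta, \mathcal{H}) \;\Big] \;\ge\; 1 - \delta
```

Note that the quantifier over h sits inside the probability: a single high-probability event controls the gap at every hypothesis simultaneously, which is what makes the convergence uniform rather than a separate pointwise statement for each h.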
How we actually go about calculating this value of gamma depends on the kind of hypothesis class you are working with, and we will see a few examples of that shortly. Yes. Is this n just the size of the sample? Yes, n is the size of the sample. And the intuition to have here is: the larger the n, the smaller gamma will be, which is good; and the smaller the delta, meaning the tighter the probabilistic guarantee, the larger gamma will be. So if you want a high-probability bound, gamma will be bigger, and a larger n reduces the variance. To reduce gamma, you either increase n or increase delta, which means accepting a lower-probability bound. Next question. Is this an assumption we are making, or is it a theorem you can prove? Yes: this is a template for theorems; you can derive different values of gamma for different model classes. But do we know it is going to be uniform for every different kind of hypothesis? Is that always the case? It will be uniform; I am just giving you the flavor of how the statements look, and we will see a few more details shortly. So all uniform convergence results follow this flavor, where you obtain a probabilistic bound on the margin of how far apart the true risk and the empirical risk can be, and that margin depends on n, delta, and the hypothesis class itself. Next question? Why is the risk function continuous over the space of possible hypotheses? So the question was whether the risk function is continuous over the hypothesis space. It need not be, and the hypothesis space itself need not be continuous either, so let's move on. In general, all uniform convergence results look like this: you bound the gap between the empirical risk and the true risk for any hypothesis (this "any hypothesis" is why we call it uniform convergence), and that gap is less than some margin gamma at a particular level of probabilistic confidence, where gamma depends on n, the level of confidence, and the hypothesis class. And assuming we have derived some gamma in this way, we can then plug it into the excess risk bound. I am intentionally not giving a concrete example of gamma yet; let's first see how it plugs into the excess risk bound, and then we will see a more concrete example. The excess risk bound looks something like this: with probability greater than or equal to 1 minus delta, Epsilon of h star is less than or equal to Epsilon of h hat, which is less than or equal to Epsilon of h star plus 2 gamma. This looks somewhat similar to the uniform convergence statement, but it is quite different. First of all, in uniform convergence we are talking about the same h under two different risk functions: the gap between the true risk and the empirical risk.
Whereas here, we are talking about the true risk itself, but about the difference in true risk between our estimated hypothesis h hat and the best-in-class hypothesis h star. And the only difference in the bound is that gamma became 2 gamma. Why is that? This is a pretty cool result, and a very intuitive one. Can I ask a question? Yes, question. Why does the excess risk involve the estimate [inaudible] Yes, let me draw a quick picture. The question is why the excess risk involves h hat. So, suppose this is our region of interest, this is the true risk, and this is the empirical risk: Epsilon of h and Epsilon hat of h. This is h star, which minimizes the true risk, and this is h hat, which minimizes the empirical risk. Now, the excess risk is this gap. So for the excess risk, we want a bound between two points on the thick line. But what uniform convergence gives us is a bound between the dotted line and the thick line at any given value of h. How we go from that bound to this bound is the question for this proof. And the proof is pretty nice and intuitive. Epsilon of h hat, the generalization error of our learned hypothesis, is less than or equal to Epsilon hat of h hat plus gamma. This comes directly from uniform convergence: we apply the uniform convergence result at h hat, and it says the true risk cannot be more than gamma away from the empirical risk at h hat. Any questions on this first step? Okay. So this is uniform convergence at h hat. Then we say this is less than or equal to Epsilon hat of h star plus gamma. What changed from the previous line? The empirical risk at h hat was replaced by the empirical risk at h star. Why is that allowed? By definition: h hat minimizes the empirical risk, so the empirical risk at h star cannot be less than the empirical risk at h hat. Does that make sense? So this step is by definition of empirical risk minimization. And this, in turn, is less than or equal to Epsilon of h star plus gamma plus gamma: we again apply uniform convergence, now to the gap between the true risk and the empirical risk at h star. Uniform convergence tells us that the gap between these two risk functions is less than gamma at any h, with the stated probability; so it is less than gamma over here, and less than gamma over here. The full chain is summarized below.
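Summarizing the three steps in symbols (all inequalities holding simultaneously on the probability 1 minus delta event from the uniform convergence statement):

```latex
\varepsilon(\hat h)
  \;\le\; \hat{\varepsilon}(\hat h) + \gamma   % uniform convergence at h-hat
  \;\le\; \hat{\varepsilon}(h^*) + \gamma      % h-hat minimizes the empirical risk
  \;\le\; \varepsilon(h^*) + 2\gamma           % uniform convergence at h-star
```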
So this is uniform convergence at h star: the gap between epsilon hat of h star and epsilon of h star is going to be less than gamma, right? Here we are applying the result at h star. And this is basically equal to epsilon of h star plus gamma plus gamma, that is, plus 2 gamma, right? Which is the result we wanted: epsilon of h hat is less than or equal to epsilon of h star plus 2 gamma. So what happened here? We had a tool called uniform convergence that gives us a gap between the dotted line and the thick line at any hypothesis h. And using that tool, we wanted to obtain a bound between two points on the thick line itself, which in general is not possible, but we used the fact that we are doing empirical risk minimization. So uniform convergence tells us this gap is less than gamma and that gap is less than gamma: uniform convergence at two different points. And empirical risk minimization tells us this point is less than that point. So this pins this point to within 2 gamma of that point: epsilon of h hat cannot be more than gamma above epsilon hat of h hat; epsilon hat of h hat cannot be more than epsilon hat of h star; and epsilon of h star cannot be more than gamma below epsilon hat of h star. So at most, the gap between the two is 2 gamma, right? To draw it clearly: here is epsilon hat of h hat, here is epsilon of h hat, and the gap is at most gamma. And we have epsilon hat of h star, which must be more than epsilon hat of h hat, and here is epsilon of h star, at most gamma away. So this is less than gamma, this is less than gamma, and this has to be more; so at most, these two can be 2 gamma apart, right? So once we are able to derive a uniform convergence bound for a particular hypothesis class, we can just multiply it by two and get an excess risk bound that tells us how well our model is going to perform in the real world, relative to the best possible model in the class, right? Now, with this template, any questions before we see a few examples of uniform convergence? This is the general recipe: we're going to use uniform convergence to derive the gamma term, and that's going to be different, using different techniques, for different kinds of models and different kinds of loss functions. But once we get that uniform convergence bound gamma for a given hypothesis class, we plug it in, using this technique, to obtain an excess risk bound, which is just 2 times gamma. All right? No questions? Yes, question. So just to put it in words: restricting yourself to a certain hypothesis class, h hat is the one you pick that minimizes the empirical loss, and the empirical loss of that one is bounded by two gammas at the [inaudible]? Before you go further: this is not the empirical loss, right? Epsilon is not the empirical loss; epsilon hat is the empirical loss. Epsilon is the generalization loss, the true risk. So that's the true risk? This is the true risk: in the excess risk bound, we're talking about the true risk at the empirical risk minimizer and the true risk at the best-in-class hypothesis.
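So the whole argument, written as one chain in the notation above, is just:

    \varepsilon(\hat{h}) \le \hat{\varepsilon}(\hat{h}) + \gamma    % uniform convergence, applied at h-hat
                         \le \hat{\varepsilon}(h^*) + \gamma        % ERM: h-hat minimizes the empirical risk
                         \le \varepsilon(h^*) + 2\gamma             % uniform convergence, applied at h-star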
In general, both of these quantities are not measurable, because we're evaluating the true risk at two different points. We compute h hat using empirical risk minimization, but we have no idea what epsilon of h hat evaluates to; and we have no idea what h star is, let alone what epsilon of h star evaluates to. But we can still derive an upper bound on the gap between the two using this technique. And in order to actually complete the technique, we need a uniform convergence result: a bound gamma that holds at any hypothesis h. Yes, question? On the diagram on the left: are h star and h hat [inaudible], because the empirical risk at h star should be less than the empirical risk at h hat, right? So that's a good question about the ordering in the picture. At h star, the true risk and the empirical risk may be flipped; they may be in either order. What uniform convergence tells us is that the absolute value of the difference between the two is less than gamma. So at h star, the empirical risk may well have been above the true risk; that's totally possible. Yes, question? Say you make that capital H, the hypothesis class: you take it from one layer to 1,000 layers. Is this always going to hold, if you make the hypothesis class [inaudible]? Yeah, so the question is, what if we make the hypothesis class infinite, say you're using neural networks? That's exactly what we're going to come to now. Any questions on this technique? Yes, question. So, regarding the question that was previously asked: isn't h hat defined to be the empirical loss [inaudible] Yes. So h hat is the empirical loss minimizer, which means the empirical loss at h hat will be less than or equal to the empirical loss at h star, right? But that says nothing about the true risk versus the empirical risk at either of those points; those may be flipped. Uniform convergence tells us they cannot be more than gamma apart. So the true risk may have gone below the empirical risk too; that's totally possible. All right, so, moving on. Let's see a few results of, sorry, not empirical risk minimization, a few results of uniform convergence. So, Case 1: the finite hypothesis class. So assume we have a finite number of hypotheses in our class, which might seem very limiting at first.
But then, if you think about it: if you consider logistic regression, or say a neural network, you represent that neural network with its parameters stored in computer memory, right? So the weights and biases of your neural network get stored in your computer as float values, say 64 bits per parameter, or some such representation. And your computer has finite representation capacity. So if each parameter is stored as a 64-bit value, it can take at most 2^64 different possible values, right? So even though your neural network nominally has real-valued parameters, in practice we are dealing with a finite hypothesis class, just because we are working with a finite-capacity computer. So even though this might seem very limiting at first, it's possible to use it in pretty powerful ways, right? So if you are considering a finite hypothesis class, where the size of the hypothesis class is some value K, the uniform convergence result tells us: with probability greater than or equal to 1 minus delta, for all h in the hypothesis class, the absolute value of epsilon of h minus epsilon hat of h is less than or equal to the square root of (1 over 2n) times log(2K over delta). So that's the gamma term. If you remember, in uniform convergence this is the template we always follow, and for each situation you get a gamma term involving n, delta, and something about the hypothesis class; in this case, it turns out to be the square root of 1 over 2n times log of 2K over delta, right? Yes, question. What do you mean by K? K here is the size of the finite hypothesis class. How do you get the size of the class? Yeah, as I said, if you're storing things in the computer, then with p parameters, each stored as a 64-bit value, you get at most (2^64)^p distinct hypotheses, because each parameter can only take one of the 2^64 possible 64-bit values. So you're defining the size of the hypothesis class by the size of its memory representation? Yeah, because you're limited by the memory representation in your computer to that many different possible hypotheses. [inaudible] much more limited than [inaudible] Yeah, in practice it is, which doesn't hurt the bound: the bound might be loose, but the bound is still valid, right? So you get uniform convergence results like this, which translate into an excess risk bound: with probability greater than or equal to 1 minus delta, epsilon of h hat is less than or equal to epsilon of h star plus 2 times the square root of (1 over 2n) times log(2K over delta), right? And there are different results: for a finite hypothesis class we get a margin term in this form, and for different kinds of classes, even for infinite-size hypothesis classes, there are different techniques where you can obtain terms like this.
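Just to get a feel for the magnitudes here, a quick back-of-the-envelope sketch in Python. The parameter count, n, and delta below are made-up illustration values, not anything from a real model:

    import math

    # Finite-class uniform convergence margin: gamma = sqrt(log(2K/delta) / (2n)).
    # For p parameters stored as 64-bit values, K <= 2^(64p), so we work with
    # log K = 64 * p * log(2) directly to avoid astronomically large numbers.
    def gamma(n, delta, p):
        log_K = 64 * p * math.log(2)
        return math.sqrt((math.log(2) + log_K - math.log(delta)) / (2 * n))

    # Hypothetical example: 10 parameters, 10,000 samples, 95% confidence.
    print(gamma(n=10_000, delta=0.05, p=10))  # ~0.15: loose, but a valid bound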
The key observation to make here is that the epsilon of h star term is, by definition, the minimum over h in the hypothesis class of epsilon of h: the true risk of the best-in-class hypothesis, right? And what we see now is that, just like in the bias-variance analysis, we can do a very similar analysis here: what is the optimal size of our hypothesis class? If we increase the size of the hypothesis class, then the min term is going to decrease, because you're searching over a larger hypothesis space; if script H is bigger, the minimized value gets smaller, because you're minimizing over a larger space. But if script H is bigger, then K is larger, so the square-root term is going to go up, because K is in the numerator, right? And this is another way of interpreting the bias-variance tradeoff: earlier we saw the effect of regularization, and here we see the effect of using a bigger versus a smaller hypothesis class. If you make your hypothesis class bigger, think of the min term as the bias term: the bias comes down. But at the same time, the variance goes up, because the size of the class appears in the numerator here, right? And you can think of this like the mean squared error of your hypothesis against its generalization performance: the generalization performance is bounded by one term that acts like the bias, plus another term that acts like the variance. And here we see the tradeoff between larger and smaller hypothesis classes, because a larger class reduces one component but increases the other, and vice versa. In the notes there's another section on how to work with infinite classes, where H is infinite, using something called VC dimension, and you're welcome to read that. We won't test you on the exam on any of that; in fact, we probably won't test you on any of these, except that the main takeaway here is that it's very important you understand this tradeoff between bias and variance. What we care about at the end of the day is improving generalization performance, that is, minimizing the generalization error. And that generalization error always has two components: a bias component and a variance component. It's very hard to break them down analytically, but there are heuristics: we can treat the bias as the training error, and the gap between cross-validation error and training error as the variance. And always, always, when you want to take some action to improve your model's generalization performance, characterize these two using those heuristics and purposefully attack one of them at a time: go after the bias, or go after the variance, depending on which of the two is bigger, right? (A small sketch of this loop follows below.) And that becomes an iterative process: you improve one, and then you see that, say, bias is no longer the biggest source of error, so you go after variance. And also, using these tradeoffs, you see that when you fight bias in some way, that can increase the variance, and so on, right?
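As a rough illustration of that iterative recipe, here is a purely schematic sketch; the target threshold is made up, and train_err / cv_err stand in for whatever you measure on your own training and cross-validation sets:

    # Heuristic from lecture: bias ~ training error,
    # variance ~ gap between cross-validation error and training error.
    def diagnose(train_err, cv_err, target=0.05):
        bias = train_err
        variance = cv_err - train_err
        if bias >= variance and bias > target:
            return "attack bias: bigger model, more features, less regularization"
        if variance > target:
            return "attack variance: more data, smaller model, more regularization"
        return "good enough -- stop iterating"

    print(diagnose(train_err=0.02, cv_err=0.15))  # variance dominates here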
That's probably the larger message from today's lecture, and probably the biggest takeaway from the course itself: dealing with bias and variance to improve your generalization performance, all right? Let's break for today.
Stanford_CS229_Machine_Learning_Course_Summer_2019_Anand_Avati
Stanford_CS229_Machine_Learning_Summer_2019_Lecture_6_Exponential_Family_GLM.txt
Okay. Welcome back. Let's continue. So this is our sixth lecture. The topics for today are the exponential family, which is a family of probability distributions, and generalized linear models, or GLMs. The two topics are tightly coupled, so we cover them in the same lecture. And before we get started, a quick recap, to see the motivation for exponential families and GLMs. So we've seen two models so far, a regression model and a classification model, and they were linear regression and logistic regression, right? And for both the linear regression and the logistic regression, we saw their probabilistic interpretations. For linear regression, the assumption we made was that y given x is sampled from a normal distribution with mean Theta transpose x and some variance sigma squared. It did not matter what sigma squared was, and that's fine. In the case of classification, y given x was sampled from a Bernoulli distribution whose parameter was 1 over 1 plus e to the minus Theta transpose x, right? And we also called this g, with g of z equal to 1 over 1 plus e to the minus z, so that g of Theta transpose x becomes the parameter of the Bernoulli distribution, right? And then we had a smaller aside, mostly a digression in terms of what we're going to cover today, but it will be useful next week: think of functions as points in an infinite-dimensional function space, where you get a one-to-one mapping between the domain of the function and the axes of that space. Anyway, that's not relevant for today; just keep it in the back of your mind. So we're going to continue along the lines of the probabilistic interpretations, and we're going to cover generalized linear models. Off the bat, we see a few things that are common between these two. First of all, we are modeling y given x in both cases, right? The difference is that in one case y is real-valued, and in the other case y is binary-valued, 0 and 1. And the data type of y, whether it is a real value or a discrete binary value, informed our choice of distribution: it doesn't make sense to define a Bernoulli over a real value, and similarly it doesn't make sense to define a normal distribution on a discrete value. So the data type of the y variable informed the choice of the distribution we used. The other thing we see in both of these is the occurrence of Theta transpose x: we have a Theta transpose x here, and a Theta transpose x here, right? These seem like somewhat superficial similarities, but what we'll see today is that there's actually a much deeper and richer theory that unifies these two, and a lot more kinds of models, under a common umbrella. And that's called the GLM, or generalized linear model. We call it linear because Theta transpose x is linear, right? And we call it generalized because we generalize not just to these two types of y, but to more general kinds of y. Okay, so let's get started with exponential families. I'll use a different pen. Exponential family.
So an exponential family distribution is a probability distribution whose density (a PDF, or a PMF in the discrete case) has the form p of y, parameterized by eta, equals b of y times exp of (eta transpose T of y, minus a of eta). Right. This is a cryptic expression, so let's add some color to it. The data on which the probability is defined, we call y. And the parameter of such a distribution, call it eta. This is the Greek letter eta; if you're using LaTeX, you type \eta to get the symbol. It kind of looks like an n, but it's just eta. So the parameter of the distribution is eta, and the support of the distribution is y. We call it y, and not x, because it's a hint: we're going to model the output of our models with exponential families. So the probability density of y parameterized by eta is given by this expression, and there are three functions in here. We have b of y, which is purely a function of y, and it's generally called the base measure. We have T of y, which is called the sufficient statistic. And we have a of eta, which is called the log partition function. A few observations: the base measure b is purely a function of y only, with no eta terms in it, and similarly the log partition function is purely a function of eta, with no y in it. T of y is again a function of y alone, and for pretty much all of this course, T of y will just be equal to y; think of it as the identity function. And why are these functions called by these particular names? One way to think of it is that an exponential family can be constructed by starting with some base measure b of y. Assume it's some probability density, and define a new distribution whose density is proportional to b of y times e to the eta transpose y. So what's happening here? We take some base measure b, and define a new distribution whose density is proportional to the density of the base measure times the exponent of the parameter times the data. To make this simple, let's now consider the scalar case. And because the result has to be a probability density, we have to normalize: p of y, parameterized by eta, equals b of y times e to the eta y, divided by the integral over y of b of y times e to the eta y, dy. We integrate out y in the denominator, okay? Does that make sense? We are just defining a new probability distribution whose density is the base measure times e to the eta y, renormalized. And in some literature, this process is called exponential tilting, right? And now, what is this denominator? The integral of b of y times e to the eta y is just the expectation of e to the eta y under b, right? Does that make sense? This is just the definition of an expectation: b is the density, e to the eta y is some function, and you're integrating the product out, so this is the expectation.
And this quantity also happens to be called the? Anybody? Log partition function. It's related to the log partition function, but this form alone is also called the moment-generating function, okay? That's not directly relevant to our study here, though. So just call it some function, capital A of eta, because you're integrating out y and the only variable left is eta, okay? And the density then comes out in the form b of y times e to the eta y over A of eta, which equals b of y times exp of (eta transpose y, minus log A of eta), right? So a of eta is log of capital A of eta. Yeah. [inaudible] Yep. So the question is, should b of y satisfy the properties of a probability distribution function? In this form, it need not, because there could be some common constant that gets canceled between the numerator and denominator. So you can actually think of it as some b prime up there, and after you cancel the common constants, you end up with the b of y in the final form. But the main idea you want to keep in mind is: to get an exponential family distribution, you start with some base measure b, do a pointwise multiplication by the exponent of a new parameter times y, and then just normalize it. And once you normalize it, what you get is called the exponentially tilted version of b of y, and that will be a member of the exponential family, right? That's one way to think of how exponential family distributions are constructed, and it gives you a motivation for why b is called the base measure; it's also called the pre-tilting measure. Once you perform this process called tilting, you get a distribution in the exponential family, right? All right. So that's just a side note, some motivation for the names. "Partition function" is a common name for a normalizing constant; I think the term comes from statistical physics, where normalizing constants are called partition functions. And because this is a normalizing constant, when you take it up into the exponent, it becomes the log of a normalizing constant, and we call it a of eta. Therefore, it's called the log partition function, right? Those are just some intuitions for where the names "base measure" and "log partition function" came from; everything inside this box is not really essential for our study. All right. So the claim is that a lot of probability distributions we encounter, for example the Bernoulli distribution or the Gaussian (normal) distribution, and many more, are all part of the exponential family. It means you can express these distributions in this form, right? And any distribution that can be represented in this form belongs to the exponential family. The only constraint is that the choices of b, T, and a should be such that the whole expression is always non-negative and integrates to 1 when you integrate out y.
Subject to those constraints, for any choice of b, T, and a, you have a distribution in the exponential family, right? Any questions before we see how the Gaussian and the Bernoulli belong to this family? Yes, question? Go ahead. Just a quick question: what is that symbol at the bottom? This one? Oh, that's just a capital A. I called the normalizer capital A of eta and used log of capital A. Next question? So why are we dividing by the expected [inaudible] So the question is, why are we dividing by that expectation? We divide by the integral of the numerator with respect to y, which makes the whole thing normalize to 1. Think of it this way: you have a set of numbers, say 1, 5, and 7, and you want to keep their ratios but make them add up to 1. So you divide each of them by 1 plus 5 plus 7, right? That normalizes them into a probability distribution; they sum to 1 now. And here we're doing the same thing on a continuous scale: integrate the numerator out, and divide by that integral. This is called normalization. What was [inaudible] Think of 1, 5, and 7 as the values the numerator takes for different values of y, right? All right, so maybe to answer this more clearly, let's see what happens here. We have some function, call it b prime of y, times e to the eta y. Now suppose the integral of b prime of y times e to the eta y over y is equal to some K, let's say 100, right? But we want this to be a valid probability distribution, which means that when you integrate out y, we want it to be equal to 1, right? So what you do is divide by 100, so that when you integrate the whole thing, you get 1, right? And that 100 is exactly what you got by integrating the numerator in the first place. So the integral of b prime of y times e to the eta y, divided by the integral of b prime of y times e to the eta y dy: the entire denominator is just a constant, it comes out of the integral, the two integrals cancel, and you get 1, right? You're just normalizing so that the integral is always 1. Does that make sense? Yes, question. So why did you pick the exponential function here [inaudible] Yeah. So this is just the process called exponential tilting, and there are good reasons why you want to tilt something using the exponent function; there's a lot of theory on why the exponent is the right choice. Think of that as beyond the scope of this class. Yeah. Yes, question. [inaudible] Oh, here, just to keep the intuition simple, we're considering eta to be a scalar. If it were a vector, then sure, the expression would be eta transpose y, and the integral would be over y_1, y_2, and so on. Yes, question. [inaudible] Very first line.
So this is the assumption we are making: take a base measure b prime of y, and construct a new measure such that the new measure is proportional to b prime of y times the exponent of some parameter times y. [inaudible] The intuition is that you're just defining a new distribution, defined like this, right? And this is called the tilted distribution, the exponentially tilted distribution. Okay. But the main point here is that we went through this just to get a flavor of where the names came into the picture, right? We can go into details after the class; come up to the stage and we can go through it. The main point is that if you can decompose a probability density into this form, into these components, then that distribution belongs to the exponential family. Right? Now, let's look at a few distributions that actually follow this pattern. Let's start with the Bernoulli distribution. The Bernoulli distribution p of y, parameterized by phi, is written as phi to the y, times (1 minus phi) to the (1 minus y), right? And the claim is that this form of the Bernoulli distribution can be rewritten in the exponential family form, right? How do we do that? The first thing we can do is take the exponent of the log of phi to the y times (1 minus phi) to the (1 minus y). We just took exp of log, and the reason we can take the log is that both factors are positive. So we can rewrite this as exp of (y log phi, plus (1 minus y) log (1 minus phi)), right? And rearranging terms, collecting the terms that have y in them on the left and the terms that do not on the right, we get exp of (log (phi over (1 minus phi)) times y, plus log (1 minus phi)), right? So all we have done here is pure algebraic manipulation: we started with that form and brought it to this form. And now we can start doing a pattern match. The exponential family says p of y given eta equals b of y times exp of (eta transpose T of y, minus a of eta), and we want to pattern match these two. The first thing we notice is that eta equals log (phi over (1 minus phi)), right? And this implies, if we invert it, that phi equals 1 over (1 plus e to the minus eta), right? And surprisingly, and not coincidentally, this is the logistic function, okay? Moving on: T of y is just y in this case. And a of eta: because a carries a negative sign, a of eta equals minus log (1 minus phi), and if you plug in phi from the expression above, a of eta ends up being log of (1 plus e to the eta). And b of y is just 1, right? So we started with the Bernoulli distribution and did a whole bunch of purely algebraic manipulation, nothing more than algebraic manipulation; we didn't use any kind of logical deductions or anything like that. We just massaged it into this form, and did a pattern match to extract the corresponding base measure, sufficient statistic, log partition function, and eta.
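Written out compactly, the whole Bernoulli manipulation on the board is:

    p(y;\phi) = \phi^y (1-\phi)^{1-y}
              = \exp\big( y \log\phi + (1-y)\log(1-\phi) \big)
              = \exp\big( \log\tfrac{\phi}{1-\phi} \cdot y + \log(1-\phi) \big)

    \Rightarrow \eta = \log\tfrac{\phi}{1-\phi}, \quad \phi = \tfrac{1}{1+e^{-\eta}},
    \quad T(y) = y, \quad a(\eta) = -\log(1-\phi) = \log(1+e^{\eta}), \quad b(y) = 1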
And eta over here, I forgot to mention, is called the natural parameter. Right. So: start with the density function, massage it into this form, and by pattern matching, extract out the relevant terms of the exponential family. And so we have shown that the Bernoulli distribution belongs to the exponential family, right? Yes, question. [inaudible] So the question is, how does this phi relate to the g function we saw in logistic regression last class, right? Hold on a few more minutes; the relation will become clear, right? Now, similarly, let's repeat this exercise for the Gaussian distribution. The Gaussian distribution has the form p of y given mu and sigma squared. Here we are just considering the univariate Gaussian, not the multivariate. And this is 1 over (square root of 2 pi, times sigma), times exp of (minus one-half (y minus mu) squared over sigma squared), right? This is the standard Gaussian density. In our case, we will be assuming a constant variance, and for the sake of the derivation we're just going to assume sigma squared equals 1. You could leave sigma squared as some constant and we would get the same results, but the derivations would look a little more complex. Yes, question. Looking at that comparison to get the different factors, how come the transpose on the eta disappears? How come the transpose on the eta disappeared? Good question, thank you. So in this case, eta is just a scalar, and transposing a scalar is the same as just leaving it as is. Right. So here we're going to assume sigma squared equals 1, which gives us a simplified version of the Gaussian: p of y given mu equals 1 over square root of 2 pi, times exp of (minus one-half (y minus mu) squared). And if you expand the square, this becomes 1 over square root of 2 pi, times e to the (minus y squared over 2), times exp of (mu y, minus one-half mu squared). So what happened here: we expanded the square, one of the terms has only y's in it, and that got pulled out front; the other two terms are in the second exponent. And with this, we can again do the same pattern-matching exercise against p of y given eta equals b of y times exp of (eta transpose T of y, minus a of eta). Doing this pattern matching, we see that eta equals mu in this case, and similarly, mu equals eta. The sufficient statistic T of y is just equal to y. a of eta equals mu squared over 2, and that's the same as eta squared over 2. And finally, b of y equals 1 over square root of 2 pi, times exp of (minus y squared over 2). Okay. Any questions on this? [inaudible] You can take sigma squared to be some constant; a few sigma squareds will show up in a few places, but it's just a constant, and as far as constructing generalized linear models is concerned, it has no bearing. So just to keep the algebra simple, we're going to assume sigma squared equals 1, and that's completely harmless.
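And compactly, the Gaussian manipulation (with sigma squared fixed at 1) is:

    p(y;\mu) = \tfrac{1}{\sqrt{2\pi}} \exp\big( -\tfrac{1}{2}(y-\mu)^2 \big)
             = \tfrac{1}{\sqrt{2\pi}} e^{-y^2/2} \cdot \exp\big( \mu y - \tfrac{1}{2}\mu^2 \big)

    \Rightarrow \eta = \mu, \quad T(y) = y, \quad a(\eta) = \tfrac{\mu^2}{2} = \tfrac{\eta^2}{2},
    \quad b(y) = \tfrac{1}{\sqrt{2\pi}} e^{-y^2/2}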
Okay. Any questions on this? And here we can see, from the point of view of exponential tilting, that the construction suggests you get the Bernoulli if you start with a uniform distribution and perform an exponential tilting. And over here, for the Gaussian, b of y is actually exactly the standard normal density, with mean 0 and standard deviation 1, and it's still a Gaussian distribution, right? So for the Gaussian, if you perform an exponential tilting, you still get a Gaussian distribution, just with different parameters. Anyway, that's just to tie it back to the tilting interpretation we spoke about. Now, exponential families have a few nice properties. Properties of exponential families. One property is that log p of y, parameterized by eta, is concave in eta. That is, the MLE problem is a concave maximization, which is the same as saying the loss function, the negative log-likelihood, is convex in eta, right? Another property is that the expectation of y, given eta, equals the first derivative of a with respect to eta: the first derivative of the log partition function is the expectation of the distribution. Similarly, the variance of y is the second derivative of the log partition function. And this might look a little surprising initially; why is this the case? But it should not be so surprising once we see that the log partition function came from the moment-generating function, right? And the moment-generating function has the property that as you keep taking derivatives, it keeps generating the moments of the distribution. So the first derivative of the log partition function, which is also the log moment-generating function, gives us the mean; the second derivative gives us the variance; and so on. Yes, question. [inaudible] Good question. What I was about to say is that you have these three as your homework questions, so I'm not going to reveal a lot about that. I think question number four in your homework 1 asks you to prove exactly these. Yeah. So these are some very nice properties of the exponential family. And they're especially nice because, in general, if you're given a distribution and asked to calculate the expectation, you are required to take the integral of something: the integral of your variable multiplied by the density. And if you want to calculate the variance, you're supposed to take the integral of the square of the variable against the density. And in general, performing integration is hard; it's messy, it's not nice. But for the exponential family, instead of doing integration, you can just differentiate the log partition function. And if you differentiate it twice, you get the variance, right? Those are some nice properties of exponential families. Any questions? Yes, question.
Is this obvious, or is it a theorem with a proof? Well, it is a theorem, and your homework asks you to prove it: question number four of homework 1. Why is it concave? That's in your homework too; you need to show it. Yeah, all of these are in your homework. [LAUGHTER] Oh, NLL is negative log-likelihood. So to calculate the maximum likelihood estimate, you take the probability, or the log of the probability, and you maximize it with respect to the parameters. Now if you just switch the sign, you get the negative of that, and that's generally considered the loss function, which you want to minimize. So in the maximum likelihood interpretation you're trying to maximize something, and from a loss function perspective you want to minimize something, so you just flip the sign, and that's generally considered the loss function. Yes, question? Can you please explain what's concave down and concave up? Oh, I see. So the question is: by concave, do I mean shaped like this, or shaped like this? The common nomenclature is: when the function has a global maximum, you call it concave, and when it has a global minimum, you call it convex. Okay. So if you have a convex function, the negative of that will be a concave function, and vice versa. Good question. Any other questions? All right. So that's it about exponential families.
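Before moving on, here's a quick numeric sanity check of those two derivative properties for the Bernoulli case we just derived. This is not the homework proof, just finite differences on a(eta) = log(1 + e^eta) at an arbitrarily chosen eta:

    import math

    a = lambda e: math.log1p(math.exp(e))   # Bernoulli log partition function
    eta, h = 0.7, 1e-5

    # Numerical first and second derivatives of a at eta.
    da  = (a(eta + h) - a(eta - h)) / (2 * h)
    d2a = (a(eta + h) - 2 * a(eta) + a(eta - h)) / h ** 2

    phi = 1 / (1 + math.exp(-eta))          # mean of the Bernoulli
    print(da,  phi)                # first derivative  ~= E[y] = phi
    print(d2a, phi * (1 - phi))    # second derivative ~= Var[y] = phi(1 - phi)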
And now we're going to see how the exponential family ties into generalized linear models. All right, so: generalized linear models. With exponential families, so far we were dealing only with y. We defined a probability distribution on y, and the parameter was eta; there was no x anywhere, right? In supervised learning, we're trying to learn a mapping from x to y. So in generalized linear models, we use the exponential family to introduce a relation between a set of variables x and the variable y. And how do we do that? We make the following three assumptions, or think of them as the three steps for constructing a GLM. First, we assume y given x, parameterized by Theta, belongs to an exponential family with parameter eta. That is, given x and some Theta, y follows some exponential family distribution whose natural parameter is eta. Second, the goal of prediction will be to predict the mean: the hypothesis we want to learn is the expectation of y given x. And the third assumption connects x, Theta, and eta: here we make a reasonably strong assumption that eta will be equal to Theta transpose x. Right? The first two seem quite reasonable: we're just assuming y given x follows some exponential family distribution, and that the value we want to predict for an input x is the expectation of that distribution. In general, when you know that a variable is distributed according to some distribution and you're asked to predict its value, predicting the mean of the distribution is a pretty reasonable value to predict. And then we make yet another assumption about the way we tie the parameter of the exponential family to the inputs: eta equals Theta transpose x. Right? Together with these three assumptions, or these three steps, we can construct generalized linear models for different exponential family distributions. Now let's see how we construct them, in a pictorial way. These three assumptions can be visualized like this. We have some x_i in R^d, right? Now, I'm going to run it through Theta transpose x_i, and that outputs eta_i, right? And this eta_i becomes the parameter of some exponential family that has some particular sufficient statistic, log partition function, and b of y; by choosing the three functions of an exponential family, you completely determine the exponential family. The output of the dot product between Theta and x becomes the natural parameter of that exponential family, and from this we sample y^i. This is the big picture you want to have in your mind. So, yes, question. Shouldn't these functions also be somewhat parameterized by the model? [inaudible] Yeah. So the general process of how you go about constructing a GLM is to first choose a distribution. For example, choose to model your output as Gaussian, or choose to model your output as a Poisson distribution, or as Bernoulli. And by making that initial choice of distribution, these get fixed. All right? The T function, the a function, and the b function all get fixed up front when we make the choice of the distribution for y, right? And then the question becomes: what is going to be the natural parameter for a particular y_i? And the natural parameter is then assumed to be Theta transpose x for a given example. So Theta transpose x will give you the natural parameter, and from this, y_i is obtained. So this is the assumption we're going to make about how our data was generated. Right? Any questions about this? Think of this as the flowchart of the data-generating process, of how we believe the data is generated. A few things to note here: Theta is global, in the sense that you have just one Theta for the full model, whereas the parameter of the exponential family, eta, is different for each example. Each x will output a particular natural parameter of the exponential family.
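Here's a tiny simulation of that flowchart, purely illustrative, with made-up dimensions and a random Theta, showing the same Theta-transpose-x to eta to sample pipeline for the Gaussian and Bernoulli choices:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 3, 5
    theta = rng.normal(size=d)        # one global parameter vector
    X = rng.normal(size=(n, d))       # n examples in R^d
    eta = X @ theta                   # one natural parameter per example

    # Gaussian GLM (-> linear regression): eta_i is the mean mu_i.
    y_gaussian = rng.normal(loc=eta, scale=1.0)

    # Bernoulli GLM (-> logistic regression): phi_i = 1 / (1 + e^-eta_i).
    phi = 1 / (1 + np.exp(-eta))
    y_bernoulli = rng.binomial(1, phi)

    print(y_gaussian, y_bernoulli)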
Now we can see how this relates to, for example, linear regression. So, ordinary least squares, or linear regression. Yes, question. [inaudible] What's the question? Why is eta local to one example? So the exponential family corresponds to distributions that have this form, right? And once you choose b, T, and a, you fix the functional form of your exponential family. For example, by choosing your b, T, and a according to one set of choices, you get the Gaussian; by choosing them according to another set of choices, you get the Bernoulli, right? [inaudible] So, if I understood you correctly, the question is whether Theta is local to x? Is that the question? [inaudible] Yeah: you get a different eta for different examples. Each example will output its own eta, right? In this process, x_i, dotted with Theta, gives you eta_i, the natural parameter of the i-th example. Yes, question. [inaudible] Exactly, exactly. I'm going to draw a picture about that shortly, right? So, for ordinary least squares: if we make the choice that our exponential family is Gaussian, then a natural consequence is that least-squares regression is what you should be doing. If you construct a generalized linear model where the exponential family distribution is Gaussian, least-squares regression pops out naturally, right? Why is that? Assuming the exponential family is Gaussian: we want h_Theta of x to be the expectation of y given x, parameterized by Theta. And we saw that the mean of the Gaussian is just mu, and mu as a function of eta is just eta, right? And eta equals Theta transpose x. So we basically get h_Theta of x equals Theta transpose x, right? And that was exactly our hypothesis for linear regression. Similarly for logistic regression. Logistic regression is again basically a generalized linear model where we choose the exponential family to be Bernoulli, right? By just making the choice that the exponential family is Bernoulli, logistic regression pops out as a natural consequence. So, assume the exponential family is Bernoulli: h_Theta of x, again, should be the expectation of y given x, parameterized by Theta. And in the case of the Bernoulli, the mean is just phi, okay? And phi as a function of eta, as we saw, is equal to 1 over (1 plus e to the minus eta). And then the third assumption about GLMs was that eta equals Theta transpose x, so h_Theta of x equals 1 over (1 plus e to the minus Theta transpose x), right? And what we see here is that this is exactly equal to g of Theta transpose x, where g of z equals 1 over (1 plus e to the minus z), right? When we introduced logistic regression, we arbitrarily chose g to take this form, something that squishes minus infinity to plus infinity into the range 0 to 1. But this is a more principled way of deriving it: you assume your output follows the Bernoulli distribution, construct a GLM, and with just those two assumptions, we see that the hypothesis must take this form.
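Side by side, the two derivations collapse to two short chains:

    \text{Gaussian:} \quad h_\theta(x) = \mathbb{E}[y \mid x;\theta] = \mu = \eta = \theta^\top x
    \quad \text{(linear regression)}

    \text{Bernoulli:} \quad h_\theta(x) = \mathbb{E}[y \mid x;\theta] = \phi
    = \tfrac{1}{1+e^{-\eta}} = \tfrac{1}{1+e^{-\theta^\top x}}
    \quad \text{(logistic regression)}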
Yes, question. Was there a mathematical reason for our third assumption? So, this is a modeling choice, where you assume that the natural parameter is a linear combination of your x's. You can make this relation as complex as you like in general, right? Beyond GLMs, you can, for example, say eta equals some neural network, parameterized by some Theta, applied to x, right? And that's totally fine; you can construct such models, except now it's no longer linear. We can't call it a GLM, but you can absolutely do something like that. For GLMs, it's a linear model because we make the assumption that eta is a linear function of the x's. [inaudible] So the question is: GLM losses are convex or concave, but when you look at a scatter plot of your data and it does not look convex or concave, should we still fit GLMs to that data, right? One thing to note is that GLMs and exponential families are convex or concave with respect to the parameters, and in a scatter plot you're looking at data. They're two different things, right? Now, your data is generally embedded in your loss function in some way; the shape of the loss function is decided by your data. But the shape of the loss function is going to be very, very different from what the scatter plot of the data itself looks like, right? Your data may look like any shape or form, and that tells you nothing about the shape of the loss function. [inaudible] So maybe the next diagram I'm about to draw will answer your question; if not, please ask the question again. So, what we've seen is that linear regression and logistic regression pop out as natural consequences of just making the choice of our exponential family distribution, right? And now, how does this look visually, okay? So for linear regression, assume the x-axis is the data. Right. x is in R^d and Theta is in R^d, but for this picture we're just assuming it's one-dimensional. And here, this line, for example, is eta equals Theta transpose x for some Theta, right? And then what we saw was that eta, in the case of the Gaussian distribution, is also equal to mu, right? And y for a given x is distributed as a Gaussian with mean mu. So if we take the example x_i, we get a mu^i, which means that for this value of x, we assume there is a Gaussian distribution centered here. So this is the y-axis, and this is x_i. And from this Gaussian distribution, we sample a y; let's say we got this one, and this is y_i. Similarly, for a different, say x_j, there's another Gaussian distribution, and from it we sample, let's say, y_j. So for every input value, for every x, we dot it with Theta, and we get eta. And that eta is equal to the mu of a Gaussian distribution centered at that point. And from that Gaussian distribution, which is a distribution along the vertical axis, we sample a y, all right? And that makes the x_i, y_i pair, or the x_j, y_j pair. All right.
So let's look at a few more: there's another Gaussian distribution here, at this point, and from it, we assume the data generated this observation. So this is x, this is y; I'm just going to draw those points again. This is x, this is y, and this is how the data was generated. So this is the data-generation process. And now, when we start our machine learning analysis, we start from here, all right? This is the observation of the data, and from this observation, we now want to infer what Theta is. So the true Theta might be something like this line, but we don't know it. Each y_i here has two parts, right? Each of these is basically Theta transpose x plus epsilon, where epsilon is Gaussian noise. So think of Theta transpose x as the signal, on top of which you have noise, and you're trying to separate the signal from the noise, right? We want to recover the signal: what is Theta? What is the invisible dotted line over here from which the y's were sampled, right? And the way this relates to generalized linear models is that we assume that for each example, there is an exponential family distribution, in this case the Gaussian. For each example, there is a Gaussian distribution, and the y was sampled from that Gaussian distribution. Right? And now, by just starting with the x and y pairs, without seeing anything else, we want to learn what Theta is. And that's basically statistics and machine learning, where you start from the data and try to reconstruct what the model is. Yes, question. [inaudible] So the question is: in practice, how do we know whether the noise actually follows a Gaussian distribution or not? Right? And the short answer to that may sound funny, but in machine learning, we don't care. In statistics, you care a lot. In machine learning, our goal is whether we are able to learn a model that does well, in terms of prediction, on unseen data, right? But in classical statistics, your data is, hopefully, generated from a well-designed experiment, where, if you perform a regression analysis, the values of Theta are meaningful in some way; you can test a hypothesis, right? And over there, the assumption of whether the noise was actually Gaussian or not is very important, right? But from the machine learning point of view, we're mostly interested in how well our model predicts on unseen data. For example, if there's a new test example, x-star, and our model happened to learn this other line as its hypothesis, then it's going to say this point is your prediction. And in general, we are interested in this gap, right? And in practice, it's not even possible to measure the gap to the true line, because we don't know the first dotted line. So we're going to make this prediction, but the true data was sampled from a distribution centered on the first dotted line; your data might be here. And what we care about is: how far was our prediction from the observed data? That's all the focus is in machine learning.
We care much less about what the specific value of Theta was, or whether it's meaningful; those are questions more classically handled by statistics, not machine learning. Good question. Next question. [inaudible] So, to summarize your question: is there a more principled way of choosing what our distribution should be, right? The short answer there is: in machine learning, when we're given a set of data, generally what we do is split it into a training set, a validation set, and a test set, right? And for questions such as the one you posed, about the choice of the exponential family distribution, the answer is almost always going to be: try both approaches. Try two different distributions, say Gaussian and Laplace, see how each performs on the validation set, and whichever of the choices works better on your validation set, go with that. [inaudible] Yes, that's just a limitation that we live with. Yeah, exactly. Yes, questions? [inaudible] Well, so, the question is, here we are sampling y. The sampling is the assumption we're making about how the data was generated under the hood. [inaudible] So, what are we predicting? With linear regression, we saw two different interpretations, right? One was: just take the data you have and minimize the cost function. The other was the probabilistic interpretation, right? And there we made the assumption that your y^i was sampled from a normal distribution whose mean was Theta transpose x, right? We are making that assumption, which may or may not be true. And if we believe in that assumption, then the maximum likelihood principle tells us that we need to minimize the squared loss, right? And similarly over here: if we assume that our data comes from a generalized linear model where the exponential family is Gaussian, then our hypothesis should be just Theta transpose x, which is basically the linear regression hypothesis. And if you perform maximum likelihood estimation on your generalized linear model, you get the squared loss as well. Right? So linear regression and logistic regression are exactly special cases of generalized linear models where we chose the family. By making the choice of the family to be Gaussian, we get exactly linear regression; by making the choice to be Bernoulli, we get exactly logistic regression, right? So that's the picture you want to have in mind for linear regression: the assumption is that there is some true Theta, which acts as the mean for each y^i that we're going to sample; the y^i's that we observe are sampled from that, so there is some noise coming in. And we start with these noisy observations, and the hope is that we can prune away the noise and extract the signal in there, right? That's the picture you want to have. And similarly for logistic regression. Yes, question. [inaudible] Yeah.
[inaudible] So we don't get a Theta. When you start your machine learning task, you're just given your dataset, the x's and y's. Then we make the assumption that the data was generated through a generalized linear model. If we make that assumption, the set of actions to take is to perform maximum likelihood on a generalized linear model, and if the generalized linear model has a Gaussian distribution, you get linear regression. [inaudible] You don't generate y's. We don't generate y's; we just take the observed x's and y's and start from there. Exactly: we don't do any kind of sampling as part of our analysis. [inaudible] I'm sorry, can you please repeat that? [inaudible] Effectively, what we're doing is estimating Theta hat: we try to come up with an estimate Theta hat that is as close as possible to the true Theta. That's essentially what we're trying to do. The output of our model is always some parameter vector Theta. For example, if you do linear regression using the normal equations, the output of the normal equations is some Theta hat, and that's our estimate of the slope. All right? For logistic regression, the picture looks somewhat similar. This axis is x, and this line is Eta equals Theta transpose x. Now we have another function, the activation function, and the two levels here are y equals 0 and y equals 1. For logistic regression, Phi equals g of Eta. Again, x is our data axis; for simplicity we draw it as one-dimensional, but in general it is d-dimensional, and Eta equals Theta transpose x is a hyperplane, or a line in one dimension. In the case of linear regression, Eta was Mu itself; but for logistic regression the mean Phi is not Eta but g of Eta, the sigmoid of it. This is basically our logistic regression hypothesis; it is also h_Theta of x. The way the data is generated, the way you want to think about it, is: for any given value of x, calculate g of Eta; this is the Phi^i corresponding to x^i; and from this Phi^i, sample a y. If the mean of the Bernoulli is close to 1, it's more likely that you sample a 1 than a 0, so in that region most points land at 1, with maybe an occasional 0. Similarly, for regions of x where g of Theta transpose x gives a low probability, you mostly get 0s, with maybe an occasional 1. And in the region where h_Theta of x is close to 0.5, you would expect roughly equal numbers up and down. From this data generation process we get a dataset of x's and y's that might look like this.
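Here is the analogous sketch for the logistic case just described, again with a made-up theta_true: each label is sampled from a Bernoulli whose mean is the sigmoid of Theta transpose x.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, n = 1, 50
theta_true = np.array([3.0])          # hypothetical true parameter
X = rng.uniform(-2, 2, size=(n, d))

eta = X @ theta_true                  # natural parameter, one per example
phi = sigmoid(eta)                    # mean of each Bernoulli: g(eta)
y = rng.binomial(1, phi)              # sample a 0/1 label from each Bernoulli

# Near phi ~ 1 we mostly see 1s, near phi ~ 0 mostly 0s,
# and around phi ~ 0.5 a roughly even mix, as described above.
```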
Again, this is just visualized in one dimension. From this dataset, we now want to work backwards to the Theta that gave rise to the hyperplane, which in turn gave rise to the logistic function from which the data was sampled. Does that make sense? From here we want to work backwards and ask: what was Theta? Any questions on this? Yes? [inaudible] So in the case of linear regression, Eta equals Mu precisely because we made the Gaussian assumption. All right, a few things to keep in mind. There are lots of parameters moving around here: we have Etas, Mus, Phis, and Thetas, and it's quite easy to get lost in all of them. So it can be useful to have a big picture of where the different parameters lie, what role they play, and how they relate to each other. We have model parameters, natural parameters, and mean parameters. Theta, in R^d, is the model parameter; Eta^i is the natural parameter; and the mean parameter goes by different names for different distributions: Mu for the Gaussian, Phi for the Bernoulli, Lambda for the Poisson, and so on. The choice of distribution implicitly chooses which mean parameter we are working with. The relation between the model parameter, whose dimension must match x, and the natural parameters is a dot product: Theta transpose x^i gives the natural parameter, and there is one natural parameter per example. Then, from the natural parameter to the mean parameter, we have the g function: g takes us from the natural parameter to the mean parameter. In the Gaussian case, g of Eta is just Eta. In the Bernoulli case, Phi equals g of Eta equals 1 over 1 plus e to the minus Eta, and so on. This function g, which takes you from the natural parameter to the mean of the distribution, is generally called the canonical response function, and it has the property that it always has an inverse: it is an invertible function. The inverse, which takes you from the mean back to the natural parameter, is called the canonical link function. It is called the response function because y is sometimes called the response variable, and g takes you from the Etas to the mean of your y's; the canonical link function is simply its inverse. The thing to keep in mind is that there are three different kinds of parameters. The first is the Thetas: when you perform gradient descent to do maximum likelihood on your generalized linear model, these are the parameters that get trained, the ones we adjust in each step of gradient descent or stochastic gradient descent.
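To summarize the chain in symbols (this only restates the relationships just described):

```latex
\underbrace{\theta \in \mathbb{R}^d}_{\text{model parameter}}
\;\xrightarrow{\;\eta \,=\, \theta^\top x\;}\;
\underbrace{\eta \in \mathbb{R}}_{\text{natural parameter}}
\;\xrightarrow{\;g\;}\;
\underbrace{\mathbb{E}[\,y \mid x;\theta\,]}_{\text{mean parameter: } \mu,\ \phi,\ \lambda,\ \dots}
```

Here g is the canonical response function (the identity for the Gaussian, the sigmoid 1/(1 + e^{-eta}) for the Bernoulli), and its inverse g^{-1} is the canonical link function.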
And once you fix the Thetas, the natural parameters are just the output of Theta transpose x for different values of x. They are ephemeral in some sense: we don't store them, whereas we train and store the Theta parameters. The natural parameters are simply the outputs for different inputs. You then take them through the canonical response function and get the mean of the y variable, and this is the prediction we make: h_Theta of x equals the expectation of y given x and Theta, which is always g of Theta transpose x. So, are there other kinds of generalized linear models? That's the next thing we're going to discuss. We've seen two examples of generalized linear models, classification and regression; let's see a few other types of GLMs. To construct a GLM, the first thing to do is look at your data. You're given pairs of x's and y's, and the first step is to look at your y and find out what data type it is. Your y variable might be binary, just 0s and 1s. It might be real-valued, taking any value between minus infinity and plus infinity. It might be counts, existing only as integers, like the number of clicks my website is going to get, or the number of customers who will walk in through the door. It could be multiclass discrete: if you're trying to classify whether a given image is a cat, a dog, or an elephant, the label is discrete but not binary, a set of classes. And it can also be positive real-valued; for example, if y is the time until some event happens, it is real-valued but limited to positive values. So the first thing to do is look at the data type of y. Supposing the data type is R, real-valued, the next step is to choose an exponential family distribution whose support is compatible with that data type. If y ranges over the full real line, look up the table of exponential families, say on Wikipedia; the exponential family page gives you a table of all the exponential family distributions. Look at those whose support is the full real line; those are all eligible options. For example, you can choose the Gaussian or the Laplace, and maybe a few more, but they need to be in the exponential family. When your data type is real-valued, the name we give the task is regression. Similarly, if y is 0 or 1, we choose the Bernoulli and call it classification. For the choice of Gaussian we got linear regression as our algorithm; for the choice of Laplace you get a different algorithm, but still a valid one; for the Bernoulli, we got logistic regression. The label could also be 1 through k, in which case we can use the categorical distribution and call the task multiclass classification. Or it could be natural numbers, counts, and one example there is the Poisson.
This is in your homework: in question 3 or 4, you're going to construct a generalized linear model for the Poisson, take a dataset, fit it, and make predictions. We call this count regression or Poisson regression. Similarly, if y is positive real-valued, and specifically if it is the time to some event, you have choices like the exponential distribution. Note that the exponential distribution is distinct from the exponential family; the exponential distribution is one member of the exponential family. The naming may be confusing, but this is the exponential distribution, not the exponential family. You can also use the gamma distribution, and there are a few more. Generally, when you're trying to predict the time to some event, the task is called survival analysis. Your data type can even be probability distributions themselves. This may be a little advanced, but if you're doing Bayesian statistics, for example, you can define a probability distribution over Bernoulli distributions, in which case the exponential family distribution to use is the Beta distribution. Similarly, y can be a categorical distribution itself, in which case you want a distribution over distributions, such as the Dirichlet distribution. All of these fit into the generalized linear model framework. You start with the data type of y, and the data type informs the choice of distributions you can use. So if you're wondering whether generalized linear models are used for regression or classification, the answer is: they can be used for anything; just make the appropriate choice of exponential family distribution. A question? Yes. [inaudible] I'm not going to go too deep into that; you can ask me after the lecture and I can explain more. Any other questions? Now, what did we gain by putting them all in a common framework? We saw that it's mathematically elegant, but have we actually gained anything? The answer is yes. Once we make a choice of distribution, we can go through the exercise of algebraically massaging it and pattern-matching to find out what our g function is. So: first, choose the distribution according to the data type. Then express it in exponential family form, which means finding what a of Eta, b of y, and T of y are, and, most importantly, expressing Mu or Phi, the mean parameter, as a function of Eta. That is, calculate the g function; that's the crucial part. This g function decides your hypothesis: h_Theta of x is g of Theta transpose x. And probably the most useful benefit of building a GLM is that once you've done this, in order to perform maximum likelihood estimation you don't need to write out the log probability, take the gradient, and derive an update rule. The update rule for all generalized linear models is the same rule.
And that rule is: Theta gets Theta plus some step size Alpha times (y^i minus h_Theta of x^i) times x^i. You've seen this update rule over and over. For any generalized linear model, for any choice of distribution, if you take the log probability and perform maximum likelihood estimation to get the gradient, the update rule you end up with always takes this form. In the case of logistic regression, the y^i's are 0s and 1s and h_Theta is the sigmoid; for linear regression, the y's are real-valued; for Poisson regression, they're counts. But no matter what data type you have, the update rule is always the same; only h_Theta of x differs, according to the choice of g. And finally, for prediction: y hat is always h_Theta of x^star, which again equals g of Theta transpose x^star. So with GLMs, not only do we have a nice, elegant unifying theory, but we also get an update rule we can use directly, without writing out the log-likelihood or taking gradients; you can just start running this update rule until your model converges. Yes, question. [inaudible] That's a very good point. The question is: when we use this y hat, the prediction we make may not lie in the support of the data type. That is true. For example, for logistic regression, the hypothesis outputs a probability value, and it's then up to us to choose some threshold: anything above the threshold we classify as positive, anything below as negative. Similarly, for Poisson regression your data are counts, but the value that comes out of the hypothesis is real-valued, in which case you can round it to the nearest integer, or you can sample from the Poisson; you can do something like that, exactly. Yes, question. [inaudible] This one? The x^star is just some test example, some unseen example. Yes, question. [inaudible] The question is: the threshold we use to decide whether the output of our logistic regression means 1 or 0, can we adjust it? Yes, you can. It's generally a post-processing step, which means you fit the model as it is, and afterwards, on a validation set, you decide what threshold you actually want to use; you don't always have to use 0.5. [inaudible] It's generally not something you learn from the training data; the threshold is something you tune on a validation set. We're going to have a lecture on evaluation metrics in one of the future lectures, and there we'll talk about thresholds and related topics. All right.
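To illustrate how universal this rule is, here is a minimal sketch of a single stochastic-gradient loop that serves three different GLMs; only the canonical response function g changes between them. The learning rate, epoch count, and function names here are placeholder choices, not anything prescribed in the lecture.

```python
import numpy as np

# Canonical response functions g for three exponential-family choices.
RESPONSES = {
    "gaussian":  lambda eta: eta,                      # linear regression
    "bernoulli": lambda eta: 1 / (1 + np.exp(-eta)),   # logistic regression
    "poisson":   lambda eta: np.exp(eta),              # Poisson regression
}

def glm_sgd(X, y, family, alpha=0.01, epochs=100):
    """One update rule for every GLM: theta += alpha * (y_i - h(x_i)) * x_i."""
    g = RESPONSES[family]
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            h = g(theta @ x_i)                 # h_theta(x) = g(theta^T x)
            theta += alpha * (y_i - h) * x_i   # same line for every family
    return theta

def predict(theta, x_star, family):
    return RESPONSES[family](theta @ x_star)   # y_hat = g(theta^T x_star)
```

The loop body is identical across families; swapping the dictionary entry is the only change, which is exactly the point of the unified update rule.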
So moving on, the last topic I want to cover today is softmax regression. Softmax regression is again a generalized linear model: when you choose the categorical distribution as your exponential family, you get multiclass classification, which is also called softmax regression. The lecture notes walk you step by step through how to construct softmax regression as a generalized linear model, and I highly encourage you to go through the mechanics there. During lecture, instead, I want to provide some intuition about how softmax regression works, so that what you read in the lecture notes, which are pretty heavy on notation, makes a little more sense. Okay. Softmax regression is for multiclass classification, and let's start with some pictures. In the case of logistic regression, we start with two classes in the input space with axes x_1 through x_d: plus examples here, minus examples there. In the picture we drew for the generalized linear model, one axis was x and the other was y; here, both axes are input dimensions, and the label y is encoded in the shape or color of each point. Our goal was to come up with some Theta such that Theta transpose x equals 0 is this line, the separating hyperplane. That was logistic regression: Theta transpose x greater than 0 gives the positive class, and Theta transpose x less than 0 gives the negative class. This assumes that P of y given x, Theta equals 0.5 at the separating hyperplane; if you take 0.5 as the separating probability, then the Theta you learn has the property that on one side of the hyperplane, where Theta transpose x is greater than 0, you classify as positive, and on the other side as negative. Now, what happens with multiple classes? Take axes x_1 through x_d again, and assume one class over here, another class over there, and, in another color, some more examples over here: say these belong to y equals 1, y equals 2, and so on up to y equals k. We have k different classes living in a d-dimensional input space, and we want to build a multiclass classifier. The first thing to change from the two-class view is to figure out where the Theta vector is. In the two-class case, the Theta vector is perpendicular to the separating hyperplane: any vector at less than 90 degrees from Theta gives a positive dot product, and any vector at more than 90 degrees gives a negative dot product. When we move to multiclass classification, we will now have one Theta per class.
So I'm going to call this one Theta_2, this one Theta_1, and this one Theta_k: for every class, you get your own Theta vector. In the two-class case it was clear: Theta transpose x greater than 0 meant one class, less than 0 the other. Now we have multiple Theta vectors, and the way we classify an input x is to calculate Theta_1 transpose x, Theta_2 transpose x, and so on up to Theta_k transpose x. Each of these is a scalar in R, and we pick the k for which Theta_k transpose x is largest; we take the max, or rather the argmax, as the predicted class. The intuition is that for the examples belonging to a class, that class's Theta_k points in their direction, so those examples give a much larger dot product with their own Theta than with the other Thetas. For example, pick this blue point: the dot product between this example and Theta_k, because they are so well aligned, is much bigger than its dot product with, say, the red Theta; in fact the angle there is greater than 90 degrees, so that dot product is negative, and similarly the dot product with the black vector is negative, but with the blue vector it is positive, so class k attains the max. We can write this set of Thetas in matrix form, where each row is a d-dimensional Theta vector, and we multiply this matrix with the vector x, which is also d-dimensional. You have k such rows, one Theta vector per class, and dotting each with x gives a real value. For some example x^i, the different Theta transpose x values might look like this: for class 0 the dot product might be positive, for class 1 negative, for class 2 positive again. The way softmax regression works is to take this set of scalar-valued dot products and convert them into a probability distribution. Now, given a set of scalar values, how do we convert them into a probability distribution? A probability distribution has two properties: first, all values must be non-negative, and second, they must sum to 1. So the first step is to make them all positive while still maintaining their relative order. Can anyone suggest a way to do that? You can add a constant, very good. Another way? The absolute value will make them positive, but it might not maintain the order: a highly negative value could end up larger than a positive one. You can use a sigmoid function, very good; the sigmoid will squash everything between 0 and 1.
A simple technique is to just exponentiate them: exponentiate each Theta_i transpose x, and this makes them all positive. If the input to the exponential is minus infinity, it maps to 0; if it's highly negative, it maps to a very small positive number; and if it's positive, it becomes much more positive, all while preserving the order. Then we normalize, so that the values add up to 1. How do we normalize? By dividing by the sum: if this were continuous you would divide by the integral, but in the discrete case you sum the values up and divide by the sum. So you get exponent of x transpose Theta_i over the summation over j of exponent of x transpose Theta_j, and this is the probability that y equals i given x and Theta. This function is called the softmax function: it takes a set of real-valued scalars and converts them into a probability distribution, where larger scalars get higher probability, smaller scalars get lower probability, and everything adds up to 1. Any questions? Yes. Can I write a little bigger? Sure, sorry about the writing: probability of y equals i given x, Theta equals exp(x transpose Theta_i) divided by the sum over all j of exp(x transpose Theta_j). Is that clear? The reason it's called the softmax function is that if you're given a vector x and want to extract its largest component, one way to approximate that is to compute x transpose softmax of x, which gives you something close to the largest component of x; and it is differentiable, so you can do gradient descent on it. It's a differentiable, smooth way of extracting the maximum component of some x, which is why it's called the softmax (a small code sketch of this pipeline follows below). All right, that's about it in terms of generalized linear models. This is the big picture you want to have in mind: take exponential families, which we already discussed, and relate them to some input x with the assumption Eta equals Theta transpose x. That's the big picture. All right. Thank you.
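As referenced above, here is a minimal NumPy sketch of the whole softmax pipeline: compute one score per class, pass the scores through a numerically stable softmax, and take the argmax as the prediction. The class vectors and the input are made-up values for illustration.

```python
import numpy as np

def softmax(scores):
    # Subtracting the max is a standard numerical-stability trick;
    # it leaves the output unchanged because it cancels in the ratio.
    z = scores - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical example: k = 3 classes in d = 2 dimensions.
Theta = np.array([[ 1.0,  0.0],    # theta_1 (one row per class)
                  [ 0.0,  1.0],    # theta_2
                  [-1.0, -1.0]])   # theta_3
x = np.array([2.0, 0.5])

scores = Theta @ x                 # theta_k^T x for each class k
probs = softmax(scores)            # a valid probability distribution
pred = np.argmax(probs)            # predicted class (class 0 here)
```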
Stanford CS229: Machine Learning, Summer 2019 (Anand Avati). Lecture 1: Introduction and Linear Algebra.
So we're going to start off today with some introductions of the teaching team. That's me, Anand. I am a 4th-year PhD student in Computer Science; I work with Professor Andrew Ng on machine learning and on applying machine learning to different problems, mostly in health care. We also have a wonderful team of teaching assistants; you'll meet the rest of them through the quarter in office hours and such. Okay. So what are the goals of this course? By the end of the course, we want you to be an expert in machine learning, which means you understand the internals of how machine learning algorithms work: not just using them, but understanding what's happening inside the algorithms. That's goal number 1. Goal number 2 is to enable you to build machine learning applications, which means having a good understanding of which algorithms to use in which scenarios. How do you decide whether a given problem is a good machine learning problem? Not all problems are. And once you know a problem is a good machine learning problem, do you treat it as a supervised learning problem or an unsupervised learning problem? We'll go through all those things. The third goal is to enable you to start doing machine learning research, by making you familiar with many of the terms and concepts you will encounter if you pick up and read a machine learning research paper. Those are the top three goals for this course. Prerequisites: machine learning is part mathematical and part computer science; it's really the culmination of both computer science and statistics. So you will need basic computer science principles. For example, you need to know what Big O notation is, and understand the difference between computation and memory. You'll need to be comfortable writing non-trivial programs in Python and NumPy. It would be great if you're comfortable with recursion; some of those ideas will be useful when we translate algorithms to kernel methods, which we'll go into in detail later. We will assume all of you are comfortable writing programs, especially in Python, or are at least willing to put in the time and effort to pick it up. The other big prerequisite is probability: you should have taken a probability course at some point, so concepts like random variables and distributions should be familiar. You should know the difference between a random variable and a distribution; they are distinct things, and you should already know what each is. The same goes for expectations; we'll be using all of these concepts liberally throughout the quarter. The last prerequisite is linear algebra and multivariate calculus, so you should already be comfortable with things like gradients and Hessians; you shouldn't be seeing them for the first time.
You should be comfortable with these concepts already, and with things like eigenvalues and eigenvectors as well. What about the honor code? You are strongly encouraged to form study groups; in fact, forming study groups is a key step to being successful in this course. Especially if you're not so comfortable with the prerequisites, please do form study groups and work together in them. However, you should still write up your homeworks and your code independently. It is fine to discuss and work on solutions together, but once you're done with your study group meeting, put aside all the material you used during the study group and independently, from scratch, write up your homeworks by yourself in your own words. Please do not refer to the material from your study group session while writing up your homeworks. Another important note: it is a violation of the honor code to refer to homeworks and solutions from previous years, whether they are the official solutions released by this course or solutions written by other students on GitHub or anywhere else. We are very serious about the honor code, and although we will not actively look for honor code violations, we trust that you will follow it. Course structure: there are three homeworks. Each homework will run about two weeks, and each will count for about 20% of your final grade. There will also be a final exam. It will be a take-home final exam; the reason it's take-home is that you'll need a computer, since there will be some code. More details toward the end of the quarter, but it's going to be a take-home exam, very likely a 24-hour one. We haven't yet decided exactly when the exam will start, but it will be during the finals slots. Logistics: the course website is up at cs229.stanford.edu. On the course website we have the office hours calendar and the course calendar. There are two calendars: one has all the office hours, when each TA is holding theirs and where; the deadlines are part of the course calendar. On the website there are three big buttons as soon as you open it: the syllabus, a link to Piazza, and the calendars. On the calendars page, the first calendar is office hours; you can click on it and add it to your Google Calendar if you wish, and the exact office hours of every TA are there. If you scroll further down, you'll see the course calendar; we are in this lecture right now.
On the course calendar you'll find details about when each problem set is released and when it is due; all the deadlines are there, so if you want, you can add it to your personal calendar to keep track of them. Then we have the syllabus page. On the syllabus page there is a collection of topics; we're probably not going to cover all of them, but we will cover a pretty big subset. Further down is a lecture-by-lecture listing of what we're going to cover in each lecture, with the corresponding slides or notes. The syllabus page will be updated through the quarter depending on the actual progress we make. And the Piazza forum: all of you should have received an invite to Piazza already; if not, click on the Piazza link and enroll right away. Piazza is probably the most important platform for the course in terms of logistics, because all announcements will be made on Piazza, the homeworks will be released on Piazza, and anything you need to know about, say, final exam logistics will be announced there. So please sign up on Piazza and monitor it. Gradescope: all submissions will be done on Gradescope. If you haven't used Gradescope before, spend a few minutes getting familiar with it. You will upload your homeworks into Gradescope directly and we will grade them there. For Gradescope too, you should have received invites by now; if you have not, create a private Piazza post. In terms of any questions you have through the quarter, your first destination is Piazza. If it's a question about the course content or course logistics, post it on Piazza; it's very likely someone else has already asked something similar, in which case you can help them out if you know the answer, or post your own question if you don't find it there. If it is a question specific to you, for example if you know up front that you can't submit a homework on time and want an extension, or if you have an OAE letter, or anything you don't want the rest of the class to see, create a private Piazza post. And if there is something you want only me to know, not even the TAs, send me an email directly with the word CS229 in the subject. That's about it for logistics. Any questions on the logistics? I'll take that as a no. Okay, moving on. So, what is machine learning? The term machine learning was coined by Arthur Samuel in 1959, and one of the very early programs that got people interested in machine learning was a checkers-playing program that he wrote. Probably most of you know what checkers is: a two-person game on a board that looks like a chessboard, with white and black pieces.
It has certain rules. Arthur Samuel wrote the program a few years before 1959, and it became popular in 1959; the program would learn to play against itself, without any strategy being explicitly programmed into it. It learned by playing against itself, and it improved its game-playing performance as it experienced more and more games. Eventually, it learned to play checkers better than Arthur Samuel himself; I don't know how good a player he was to begin with, but it got pretty good through pure self-play. There was no strategy coded in by Arthur Samuel; only through experience did the program learn to play better. That was the event that sparked widespread interest in machine learning. A more common definition, by Tom Mitchell, a professor at CMU, is that machine learning is the study of computer algorithms that improve automatically through experience. Now, experience is a loose term. What do we mean by it? Most of the time, by experience we mean prior data: examples from the past. The machine learning algorithm itself is a very generic template algorithm; you feed it data from the past, what we call training data, and the algorithm analyzes that data, looking for patterns, and improves its performance at whatever task it is designed for. The key here is data, experience from the past; that is what distinguishes machine learning from other fields. Machine learning is also related to the field of artificial intelligence. AI is a much broader term, and we see the terms AI and machine learning conflated a lot, especially in the news and media. But strictly, if you look at the definitions of AI and of machine learning, machine learning is a subset of AI. AI is about building programs that operate at the level of performance of, say, a human being; again, that's a very loose definition. AI is the broader field, dealing with algorithms that perform at a cognitive level similar to a human being.
Machine learning is one approach to implementing such programs. For example, you could implement an AI program that doesn't look at any training data: you just use your smarts and write a very clever program that works really well and behaves at the level of a human in terms of performance. That's still AI, but it's not machine learning. In machine learning, data is the key: you look at past data and use it as a training set, or you build a simulator from which your program can gain experience by interacting with its environment, and by looking at that data or experience the program improves its performance over time. That's machine learning, one way to make your programs intelligent. However, most of the recent attention on artificial intelligence is due to machine learning. Artificial intelligence has been around for a while, and so has machine learning, but some recent advances have made machine learning very successful, and so interest in artificial intelligence has been reinvigorated. Even within machine learning, there is a specific type of learning called deep learning, which has seen the most advances recently. So it's good to have this clear picture: AI is the much broader field, dealing with programs that operate at a cognitive level similar to a human being; AI does not tell you how to implement such programs. Within AI, machine learning is a subfield that prescribes one way to implement them: look at prior examples and learn from them. And within machine learning, there is a subfield called deep learning, which uses something called neural networks, which we will see later in the course; that is the area that has made the most progress in the last 5 to 10 years. Okay. Here are some examples of where machine learning has made a lot of progress. Computer vision and image recognition is probably the field that has seen the most progress in recent years. There is a famous computer vision dataset called ImageNet, and a kind of model called the convolutional neural network, which together more or less revolutionized computer vision; this is all recent, roughly 2012 to 2015, within the last decade. It has also significantly improved autonomous driving, where machine learning techniques are used to sense where pedestrians are, where the stop sign is, where the traffic lights are; it is a key component of autonomous driving. Machine learning has also significantly improved speech recognition, and it has made possible things like voice assistants: Apple's Siri, the Google Assistant, and so on. Language translation: Google Translate now uses machine learning, or, from what Google tells us, deep learning. In fact, there was also a paper from Facebook, I think last year, on unsupervised translation: give the algorithm a corpus of English documents and a corpus of, for example, French documents, and just by looking at the two corpora, never having been told which English and French sentences match, it learned to translate sentences from one language to the other. Similarly, Google had a paper a few years ago in which they built a single model that could translate between various pairs of languages, and that model automatically learned to translate between new pairs for which it had never seen a training example. Those are examples in language translation, a very exciting field. There has also been a lot of progress in reinforcement learning, or deep reinforcement learning.
Most of that progress has been in game playing, for example at DeepMind over the last few years. They built reinforcement learning models that could play Atari games at the level of humans: you feed the model the pixels from the display of an Atari game, and just by looking at how the game looks on screen, the agent controls the different actions and learns to play at a superhuman level across 30 or 40 games. And also AlphaGo. Go is another board game, widely considered extremely difficult for computers because it requires a lot of strategy and planning. After chess was conquered by computer programs in the 1990s, Go was thought of as the next big board game that would be really hard to beat, and AlphaGo, built by Google's DeepMind, plays better than the world's top Go players. Those are some of the recent advances. Now for a preview of the course, of what we'll be doing through this quarter: we're going to look at supervised learning algorithms, unsupervised learning algorithms, deep learning, some learning theory, and some reinforcement learning. Most of the time will be spent on supervised and unsupervised learning, a good amount on learning theory, and a little on reinforcement learning. Within supervised learning, we'll look at problems and classify them as regression problems versus classification problems. What is a regression problem? One where the output of your model is a real-valued output. For example, if you're trying to predict tomorrow's wind speed at Stanford, and you build a machine learning model that takes today's weather conditions as input, the output is a real-valued number in, say, kilometers or miles per hour: that's a regression model. A classification model is one where the output is binary, or some kind of class: will it rain tomorrow, yes or no? Given today's conditions, if you want to predict whether it's going to rain tomorrow, the answer is yes or no; that's a classification problem. You can also classify machine learning algorithms along many other dimensions, such as generative versus discriminative, or probabilistic versus non-probabilistic; any given algorithm can be placed somewhere on the hypercube of all these dimensions. Taking a step back, what is supervised learning? Supervised learning is a kind of machine learning where the training data comes in pairs, each pair consisting of an input and an output. The input is what you feed into the machine learning algorithm, and the output is what the algorithm should produce. That's called supervision, because for each example, you're telling the algorithm what the right output is; that's where the supervision comes in. In unsupervised learning, you just give the algorithm some data.
There is no explicit supervision; you just ask the algorithm to look for patterns, or interesting structure, in the data, and the output of the algorithm is that interesting structure or pattern. Again, we can roughly classify unsupervised learning algorithms into those that look for clusters versus those that look for subspaces. It's fine if you don't fully understand these terms now; I'm just giving you a flavor of what's to come in the rest of the quarter. Deep learning is what's also commonly called representation learning. Deep learning can plug into a supervised setting, an unsupervised setting, a reinforcement learning setting, and so on; in it, you're trying to learn representations of your data. In non-deep-learning approaches, you manually feed in what the right representation is for your supervised or reinforcement learning algorithm, whereas with deep learning, you let the algorithm learn the representation itself along with the final task. Then we'll cover learning theory: the super important concepts of the bias-variance decomposition and trade-off, generalization, and uniform convergence. Why do we expect a model trained on one particular training set to perform with any level of accuracy when you put it into production and start using it in the real world? Questions like that we will answer in learning theory, and learning theory is probably the part of the course that makes it most interesting, in the sense that you learn principles that are common across learning algorithms, even ones that have not yet been invented: the foundational principles of why machine learning works. And we'll spend some time on reinforcement learning as well, though not a lot. So, here's an example of supervised learning: image classification. These are a few images of handwritten digits from a very famous dataset called MNIST, where the input is a set of pixels; the images are 28 by 28 pixel grayscale images, with a pixel intensity at each location. You feed the set of pixels as input to some learning algorithm, and the output of the learning algorithm is one of the digits 0 through 9. It is a multiclass classification problem: the output is a classification, but not binary; it's one among 10 classes. Given this training data, you want to learn a machine learning model that can predict what the handwritten digit is for digits it hasn't seen before. In fact, you will be working on this dataset and implementing a neural network to do handwritten digit classification in one of your homeworks. Okay. So, unsupervised: here's an example of unsupervised learning.
This example is what's commonly known as the cocktail party problem. Imagine there are n speakers, in this case two, at a cocktail party, just the two of them, and there are two microphones placed in the room at specific locations. The two microphones record what the two speakers are saying, and each recording is some mixture of the two voices. The task is to take these two audio clips, each a different mixture of the two speakers, and have the machine learning algorithm output two waveforms in which the voices are separated. All you feed the algorithm is the two clips: there is no supervision telling it what speaker 1's voice sounds like or what speaker 2's voice sounds like, no supervision whatsoever. You just feed it two clips of mixed audio, and the model outputs the separated voices. Let's see if the audio works. [FOREIGN] [OVERLAPPING] 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. That's one microphone recording the two speakers. [FOREIGN] [OVERLAPPING] 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. That's the other microphone; the sound of one speaker is a little louder in one recording and vice versa. And when you separate them: [FOREIGN] 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. All the algorithm was given was these two audio clips and nothing else, just two wave files; it analyzes the waveforms looking for structure, and it is able to separate the voices with no supervision whatsoever. You're going to do one of these in your homeworks as well: not two, but a set of five audio clips mixed together into five different audio files, and you will implement the ICA algorithm, which we're going to cover in the course, and separate the audio into five distinct clips. And finally, reinforcement learning. Here's an example where the goal of the algorithm is to control what's commonly called the inverted pendulum, or the cart-pole. Imagine trying to balance a vertical stick on your hand; we're trying to control this cart-pole agent so as to keep the stick placed on it balanced. When you start the learning algorithm from scratch, each trial is an episode; once the stick falls, you start the next episode. So this is episode 0: the stick is falling, falling. Episode 1, falling again. Episode 2, fell down. Episode 3, fell down. You keep training the algorithm for more episodes, and eventually it gets better. I stopped the algorithm at about 130 episodes for this demo, but if you let it continue, it will learn to balance the cart-pole for potentially indefinitely long. There you go. Oops; I think it falls off the table there.
Anyway, this is going to be on your homework as well: you're going to write a reinforcement learning algorithm that learns, in a simulator like this, how to balance a stick on top of a cart-pole. Yes, question? "Just out of curiosity, for the previous example you gave, is it required that you have as many clips as you have speakers?" So the question, for those watching the lecture online, is whether it is necessary to have as many audio clips as there are speakers. Yes, that's exactly right: you need as many audio clips as speakers. Okay, that's about it for the introduction. For the rest of today, we'll be doing a review of linear algebra, in case it's been a while since you took it, to brush you up and familiarize you with the concepts that matter most for this course. In the next class, we'll cover matrix calculus and some probability and statistics, and from then on we'll get into actual machine learning models. Yes, question? "You said that in unsupervised learning, you try to find the structures. [inaudible]" So the question was: in unsupervised learning you're just given a dataset and asked to find structure, yet I also spoke about the language translation example where a model was given unpaired corpora and still learned a mapping between languages. The answer to that is a little technical; let's take it offline after the lecture. All right. I'm going to assume all of you are familiar with the basics of linear algebra: all of you know what a matrix is, what a vector is, and what you can do with vectors and matrices. We'll review them, but I will also assume some familiarity. First of all, some notation. What's a vector? For the purposes of this course, a vector, call it v, is an element of R^d. Let's make sure we all understand what that means. The symbol means "element of", from set theory: v is a vector, and it is an element of this set, the d-dimensional real space. What is a d-dimensional real space? Picture the real line along each of the axes x, y, z; that picture is R^3, because there are three dimensions. A vector is an element of a d-dimensional real space; it is a point that lives in that space. The vector can be written out as v = (v_1, v_2, ..., v_d). A vector is generally considered a column vector, and we write it as a column in this course; you can also write the row vector v^transpose = (v_1, ..., v_d). What do these numbers mean? They are the coordinates of the point: this length is v_1, this one is v_2, this one is v_3, and the point over there is v.
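If you want to try this notation out in NumPy, which is the language this course uses, here is a quick sketch:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])    # a vector in R^3

# To make the column/row distinction explicit, use 2-D arrays:
col = v.reshape(-1, 1)           # shape (3, 1): a column vector
row = col.T                      # shape (1, 3): its transpose, a row vector
```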
We write matrices with capital letters, say A, while for vectors we use lowercase letters. A is an element of R^{m x n}. You can interpret a matrix A as an element living in an m-by-n-dimensional real space. What does that actually mean? A matrix is a grid of numbers, real-valued numbers for the purposes of this course; we hardly ever deal with complex numbers, so it's only real numbers. Think of it as an m-by-n grid of real numbers. You can interpret it as n column vectors of dimension m each, or as m row vectors of dimension n each. There is a special matrix called the identity matrix, which has 1s along the diagonal and 0s everywhere else; I just write a big 0 to indicate that all the off-diagonal cells are 0. You can also have a diagonal matrix, where all the cells except the diagonal are 0 and the diagonal entries can be any value. There is also the symmetric matrix: a symmetric matrix is one where A = A^T. What is a transpose? Switching rows and columns. Exactly. To transpose a matrix, take the first column and make it the first row of the transpose, take the second column and make it the second row, and so on. The property the transpose satisfies is A_ij = (A^T)_ji, where i and j are the row and column indices of the matrix: when you transpose, the row index becomes the column index and vice versa. The trace of a matrix is the sum of its diagonal entries: take a square matrix and sum up the diagonal, and that's the trace.

Let's look at some basic operations you can do with vectors. If you have two vectors, you can combine them in two different ways; these are vector-vector operations. The first is the inner product. Technically the inner product and the dot product are distinct concepts, but in this course they will be the same thing. One way to think of the inner product: if you have two vectors x, y, both in R^d (to perform the inner product, the two vectors have to be of the same dimension d), then the inner product is the sum over i = 1 to d of x_i y_i, which is also written x^T y. That's the mathematical representation of the dot product. You can also picture one vector written as a row vector times another written as a column vector; the two must have the same dimension, and the inner product gives you a scalar. You feed in two vectors, and the output is a single number; you've lost the angles and the orientation of the vectors once you perform an inner product. The other operation you can do with two vectors is the... anybody? You can do the cross product. The cross product, however, is not very interesting for us. There is also something called the outer product.
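Before moving on to the outer product, here is a short NumPy sketch of the notation so far; the particular numbers are made up for illustration.

```python
# The notational conventions above, written out in NumPy.
import numpy as np

v = np.array([1.0, 2.0, 3.0])          # a vector v in R^3
A = np.array([[1.0, 2.0],
              [2.0, 5.0]])             # a matrix in R^{2x2}

I = np.eye(2)                          # identity: 1s on the diagonal
D = np.diag([3.0, 7.0])                # diagonal matrix

print(np.allclose(A, A.T))             # True: A is symmetric (A == A^T)
print(np.trace(A))                     # 6.0: sum of the diagonal entries

x = np.array([1.0, 0.0, 2.0])
print(x @ v)                           # inner product x^T v = 7.0, a scalar
```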
The cross product and the outer product are different things. In the cross product, you're given two vectors and you find a new vector perpendicular to the plane containing them; here, though, we're going to talk about the outer product. So we have the inner product and the outer product. In the outer product, you can have x in R^d and y in R^p: for the outer product, the two vectors need not have the same dimension, whereas for the inner product they must. Mathematically we write the outer product as x y^T. You could also form a different outer product, y x^T. For the inner product, x^T y and y^T x were the same, so you could switch the order; for outer products, switching the order gives you something different. The way to think of the outer product, at least pictorially, is this: we have a column vector and a row vector, the column d-dimensional and the row p-dimensional. With the inner product, we took a row vector and a column vector and produced a scalar. With the outer product, you take every possible pair of elements from the two vectors; for example, take the first element of the column vector and the third element of the row vector, multiply them, and that product becomes the first-row, third-column cell of the resulting matrix. So you take two vectors and construct a matrix out of them: pick the i-th element of one vector and the j-th element of the other, multiply them, and that becomes the (i, j) entry of the matrix. Again, for the outer product you don't need the vectors to have the same dimension.

A matrix constructed from one column vector and one row vector like this is also called a rank-one matrix. Why is it called rank one? Because, one way to think of it, it is made of a single pair of a column vector and a row vector. You could also pick two rank-one matrices and add them up: one rank-one matrix plus another rank-one matrix. The column vectors have to be of the same dimension, and the row vectors have to be of the same dimension, but the two need not match each other: say the columns are d-dimensional and the rows are p-dimensional. You form one rank-one matrix and another rank-one matrix, sum them element-wise, and you get another matrix of rank...? [inaudible] So you take two rank-one matrices and add them up. [inaudible] You get a rank-two matrix, assuming the vectors are linearly independent. If the vectors are linearly independent and you add two rank-one matrices, you get a rank-two matrix. Now, what happens if we add k of them? What's the rank of that matrix? Rank k? The rank of this matrix is actually going to be less than or equal to the minimum of d, p, and k.
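The rank bound just stated is easy to check numerically; the vectors below are made up for illustration.

```python
# Outer products and the rank of their sums, checked numerically.
import numpy as np

x = np.array([1.0, 2.0, 3.0])            # x in R^3
y = np.array([4.0, 5.0])                 # y in R^2 (dimensions may differ)

M1 = np.outer(x, y)                      # 3x2 matrix with entries x_i * y_j
print(np.linalg.matrix_rank(M1))         # 1: one outer product is rank one

# Add a second rank-one matrix built from linearly independent vectors.
M2 = np.outer(np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0]))
print(np.linalg.matrix_rank(M1 + M2))    # 2: equals min(d, p, k) = min(3, 2, 2)
```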
The rank of a matrix cannot be bigger than its smaller dimension. By adding linearly independent rank-one matrices, you increase the rank of the resulting matrix, but you can only go as high as the smaller of the two dimensions. Yes, question? Can you define the rank, please? Yes, I will define the rank. For now, think of each rank-one matrix as a sheet of paper: as you add more sheets, you increase the rank of the matrix. We'll define precisely what the rank of a matrix is in a few minutes. And note that just by looking at a matrix, a bunch of numbers in rows and columns, it's pretty much impossible to tell what its rank is. Superficially, you cannot tell the rank by looking at a matrix; it's an inherent property. There are simple cases: if you're given a diagonal matrix, with zeros everywhere off the diagonal, you can read off the rank. But in general, you cannot tell the rank of a matrix just by looking at its values.

Okay, so those were vector-vector operations. Let's look at some matrix-vector operations. To review the vector-vector operations: for the inner product, both vectors must have the same dimension and you get a scalar; for the outer product, they can have different dimensions and you get a rank-one matrix. Now, for matrix-vector operations, first consider a matrix A and a vector x, where A belongs to R^{m x n} and x belongs to R^n. So Ax is now of what dimension? [inaudible] Mm-hm, it's in R^m. How do you think about this operation? First, you can think of the matrix as a set of rows, where each row has n elements, and the vector you're multiplying by is a column vector. All you're doing is inner products: take the first row, compute its inner product with the vector, and you get a scalar; take the second row, compute the inner product, and you get another scalar. You get as many scalars as there are rows in the matrix, so Ax is in R^m. You could write this as m-by-1, but since we assume vectors are column vectors, if I write R^m, just think of it as m-by-1 if you like.

Another interpretation of matrix-vector multiplication uses the same matrix, except you think of it as columns. Say A is an m-by-4 matrix, so x has four elements, and you're computing Ax. In the first interpretation we did inner products. Here, what you do instead is pick the first element of x and scale the first column of A by it, then pick the second element of x and scale the second column of A by that number.
By scaling, I mean multiplying every element of that column by the number: an element-wise multiplication of the scalar with all the entries of the column. Then pick the third element of x and scale the third column, and so on, and finally sum up the scaled columns. So Ax is a sum of m-dimensional column vectors, each scaled by the corresponding entry of the vector. And whether you compute the product the row way or the column way, you always end up with the same right-hand side. That's matrix-vector multiplication.

Then you have matrix-matrix multiplication. Again, there are two interpretations of how to visualize the product of two matrices. Assume you have two matrices, m-by-k times k-by-n; it is important that the number of columns of the first and the number of rows of the second are the same. Interpretation number 1 is the inner-product interpretation. You have m rows of dimension k each in the first matrix, and n columns of dimension k each in the second; the dimensions of the rows and columns match, so naturally we can take inner products. Take every pair of a row from the first matrix and a column from the second (that gives you m times n pairs) and perform the inner product; the result becomes the cell value at the corresponding i-th row and j-th column. For instance, take the second row and the fourth column, compute the inner product, and that is the second-row, fourth-column entry of the output matrix. So matrix multiplication is essentially a whole bunch of inner products done concurrently, in parallel.

You can also view the same operation, m-by-k times k-by-n, differently: now think of the first matrix as k column vectors and the second as k row vectors. The columns are m-dimensional and the rows are n-dimensional, since m and n are different, but there are k of each. Pick them pairwise, not all possible pairs but matched pairs: the first column with the first row. Take the outer product, and you get a rank-one matrix, column 1 from the first matrix with row 1 from the second. Then add the outer product of column 2 with row 2, and so on, up to column k with row k. The columns come from the left matrix and the rows come from the right matrix. Add up all k rank-one matrices and you get another matrix, and the matrices computed these two ways are going to be exactly the same. They're just two different interpretations of the same mathematical operation. Any questions so far? Yeah. [inaudible] Yep, the question is, should this be a rank-one matrix? When you take the outer product of a column vector and a row vector, each term is a rank-one matrix: this is one rank-one matrix, this is a separate rank-one matrix, and so on. When you add k of them up, you get a matrix of rank at most the minimum of m, n, and k. [inaudible] If the rows and columns are linearly independent, the sum will not be rank one; you keep increasing the rank as long as you're adding linearly independent rows and columns.
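Both pairs of interpretations can be verified numerically; here is a small sketch with arbitrary example matrices.

```python
# The two views of Ax and the outer-product view of AB, verified numerically.
import numpy as np

A = np.arange(1.0, 13.0).reshape(3, 4)     # A in R^{3x4}
x = np.array([1.0, 0.0, 2.0, -1.0])        # x in R^4

# View 1: each entry of Ax is the inner product of a row of A with x.
rows = np.array([A[i, :] @ x for i in range(3)])
# View 2: Ax is a sum of the columns of A scaled by the entries of x.
cols = sum(x[j] * A[:, j] for j in range(4))
print(np.allclose(rows, A @ x), np.allclose(cols, A @ x))   # True True

B = np.arange(1.0, 9.0).reshape(4, 2)      # B in R^{4x2}
# AB as a sum of k rank-one matrices: column l of A times row l of B.
outer_sum = sum(np.outer(A[:, l], B[l, :]) for l in range(4))
print(np.allclose(outer_sum, A @ B))       # True
```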
But you cannot keep adding rank-one matrices and increase the rank forever; you have an upper bound, which is the smaller of the two dimensions, rows or columns. All right. There was another question about what the rank is, and before we get to that, let's ask: why do we need to learn linear algebra for machine learning? Why is linear algebra even relevant? Some of the situations where we will end up using linear algebra through the rest of the quarter:

First, to represent data. Suppose, in a supervised learning setting, you want to predict the price of a house, and the inputs you're given are things like the number of bathrooms, the number of bedrooms, the area, etc. You would represent your training data as a matrix X, also called the design matrix, where each row is a different example and the columns correspond to what we call features. For example, the rows could be house 1, house 2, house 3, house 4, and the columns could be the number of bedrooms, the area, the number of bathrooms, and so on. Your design matrix X is most conveniently represented as a matrix, and similarly your supervision values, the actual prices of those houses, are most conveniently represented as a vector. So even just to represent your data, it is convenient to use concepts from linear algebra like matrices and vectors. And then we'll do a whole lot of manipulation with these, like multiplying an x vector with a parameter vector called Theta, and so on. So we'll use linear algebra both for representing our data and for doing operations on it.

Another situation where we will need matrices is in probability, to represent what are called covariance matrices. We'll cover probability tomorrow and go over what a covariance matrix is; for now, note that covariance matrices are symmetric, meaning the matrix and its transpose are the same. We'll also use linear algebra for multivariate calculus: gradients (I'm expecting you know what a gradient is), which are vectors, represented as column vectors; Hessians, which are matrices, something like the second derivative in the multivariate setting, and symmetric; and Jacobians, which are derivatives of a vector-valued output with respect to a vector-valued input. A Jacobian is again a matrix, and it will generally not be symmetric. So: gradients are vectors, Hessians are symmetric matrices, Jacobians are matrices, and so forth. We're also going to use linear algebra heavily in kernel methods. You're not expected to know what a kernel method is yet; I'm just listing it so you know that linear algebra has uses in all these scenarios. And by calculus, I also mean optimization in general.
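As a concrete illustration of the design-matrix idea just described, here is a toy dataset with made-up housing numbers.

```python
# A toy design matrix for the house-price example (all values made up).
import numpy as np

# Columns (features): number of bedrooms, area in sq. ft., number of bathrooms.
X = np.array([[3, 1500, 2],
              [2,  900, 1],
              [4, 2200, 3],
              [3, 1800, 2]], dtype=np.float64)   # one row per house

y = np.array([450.0, 300.0, 640.0, 510.0])       # prices, in $1,000s

n, d = X.shape              # n = 4 examples, d = 3 features
theta = np.zeros(d)         # a parameter vector we might later learn
predictions = X @ theta     # one matrix-vector product scores every house
```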
That is, given a loss function, we need to minimize it, and we're going to use gradients and Hessians and so on there. So linear algebra is important, and it's very important that you are comfortable manipulating these concepts: matrices and vectors, multiplying them, taking inverses, etc. Any questions before we move on to a few more interesting concepts? No questions?

So we saw matrix-vector multiplication over here. Let's now do a geometric interpretation of it. Linear algebra is one field of mathematics where it's very easy to visualize things, which makes it really fun, and also fairly easy once you get the core intuitions. We saw matrix-vector multiplication: let A be a matrix in R^{m x n}, and let x be a vector in R^n. When you do the matrix-vector multiplication, you get Ax, which is in R^m. One way to think of Ax is through the row interpretation or the column interpretation we just saw. But an even more useful way is to think of A as some kind of function. We've been looking at the matrix as a set of numbers; stop thinking of it as a set of numbers, and think of it as a function that takes x as input and outputs the vector Ax. The inputs are vectors and the outputs are also vectors: x is a vector, and Ax is also a vector. In general they live in different dimensions: the function takes as input an n-dimensional vector and produces as output an m-dimensional vector.

Now assume A is in R^{3x3}, which means it takes as input values from a three-dimensional space, and its output is also in a three-dimensional space. Draw the input space with x-, y-, and z-axes. Pick some vector x living in this three-dimensional space (it has a column representation), multiply it by A, and say it maps to some point here in the output space. Take another vector, run it through A, and it gives you a different output; perhaps it comes here. Similarly, take a third vector; through A, it probably lands over here. What this picture represents is that we have this function A, which we no longer think of as numbers: we take some point in the input space, feed it into the function, and it outputs a new vector in a possibly different-dimensional space. In this case both are three-dimensional, to make the visualization simple. Now, if A is full rank, which means its rank is 3 in this case, there is going to be a unique mapping between every point in the input and every point in the output.
There's going to be a one-to-one mapping between every point in the input space and a corresponding point in the output space. If A is a matrix that makes such a one-to-one mapping possible, A is a full-rank matrix. So this is the input space, and this is the output space. You could then imagine that there exists some other matrix, call it B, that takes these output values as input, say the red point, runs them through B, and outputs the original point over here. If A is a full-rank matrix, then such a B exists: it does the reverse mapping, and this B is also called A inverse. So A is the function that takes vectors as input and maps them to some output, and as long as it is full rank, there will be another matrix, its inverse, that takes the output as input and returns the original point.

That's the full-rank case. Now, what happens if A is not full rank? A matrix that is not full rank is technically called rank deficient. Again, draw the input space with x-, y-, z-axes and an output space with x-, y-, z-axes, and let A again be in R^{3x3}, but now assume it is a rank-2 matrix. Before, we had a full-rank, rank-3 matrix, so it could uniquely map every point to a corresponding point, and you could go back through the inverse. If A is rank deficient with rank 2, it means there exists a two-dimensional subspace in the input space. What does a subspace mean? Assume the ambient space is three-dimensional; take this room to be the ambient three-dimensional space. Any two-dimensional plane, for example this sheet of paper extended indefinitely in all directions, is a subspace that lives in the three-dimensional space. If the rank of a 3-by-3 matrix is 2, then in this three-dimensional ambient space there exists a two-dimensional subspace specific to this matrix, and that subspace must pass through the origin. There is also a corresponding subspace in the output space, which is also two-dimensional; that is what makes it a rank-2 matrix: a two-dimensional subspace in the input and a two-dimensional subspace in the output, each passing through the origin. And there exists a one-to-one mapping between points on the input subspace and points on the output subspace: a point on this subspace may get mapped here, another point over there may get mapped there, and a third point somewhere else. These subspaces extend to infinity in all directions, in both the input and output spaces, and A gives you a unique one-to-one mapping between elements of the two subspaces. But A is a 3-by-3 matrix; it's a function to which you can feed any input. Just because a subspace exists on which a one-to-one mapping holds doesn't mean you can't take a point that lives outside the subspace.
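The full-rank versus rank-deficient distinction is easy to see numerically; the matrices below are made-up examples.

```python
# Full-rank vs. rank-deficient 3x3 matrices.
import numpy as np

A_full = np.diag([2.0, 3.0, 1.0])
print(np.linalg.matrix_rank(A_full))      # 3: full rank
B = np.linalg.inv(A_full)                 # the reverse map (A inverse) exists
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(B @ (A_full @ x), x))   # True: B undoes A

# Rank deficient: the third row is the sum of the first two.
A_def = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])
print(np.linalg.matrix_rank(A_def))       # 2: outputs live in a 2-D subspace
```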
Think of this room as the ambient three-dimensional space and this sheet as a two-dimensional subspace. What about the point over here, where the pen is? It's not in the subspace; it's outside it. Take such a point: you can still feed it as an input to the function, and it's going to map it somewhere. But where will it map it to? The way to think about that is that any given point in the input space can be decomposed into two parts. Assume this is x, and x lives outside the subspace. You can decompose x into one part that lives in the subspace and is, in a sense, nearest to the given x, and another part, the residual from that nearest point to the actual point. That is: assume the origin is in that corner, this is a two-dimensional subspace passing through the origin, and we have a point over here, represented by a vector that starts at the origin and ends at the point. You can decompose it into a part that lies strictly in the subspace, extending to the point of the subspace nearest to x, and a second part that goes from that nearest point to x, perpendicular to the subspace. It's like the Pythagorean theorem: you have two components at right angles. Any vector can be decomposed into a component that lies in the subspace and a component that is strictly perpendicular to the subspace. Sorry, what's the question? [inaudible] Yes, you could call it a resolution, but the more commonly used term is decomposition: you decompose the vector into two parts. And when I say decomposition, I don't mean taking the vector and splitting its coordinates into two groups; you should not confuse this decomposition with splitting it into one set of coordinates versus another. What we are talking about is that x can be written as (I'm going to use the word projection here) the projection of x onto the subspace, which is technically called the row space, plus the projection of x onto the null space. Now, what does that mean? We just saw two terms, row space and null space. The row space is the subspace that gets mapped bijectively to the corresponding output subspace: the subspace in the input space for which the bijection exists is called the row space. It's called the row space because it is made up of precisely those points which can be represented as linear combinations of the rows of A. Yes, question? Are all projections orthogonal? I'll come to that in a moment. Yes.
So the row space is made up of all points that can be represented as linear combinations of the rows of A, and it is precisely the subspace for which a bijection exists between the input space and the output space. That's the row space, and the corresponding subspace in the output is called the column space. It's also called the range of A; think of it in terms of the columns: it is precisely the set of all points which can be represented as linear combinations of the columns of A. Now, the point over here, which did not lie in the row space, can, as we said, be decomposed into the sum of two components: one component that lies in the row space and is nearest to x, and another component, what's left over as the residual, which is orthogonal to the row space.

The question was: is the projection always perpendicular to the space onto which you're projecting? The answer is yes. The way to think of it: assume this is a two-dimensional subspace in the ambient three-dimensional space, and you have some point over here, where the black pen is, and you want to project it onto the subspace. The way you do the projection is: over all the points that exist in the subspace, calculate the distance from each point to your point, and choose the point with the smallest distance. That point, the one in the subspace having the smallest distance to your point, is the projection of your point onto the subspace. And the line connecting the true point to the projection point always meets the subspace at an angle of 90 degrees; it is always perpendicular to the subspace itself. It's fairly intuitive: in this subspace, the point on the plane nearest to your point is the one directly below it, so that connecting line is always perpendicular. Does that answer your question? Okay.

Yes, question? So what is the subspace? Is it possible to [inaudible] origin [inaudible]? Subspaces will always pass through the origin. By definition they pass through the origin, because every point of the subspace is some linear combination of the rows or columns, and the combination can be all zeros: 0 times column 1 plus 0 times column 2, and so on. So the origin is always part of both the input subspace and the output subspace. Yes, question? [inaudible] Is it possible [inaudible]? Uh, I don't know [inaudible]. So the question is, what is the row space? The row space is precisely the set of all points that can be represented as linear combinations of the rows of A. The column space is the set of all points that can be represented as linear combinations of the columns of A, which means the set of all points you can obtain by multiplying some vector with that matrix; and if you take the transpose, you get the row space. Yes, question?
Do you see that these subspaces are [inaudible]? So the question is: are the points of different colors the rows or columns of the matrix A? Is that the question? Good question, thank you for asking it. The points that I'm choosing over here need not be either a row or a column of A. The points that lie on this subspace can be represented as some linear combination of the rows of A. What that implies is: if A has three rows, there will be three points in the subspace which correspond to the three rows of A. They have to lie there, because, assuming A has three rows and the subspace is obtained by linear combinations of those three rows, one of the combinations could be (1, 0, 0): 1 times the first row plus 0 times the second row plus 0 times the third row must lie in the subspace, which means the first row itself exists at some point in the subspace. Does that make sense?

So that's the row space. Now, back to the question of what happens if you take some x that does not lie in the row space of A and multiply it by A. This is where the decomposition helps us find an answer. Ax is going to be (I'm going to abuse notation) A of x_R plus x_N, where x_R is the projection of x onto the row space and x_N is the residual. The direction in which the residual lives is also called the null space of A: any point that lives in the null space of A always gets mapped to zero in the output space. So any point in the input space can be broken down into two components, a component that lives in the row space and a component that lives in the null space, and Ax = A(x_R + x_N). Because A is a linear function, you can write this as A x_R + A x_N, and A x_N = 0: you're feeding in a vector in the null space, and multiplying it by A always sends it to zero. So Ax is just A x_R. The operation you want to picture in your mind is: given any point in the input space, which may or may not lie in the row space, multiplying it by A effectively projects that point onto the row space and carries the corresponding output of that projection forward. Essentially, the operation prunes away anything outside the row space from the input: find the projection onto the row space, and carry that point forward into the output space. This concept of projection is super important; you're going to come across it again when we do linear regression. So how, technically, do we calculate the projection? So far I've just represented it as some abstract function called projection.
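Before turning to how projections are computed, here is a quick numerical check of the Ax = A x_R identity, using SciPy's null_space helper and the made-up rank-2 matrix from before.

```python
# Checking Ax = A(x_R + x_N) = A x_R numerically.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])       # rank 2, so it has a 1-D null space

N = null_space(A)                     # orthonormal basis for the null space
print(np.allclose(A @ N, 0.0))        # True: null-space vectors map to zero

x = np.array([3.0, -1.0, 2.0])
x_N = N @ (N.T @ x)                   # component of x in the null space
x_R = x - x_N                         # component of x in the row space
print(np.allclose(A @ x, A @ x_R))    # True: the null-space part is pruned away
```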
And we've been using the resulting vector after the projection without saying how to compute it. So how do you actually calculate the projection? That's not too hard either. We'll start with the simplest case: you have a vector, call it v, and another vector, call it b; two different vectors. Now we want to project b onto the subspace spanned by v. v is a vector, and it induces a one-dimensional subspace made up of all scalar multiples of v: take v and multiply it by 2, and 2v is part of that subspace; 1.5v is part of that subspace; minus v is part of that subspace. This entire line is the subspace spanned by v, and we want to project the vector b onto it. How do we do that? For a given v, there is something called the projection matrix: the projection matrix of v is v v^T / (v^T v). We'll decipher this in a minute. This is a projection matrix, which means you can take this matrix and any other vector, whichever vector you like, and if you multiply b by this matrix, the resulting point is the projection of b onto the subspace spanned by v.

Now, why is that the case? Let's look at what's happening. I'm going to rewrite v v^T b / (v^T v) as (v / ||v||) (v / ||v||)^T b. Here v^T v is the square of the length of v, the norm of v squared, so I distribute one factor of the length to each copy of v. When you divide a vector by its length, you rescale it into a unit-length vector. So call v / ||v|| by a new name, v-tilde; the expression becomes v-tilde (v-tilde^T b), where v-tilde is the vector along v that has length one; maybe this point over here is v-tilde. Notice you could pick any vector along this subspace and it would always give the same projection matrix, because you rescale by the length; it doesn't matter which specific vector in the subspace you chose, rescaling brings you back to v-tilde. So the projection matrix is v-tilde v-tilde^T. Now, the inner product between a unit-length vector and any other vector gives you the length of the projection of that vector onto the unit-length direction: if v-tilde is a unit-length vector and b is any other vector, their inner product is the magnitude of b along the direction of v-tilde. So v-tilde^T b is the length of the projection of b along the direction of v, and it's just a scalar: you're rescaling v-tilde by that scalar. So this is the projection matrix of the vector v, and it projects whatever input is fed into it onto the subspace spanned by v.
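The rank-one projection matrix can be checked in a few lines; the vectors are made up for illustration.

```python
# The projection matrix P = v v^T / (v^T v), as derived above.
import numpy as np

v = np.array([1.0, 2.0, 2.0])
P = np.outer(v, v) / (v @ v)     # projects onto the line spanned by v

b = np.array([3.0, 0.0, 4.0])
b_proj = P @ b                   # the point on that line nearest to b

print(np.allclose((b - b_proj) @ v, 0.0))   # True: residual is perpendicular
print(np.allclose(P @ P, P))                # True: projecting twice changes nothing
```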
Is this clear? Any questions? Now, what happens if instead of one vector v we have a collection of vectors, a bunch of column vectors? For example, suppose we have a subspace spanned by a collection of three vectors, and that is the subspace onto which we want to project. We follow something very similar: in place of the vector v, we will have a matrix, some matrix X, whose columns are the vectors which make up the subspace. Before, it was a subspace spanned by just one vector; if we want to project b onto a subspace spanned by multiple vectors, then in place of v we use a matrix X whose columns are precisely the set of vectors which make up the subspace. Yes, question? [inaudible] They need to be linearly independent, yes, by definition: to make up the subspace, you need a set of linearly independent vectors. So take some set of linearly independent vectors from that subspace, and wherever there is v, plug in X. The result looks different from the vector formula, though, and that's because v^T v was a scalar, and you can divide by a scalar, but X^T X is going to be a matrix, and you cannot divide by a matrix; it's not even a meaningful operation. The right way to think about this is to rewrite the vector formula as v (v^T v)^{-1} v^T, which is the same thing, and it is this form that generalizes to the matrix case: the projection matrix for a set of columns is X (X^T X)^{-1} X^T (we'll verify this formula numerically in a moment). We're going to come across this concept again when we do linear regression, so remember it. Yes? [inaudible] I could have put it anywhere. You could have put it anywhere? Yeah. [inaudible] So the question is: why did we put the (X^T X)^{-1} in the middle rather than to the left or to the right? I don't know if there's a fully satisfying answer, but this is the only arrangement that makes sense: no other position would match the dimensions. X is m by n, (X^T X)^{-1} is n by n, and X^T is n by m, so the middle is, in a way, the only place where it fits in order to multiply things. I know that's not a satisfying answer, but it's the only operation that makes sense. So this whole idea of projecting things, of finding a point for which a bijection exists and moving it over, is going to be a recurring theme; it's something you want to understand really well. For a given matrix A that is not full rank, there exist subspaces in its input and output spaces, and the dimension of those subspaces is going to be equal to the rank of the matrix.
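Here is the promised matrix-form check, with a made-up X whose two columns are linearly independent.

```python
# The general projection matrix P = X (X^T X)^{-1} X^T.
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                   # two independent columns in R^3
P = X @ np.linalg.inv(X.T @ X) @ X.T         # projects onto their span

b = np.array([1.0, 2.0, 0.0])
b_proj = P @ b                               # nearest point in the column span
print(np.allclose(X.T @ (b - b_proj), 0.0))  # True: residual orthogonal to the span
print(np.allclose(P @ P, P))                 # True: P is idempotent
```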
And whenever you multiply a vector from the input space by A, what you're effectively doing is projecting that vector onto the row space, the input subspace for which a bijection exists, and then carrying that point forward into the output space. Which also means: if we have a point in the output space that lives outside the column space, and we want to find what point in the input space would give us this point as the output (which is effectively asking, what is A inverse?), it does not exist. From no point in the input space can we reach a point that lives outside the column space; that point is unreachable. When we cover linear regression, we're going to ask exactly this question: when we have a point that is unreachable, what are we going to do about it? How are we going to find an input? That's basically linear regression, which we'll get to on Friday. But this whole picture of subspaces, row space, null space, decomposition, and bijection is something you want to digest and absorb really well.

Let's see. We have about eight more minutes, so we might be able to squeeze this one in. We might run a little over time today, but I'll try to wrap up as much as possible, and we can continue the rest later. All right. So far we've seen a visual representation of what the matrix A does to the input and output. Next, we're going to limit ourselves to square matrices, which means the input and output spaces have the same dimension, and for the purpose of visualization we'll consider a three-dimensional space. In the earlier diagram we had two separate pictures for the input space and the output space; now, because A is a square matrix, we're going to overlay the input and output spaces onto the same picture. So here, the input space and output space are being overlaid. Now let's ask a new question. We saw what happens when you choose some point, run it through A, and get an output point. What happens if we take the unit sphere around the origin? By unit sphere, I mean just the points on the surface of the unit sphere; think of a soccer ball centered on the origin. Take every point on the surface, run it through A, and you get a corresponding output point for every input point on the ball. How would the resulting shape look? It's going to be an ellipsoid, almost like a three-dimensional ellipse. What exactly happened here? It's a three-dimensional input and output space. We started not with one input point but with a collection of points as inputs, precisely those points that live on the surface of a unit sphere. Say we took this point in the input space, ran it through A, and got a corresponding output point, say this one. Every point on the input surface maps to some point on the surface of the output shape.
We could have done this with any input shape, but a sphere is easy to analyze. Similarly, do it for another point, say this one, and it maps here; and pick a third point (what color do we have, green?) and it maps there. So instead of thinking of running one point through A, imagine running a full shape through A, which essentially means running every point in that shape through A and calculating the output points, which form some other shape. Think of A as taking a shape as input and outputting a different shape. If the input shape is the unit sphere around the origin, the output is going to be an ellipsoid.

Now consider this ellipsoid, assuming A is also symmetric. Look at one input point and its output: essentially two things happened. There is a change in magnitude (the magnitude came down a little) and there is a change in direction. So A effectively did two things: a change in direction and a change in magnitude. But there are going to be some points on the unit sphere for which the only change is a change in magnitude and no change in direction, and those are your eigenvectors. For a full-rank symmetric 3-by-3 matrix, there are going to be three eigenvectors, and because the matrix is symmetric, those eigenvectors are perpendicular to each other. Sorry, I used a wrong color there. So this is going to be one eigenvector, and there is going to be another eigenvector, which maps a point on the input to a point on the output where the only thing that changes is the magnitude, not the direction. Yes, question? [inaudible] I'm sorry? [inaudible] So, does it necessarily have to have three eigenvectors? It will have three eigenvectors; however, some of the eigenvalues may be zero. I'm going to come to that. So this is eigenvector number one, the one for which the scaling is the maximum. The eigenvalue is the ratio of the length of the output vector to the length of the input vector: if the output vector is stretched a lot along an eigenvector, then you have a large eigenvalue. If the map shrinks the point closer to the origin, the eigenvalue is less than 1; if it stretches the point away from the origin, the eigenvalue is greater than 1; and the point may also get reversed in direction, in which case the eigenvalue is negative. So that's the eigenvector, and the eigenvalue is the ratio of the output magnitude to the input magnitude. It may also happen that when you feed in the unit sphere, the soccer ball, the output is not an ellipsoid but a flat ellipse, which means you lost a dimension.
It got smushed into a two-dimensional ellipse instead of a solid three-dimensional ellipsoid shape. That means the eigenvalue along the third eigenvector was 0: along that third eigenvector, all lengths got mapped to 0. Makes sense, right? So, to the question asked earlier about the meaning of the rank of a matrix: for a symmetric matrix, the rank is precisely equal to the number of non-zero eigenvalues. This is in the square-matrix setting. If a matrix has full rank, then a unit sphere will always come out as an ellipsoid with the same number of dimensions; if it is rank deficient, the ellipsoid that comes out may have lost a few dimensions.
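A short numerical sketch ties the eigen-picture together, using a made-up symmetric matrix of rank 2.

```python
# Eigenvectors of a symmetric matrix: directions that only get rescaled.
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])          # symmetric, deliberately rank deficient

vals, vecs = np.linalg.eigh(A)           # eigh is for symmetric matrices
print(vals)                              # [0. 2. 4.]: one zero eigenvalue

for lam, v in zip(vals, vecs.T):         # each eigenvector is only rescaled:
    print(np.allclose(A @ v, lam * v))   # True, True, True, i.e. A v = lambda v

# Rank equals the number of non-zero eigenvalues in the symmetric case,
# so the unit sphere comes out flattened into a 2-D ellipse here.
print(np.linalg.matrix_rank(A))          # 2
```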
[CS229 Summer 2019, Lecture 4: Linear Regression]
All right, let's jump right in then. A quick recap: so far we have basically just reviewed the prerequisites for the course. The hope is that all of you are now more or less on the same page in terms of prerequisites. The material we covered in the prerequisites also gives you a flavor of the kind of mathematical background you are expected to have and the kind of problems you'll be solving in the homework, so hopefully that's informative about what comes in the rest of the course. We covered linear algebra, and we went over some matrix calculus, mostly because we are interested in, for example, taking derivatives of a scalar-valued function with respect to a vector-valued input. That's the most common thing we'll be doing in this course: either vector-valued or matrix-valued inputs, but most of the time a scalar-valued output, which is going to be your loss function; we'll cover that today. We also covered some parts of probability theory, and then on Friday we touched upon maximum likelihood estimation. That's just a small part of mathematical statistics, the part most relevant to this course. We saw the example of how you can use maximum likelihood estimation to estimate the parameters of a multivariate Gaussian, and we saw that the estimators we obtained for the mean and covariance happened to be very intuitive ones: the mean estimator turned out to be just the average of the given x's, and the covariance estimator happens to be the sample covariance of the given inputs. It's nice to see that the intuitive definitions happen to be well-grounded in some solid theory.

Today we're going to switch gears and start supervised learning. Supervised learning basically deals with problems where we are trying to learn a mapping from some input x to some output y, and x and y could be anything. Most of the time, we'll be dealing with problems that are either regression problems or classification problems; those are the most common kinds of machine learning problems. Take regression, for example. Let's say we're trying to predict the price of houses, and you're given the living area in square feet and the price. Say you're given a dataset of pairs (x, y), where each example denotes one house, the x of that row is the living area in some unit, and the y of that row is the price in some unit. Given this dataset, we could plot it as a scatter: the unit on the horizontal axis is square feet, and the unit on the vertical axis is thousands of dollars. So what do we have here? A plot where each dot represents one row.
The x-coordinate of each dot is the x value, and the y-coordinate is the corresponding y value; this is just a scatter plot of the housing-prices dataset. The goal is to learn some function, call it a hypothesis (the reason it's called a hypothesis is not very crucial); we want to learn a hypothesis h of x, some function of x that gives an output as close as possible to y. That's the goal of a regression problem.

There are other cases where what you're trying to learn is a classification problem. Here, x was a scalar and y a real value; imagine a different problem where your x's have two attributes instead of just one. Say there is a second attribute; you could call it, for example, the number of bedrooms. Let me write this differently: say you have a different dataset with columns x_1, x_2, and y, where each example is again one row, and your y's are 0s and 1s. Your x's take some values, and the 0 or 1 tells you which class the example belongs to. Our goal is to learn some kind of classifier, some hyperplane that divides your x's into two parts, and in particular a suitable hyperplane such that most of the positive examples are on one side and most of the negative examples are on the other side. These are two different kinds of supervised learning problems. We call them supervised because with each example there is a corresponding y given to us, which is the supervision signal: it tells you what the right answer is for each given example. For probably the first third or first half of the course, we're going to focus on supervised learning problems, and the two most common problems you'll look at are regression and classification.

To set up the basic terminology, which we'll stick with throughout the course: in supervised settings, we're going to call the inputs x and the outputs y. n is going to be the number of examples in our training set. You're given a training set from which we want to learn some hypothesis, and the number of examples in the training set is going to be n in all your homework and through the rest of the course. We call a given pair (x, y) an example in a supervised learning training set. d is going to be the number of dimensions of our input: in the first case d was 1, in the second d was 2, but d could be an arbitrarily large number of input dimensions. We use a superscript i in parentheses to indicate the i-th example: x^(i) is the input of the i-th example (its x_1, x_2, and so on), and y^(i) is the corresponding label. We generally call the y's labels or ground truth, and x will be called your input. So y is the example's output; you can call it the label or the ground truth.
A pair (x^(i), y^(i)) together forms the i-th supervised learning example. Any questions about this terminology? Okay. Not to confuse you: whenever there is a parenthesis in the superscript, it does not mean we are taking the i-th power of x or of y; it just means the i-th example. In a regression problem, y^(i) will be a real-valued number. In classification, y^(i) will be in {0, 1} if it is binary classification; if it's multi-class classification, then y^(i) can take a larger number of discrete values. So this sets up the terminology. Any questions about it? If not, let's jump into supervised learning.

The big picture of supervised learning, as we saw, is this: you're given a training set, the set of pairs (x^(i), y^(i)) for i from 1 to n. We run this training set through some learning algorithm. The specific algorithm we choose depends on what y^(i) is: if it's real-valued, it has to be a regression learning algorithm; if it's discrete-valued, a classification algorithm. The output of the learning algorithm is our hypothesis, the learned hypothesis: h of x is the output of the learning algorithm. This output, the learned model, can now take in new x's, call one x_test, and output the corresponding y-hat. That's the big picture of supervised learning, and all the algorithms we study will follow this pattern: start with the training set, run it through the learning algorithm, obtain a model, and the model can then take examples from what we call the test set (or examples fed to the model when you actually deploy it in production). The goal is that even though we are learning from a fixed set of examples, we hope to generalize well to new examples we have not seen before: x_test will in general be an example your model has never seen, and we hope that the model we've learned will output y's that are correct in some sense.

With that, let's start with our very first learning algorithm: linear regression. In linear regression, x is in R^d and y is in R, and that's the training set we are given, n such examples. Now we want to learn a hypothesis which belongs to a family. What do we mean by that? So far, we did not impose any restriction on what h can be: it could be any function whatsoever that takes x as input and produces some y. However, that is a very broad class of hypotheses, and we want to limit the family of hypotheses over which we learn in some way. The simplest such family is the linear hypotheses.
We write it this way: h_Theta(x) = Theta_0 + Theta_1 x_1 + Theta_2 x_2 + ... + Theta_d x_d. What's happening here? The hypothesis we want to learn has a parameter vector Theta with d + 1 components: Theta_1 through Theta_d corresponding to the x's, plus an extra Theta_0. So Theta is in R^(d+1); that's where the extra plus 1 comes from. The goal of linear regression is to learn a suitable set of parameters Theta that makes h_Theta(x) as close as possible to y. Equivalently, h_Theta(x) = Theta_0 plus the sum over i from 1 to d of Theta_i x_i. We also adopt the convention of adding a new column to our x's, call it x_0, equal to 1 for every example. With that, h_Theta(x) = Theta_0 x_0 + Theta_1 x_1 + ... + Theta_d x_d, which is the sum over i from 0 to d of Theta_i x_i, which is just Theta transpose x. This additional component that we set to 1 in all examples is called the intercept term, and it exists mostly for notational convenience; there is absolutely no difference between the two versions except notation. One has an annoying extra additive term and the other doesn't; that's all. Now, given this family of hypotheses, where a specific member of the family is picked out by the specific Theta vector used, we define what's called a cost function, also sometimes called a loss function. The cost function should capture the amount of displeasure a specific hypothesis causes us. A very commonly used cost function for regression problems is the squared error, defined like this. All cost functions in this course will be called J, and we call it a cost function because we want it to be small: J(Theta) = one half times the sum over i from 1 to n of (h_Theta(x^(i)) minus y^(i)) squared. What's happening here? We have n training examples. For each one, we take the hypothesis determined by the Theta that is input to the cost function, compute its output on x^(i), take the difference between the correct answer y^(i) and that output, and square it. This is the squared error, a very commonly used loss function. For different values of Theta, the cost function evaluates to different values on the given training set. Yes, question: why do we have a half here? Good question; I'll come to that shortly.
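A minimal sketch of the hypothesis and the squared-error cost in code, assuming the numpy arrays X and y from before; the intercept-handling helper is an assumption of this sketch, not something fixed by the lecture.

```python
import numpy as np

def add_intercept(X):
    # Prepend the x_0 = 1 column so that h_Theta(x) = Theta^T x.
    return np.hstack([np.ones((X.shape[0], 1)), X])

def predict(theta, X1):
    # h_Theta(x) = Theta^T x, applied to every row of X1 at once.
    return X1 @ theta

def cost(theta, X1, y):
    # J(Theta) = 1/2 * sum_i (h_Theta(x^(i)) - y^(i))^2
    residuals = predict(theta, X1) - y
    return 0.5 * np.sum(residuals ** 2)

X1 = add_intercept(X)                      # shape (n, d + 1)
print(cost(np.zeros(X1.shape[1]), X1, y))  # cost at Theta = 0
```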
The thing to observe here is that, from the cost function's point of view, Theta is the only variable. The training set we were given is embedded in this function; it's fixed. If you obtained a different training set, with different features and labels, the cost function would be different. So what makes one cost function different from another is, first, the functional form, and second, the training set itself: the training data is embedded in the cost function. That's something to keep in mind. Okay. Now the goal is to find Theta hat, which minimizes the cost function: Theta hat is the argmin over Theta of one half times the sum over i from 1 to n of (h_Theta(x^(i)) minus y^(i)) squared. We want the Theta that drives the cost to the smallest value possible. Any questions so far? Yes: should we divide by the number of examples to normalize this? Good question. If the goal were the lowest possible cost value itself, then yes, it matters whether you normalize by n. But the goal here is the argmin, the value of Theta that minimizes the cost, and that value is the same whether the cost is this expression or this expression times 1 over n. You can pull the 1 over n outside, and the minimizing Theta is unchanged. So if the interest is only in the minimizing Theta, it doesn't matter whether you normalize by n. Good question. Okay. The question now is how we actually perform this minimization. This is just a mathematical expression, not an algorithm; an algorithm tells us how to carry out the minimization process. That brings us to our first algorithm, called gradient descent. To set it up, let me start with some pictures. The two axes I've drawn are Theta_1 and Theta_d; it's easy to visualize in two dimensions, but think of it as representing a d-dimensional parameter space. What I've drawn is the contour plot of the cost function. Note that this plot is fundamentally different from the two plots we saw over there: there the axes were data, and here the axes are the parameters. A very different plot: the axes are the parameters, and the curves form the contour plot of the cost function. What's a contour plot? Yes: shouldn't the y-axis be J(Theta), with Theta_1 and Theta_2 on the x-axes? Good question. The answer is that this is a contour plot.
In a contour plot, we trace out the set of all points in Theta space that produce a specific value of J(Theta). So one curve corresponds to J(Theta) = 1, the next to J(Theta) = 2, the next to J(Theta) = 3. It's a very different way of plotting things: the value of J(Theta) is not directly visible; it's implicit in the shape of the contours we draw. Now, what can we say about the shape of J(Theta) here? It's like a bowl shape, minimized at the center point, and as you move farther away from that point in parameter space, the cost function evaluates to larger values. Is this clear? Yes, question: can you not also think of this as dome-shaped? Good question. This would also be the contour plot of a dome-shaped function if the values ran the other way: if the outer contour were 3, then 2, then 1, these contours would describe a dome. Here, as we get closer to the center, the value of J(Theta) becomes smaller and smaller, so the surface comes down as we move toward the center. Good question. Now, for the cost function we described, Theta is in d + 1 dimensions because of the extra intercept term, so perhaps there is also an axis Theta_{d+1}. The goal is to find the Theta that minimizes this cost function, whatever shape it takes. Yes, question: how do we know there's only one value of Theta that minimizes the cost function? Very good question. In general, there need not be. The correct way to write it is that Theta hat belongs to the argmin, where the argmin gives you a set of minimizers; there can be multiple values of Theta that minimize the function. For example, imagine a cost function whose minimum is attained along an entire line: any Theta along that line is a minimizer. It's not bowl-shaped; the minimum value is attained by multiple Thetas along the line, and in such cases the argmin is a set of Theta values. But for the most part we'll assume there is one unique minimizer. Good question. So our goal is to find the value of Theta where J(Theta) is smallest, and for this we'll use an algorithm called gradient descent. How does gradient descent work? We start with an initial value of Theta, call it Theta^(0); a common initialization is simply Theta^(0) = 0, the origin. Again, it's important not to confuse the superscripts: on x, a parenthesized superscript means the i-th example.
On Theta, when running gradient descent, the superscript indicates the time step: the value of Theta we have after a particular iteration. So take Theta^(0) and initialize it to some value, say the origin. First I'll write the update in partial-derivative notation, and then in gradient form. For each coordinate j: Theta_j^(1) = Theta_j^(0) minus Alpha times the partial derivative of J with respect to Theta_j, evaluated at Theta^(0). What does that mean? First we compute the partial derivative of J with respect to coordinate j, where j can take a value between 1 and d + 1, evaluated at the current Theta. Then we multiply it by some constant Alpha, which we call the learning rate. And we repeat this for all j in 1 to d + 1. With this rule we compute each Theta_j for the next iteration, which gives us Theta^(1). What's happening here? An easy way to understand it is the vector notation: Theta^(1) = Theta^(0) minus Alpha times the gradient with respect to Theta of J, evaluated at Theta^(0). This is the same as the coordinate-wise notation. The gradient of a scalar-valued function with respect to Theta gives the direction of steepest ascent: if we are at some point, the gradient of J(Theta) tells us the direction to move in to increase J(Theta). Instead, we flip the sign and make it negative, so we look in the opposite direction, and we update Theta by moving a small amount in the direction of steepest descent. The step we take, the difference between successive Thetas, is Alpha times the negative gradient. So: we started at Theta^(0), computed the direction in which J(Theta) increases the most, flipped the sign, and took a small step in that direction, with the step size decided by Alpha. Any questions here? Yes, question: can Theta^(0) be random? Theta^(0) is just an initialization; a common choice is to start at 0, but you can actually initialize it to any value. A slightly more advanced point: if your cost function is convex, which is a bowl-shaped function, then the initialization does not matter; when the algorithm ends, you always reach the same value. Yes, question: how do we know we're not trapped in a local minimum? I'm going to postpone that question for now and assume we don't have local minima; we'll deal with local minima later. Yes, question: in practice, what value of Alpha should we take? Great question, and there is no universal answer. It so happens that well-behaved cost functions, like gradient descent on linear regression, are tolerant to a whole range of Alphas, but in practice you experiment with a few different values of Alpha and figure out which one works best for you. All right?
So this is one step in partial-derivative notation, and this is one step in vector notation. I personally find the vector notation easier to understand: the coordinate form looks a little cryptic, whereas the vector form has a very clear geometric meaning. You're at one particular point in parameter space; you compute the gradient of the cost function at the current position, which gives you the direction of ascent; you flip the sign and take a small step in that direction; and you reach a new Theta, Theta^(1), such that J(Theta^(1)) is hopefully less than J(Theta^(0)). Why do I say hopefully? Because if you take too large a step, you can overshoot and reach a point that is actually at a higher cost. Tuning the learning rate is important because you don't want to overstep; as long as your learning rate is well tuned for your cost function, the "hopefully" will hold most of the time. And then we iterate: repeat until convergence. What does "until convergence" mean? If we write the update as Theta^(t+1) = Theta^(t) minus Alpha times the gradient of J at Theta^(t) and repeat this process, we get a sequence of parameters: Theta^(0), Theta^(1), Theta^(2), ..., Theta^(t), Theta^(t+1). For those of you with a more advanced math background who know what convergence means, we're talking about the convergence of this sequence: when the sequence converges, you can stop iterating. In practice, though, convergence has a more operational definition, and there are many ways to check for it. One way is to look at the norm of Theta^(t) minus Theta^(t-1): after taking a step, has Theta moved only an ignorably small amount? Another is to check whether your loss has stopped going down, by looking at J(Theta^(t)) minus J(Theta^(t-1)); I'm writing it with norm bars, but it's just a scalar. Yet another is to check whether the norm of your gradient has become too small. So you can look at the change in parameters, the change in cost, or the norm of the gradient, and check whether it has become smaller than some Epsilon. There's no single right answer for how to check convergence in practice; these are just a few thoughts on how you can implement it. In your code there would be some Epsilon you define, maybe 10^-5 or some similarly small number, and you keep iterating the inner loop, with t as your iteration number, until one of these values becomes smaller than Epsilon; that's the breaking condition for the loop. Yes, question? Oh no, that's just notation; think of it as the absolute value.
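Putting the update rule and a stopping condition together: a minimal generic sketch, assuming some function `grad` that computes the gradient of J (the alpha and epsilon defaults are illustrative, not prescribed).

```python
import numpy as np

def gradient_descent(grad, theta0, alpha=0.01, eps=1e-5, max_iters=100_000):
    """Generic gradient descent: theta <- theta - alpha * grad(theta).

    Stops when the parameter change is smaller than eps (one of the
    convergence checks discussed above), or after max_iters steps.
    """
    theta = theta0.copy()
    for _ in range(max_iters):
        theta_next = theta - alpha * grad(theta)
        if np.linalg.norm(theta_next - theta) < eps:  # breaking condition
            return theta_next
        theta = theta_next
    return theta
```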
Yes, another question. So the question, if I understood right, is: can we intelligently set Alpha at each iteration to get closer faster? You can do a whole lot of such variants, and many variants of this algorithm exist in practice. But for the most part we exploit the fact that as you get closer to the minimum, the gradient itself keeps getting smaller. So even with a constant Alpha, the overall update keeps shrinking as you approach the minimum, because the gradient shrinks. The intuition is that once you reach the exact minimum of J(Theta), the gradient is 0, and the gradient keeps getting smaller and smaller as you approach the minimum. So most of the time, a well-tuned constant value of Alpha works well enough. Good question. Any other questions? All right. So here's where we are: we've seen this algorithm called gradient descent, where we're given some cost function, which we visualize with contour plots, defined over the parameter space. The shape of the cost function has the dataset embedded in it: if you chose a different training set, your cost function might be tilted differently or have its minimum at a different location, but the training set is baked into the cost function itself. We start with some initialization of Theta and keep taking small steps in the direction of the negative gradient, until we hit some convergence condition; there are many choices for defining the convergence condition. Now we take this algorithm, gradient descent, and apply it to the linear regression cost function. What does that mean? It means the generic update rule takes a specific form. The methodology we're following here is common to pretty much all the algorithms we're going to study: given the training set, we define some cost function over a parameter space and use gradient descent to find a parameter that minimizes it. This is a template we'll repeat over and over; different models will give us different cost functions, and the corresponding cost function gets plugged into this algorithm and minimized. That's going to be a repeating pattern throughout this class. So, gradient descent on linear regression looks like this. First set Theta^(0) to some initialization. Then repeat until convergence: Theta^(t+1) = Theta^(t) minus Alpha, the step size or learning rate, times the gradient with respect to Theta of J evaluated at Theta^(t).
This is the standard gradient descent; to apply it to linear regression, we replace the generic cost function with the linear regression cost function. So Theta^(t+1) = Theta^(t) minus Alpha times the gradient with respect to Theta of one half times the sum over i from 1 to n of (h_Theta(x^(i)) minus y^(i)) squared. Let's simplify further, using h_Theta(x) = Theta transpose x: Theta^(t+1) = Theta^(t) minus Alpha times the gradient of one half times the sum over i of (Theta transpose x^(i) minus y^(i)) squared. Just to be clear: the thing we're minimizing with respect to is Theta, and this is the only place Theta appears. The x's and y's are all given; they're constants for the purposes of optimizing this cost function, which is why we think of the training set as embedded in the cost function. They're just different constants as far as the cost function is concerned. Expanding further with our matrix calculus tricks, differentiating the square gives 2 times (Theta transpose x^(i) minus y^(i)) times x^(i). The 2 and the one half cancel. So, to answer the earlier question of why we have a half in the cost function: the only reason is to make the gradient update rule look simple. Just as multiplying by 1 over n did not matter, the half does not matter either, except to make our gradient expression look simple. The update becomes Theta^(t+1) = Theta^(t) minus Alpha times the sum over i from 1 to n of (Theta transpose x^(i) minus y^(i)) times x^(i). Notice that Theta transpose x^(i) is a scalar and y^(i) is a scalar, so their difference is a scalar, and we multiply that scalar by the vector x^(i). So we're summing d-dimensional vectors (d + 1 with the intercept), each scaled by some scalar, and the whole expression is a d-dimensional vector. Theta^(t) was also a d-dimensional vector, so Theta^(t+1) is a d-dimensional vector minus a scalar multiple of a d-dimensional vector, which is again d-dimensional. When you write out matrix calculus, it's always a good idea to make sure the dimensions match. Any questions about this? This is the gradient update algorithm: you repeat this over and over, with your training set embedded in the sum, until one of the convergence conditions is hit, and once the algorithm converges, you've solved linear regression. Yes, there was a question: what exactly is Theta here? Theta is the set of all parameters, which is vector-valued, and the superscript says which iteration of gradient descent we're at. So Theta is a d-dimensional vector, or d + 1 to account for the intercept; x is d + 1 dimensional as well; and h_Theta(x) and y are scalars. Any questions? Yes: what if you take the log of the sum of squared errors? Not really; taking a log wouldn't help. The one half exists only to make the update rule look simple; there's no other reason. Whatever constant you have, a half or any other, you can always counter it by choosing a different Alpha.
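A minimal sketch of this update rule, reusing the intercept-augmented X1 and y from before. The vectorized form X1.T @ residuals computes the sum over examples; the tiny alpha is only because the hypothetical features are unscaled.

```python
import numpy as np

def batch_gd_linear_regression(X1, y, alpha=1e-7, iters=1000):
    # Theta <- Theta - alpha * sum_i (Theta^T x^(i) - y^(i)) * x^(i)
    theta = np.zeros(X1.shape[1])
    for _ in range(iters):
        residuals = X1 @ theta - y          # shape (n,)
        theta = theta - alpha * (X1.T @ residuals)
    return theta
```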
All right. So that's gradient descent. However, in practice, what we often use is a variant. For a lot of convex problems like this one, for example linear regression, gradient descent works perfectly fine. But there's a variant of gradient descent called stochastic gradient descent, or SGD. To motivate it, look at the gradient descent update on linear regression above: we repeat that update over and over, and each repetition makes one small step of progress. The goal is to get from the starting point to the minimum, and each gradient descent step is a small amount of progress toward it. But to make one step's worth of progress, we need to compute that sum. For those of you who are algorithmically minded, you might notice that for each small step of progress, we must iterate over the entire training set, and the number of examples could be a million or a billion. Scanning the whole training set for every small step can be extremely expensive if your training set is too big, or if your model is too big to compute gradients on that many examples. This motivates SGD, the stochastic gradient descent algorithm, a variant of gradient descent. What we do is: Theta^(t+1) = Theta^(t) minus Alpha times the gradient of J tilde of Theta, where, in the case of linear regression, J tilde of Theta = one half times (Theta transpose x^(k) minus y^(k)) squared for a single example k. So instead of computing the gradient of the loss function on the full training set, we sample just one example, uniformly at random, and pretend that is our entire training set. We compute the gradient of that one-example cost function and take a step according to it. Then we repeat the process, sampling a new training example each time to construct this temporary or proxy loss function specific to that example. There is no summation here; we use some example k, sampled uniformly at random from the training set, and for each iteration we sample a different example. This might look surprising; you might wonder whether it would even work. Here's the intuition. Take the contour plot of J over Theta_1 through Theta_d, and start at some random position. In the case of gradient descent, the trajectory followed by the sequence of Thetas would look something like this.
You're making progress in the direction of the final minimum, taking smaller and smaller steps as you get closer; that's the trajectory gradient descent would take. With stochastic gradient descent, the updates might look like this instead. We start with Theta^(0), but instead of the gradient of the true cost function, we compute the gradient of the proxy cost function containing one random example, and we step in its negative gradient to get Theta^(1); Theta^(2) is computed with respect to the next random example, and so on. What we notice is that while gradient descent's direction was consistent and always headed toward the minimum, stochastic gradient descent's directions are a little crooked, so to speak; you may even encounter steps that go in the opposite direction of where you eventually want to go. But it so happens that by following this algorithm you eventually reach a region around the true minimum, and all further SGD updates keep you confined to a small ball around the true global minimum, whose radius is a function of the step size Alpha. There's a lot of theory that characterizes this precisely; if you're interested in going deeper, you can study stochastic approximation, which formally describes the behavior of SGD. From this course's point of view, all we need to know is that SGD works. SGD makes very noisy updates, because we're not using the full training set at every step, but on average you reach the global minimum, or a region sufficiently close to it, and that region is characterized by the step size Alpha you choose. Yes, question: can you take a running sum of the previous gradients and do some kind of averaging, to get less noisy updates? Yes; there are lots of variants of gradient descent, and the technique you describe is commonly called momentum. When you look up the different optimization algorithms used in machine learning, some of them have a parameter called momentum, which does exactly what you described: it keeps a running average of previous descent directions to make the updates less noisy. Question: do you sample with replacement? For true SGD, each time you pick an example you pick it uniformly at random; the next time you may pick the same example again, and you don't care. But in a lot of actual applications, and also in the notes, what you see instead is that you scan your training set from top to bottom.
Preferably, you shuffle your training set once and then loop over it from 1 through n, using a different example each time; at the end of one sweep, you reshuffle and repeat. Yes? Question: is there a minimum training set size? No, there is no restriction on the minimum size of the training set; the algorithm works for any training set. But if your training set is not too large, then regular gradient descent might work well for you. Another thing to note: with stochastic gradient descent you may need many more steps to converge than with gradient descent, but the computational cost of each step is so small that the extra steps are well worth it; each step is far cheaper than a full gradient descent step. Yes, question: can we do something in between gradient descent and stochastic gradient descent, where instead of one example we take a small batch of examples, and J tilde of Theta is defined as a summation over that batch? Yes, you can absolutely do that, and it's called mini-batch gradient descent. In fact, most of deep learning and neural network training does exactly that, taking a batch of examples where the batch size is some small number like 64. Is it faster or more advantageous to use that? It can be. It so happens that the situations where we need SGD or mini-batch SGD tend to be deep learning and neural networks, where the cost function is not convex, and once you have a non-convex cost function it's very hard to analyze and make precise statements about what helps and what doesn't. In those situations the answer is almost always: try it and see if it works better. Okay. So that's SGD and gradient descent. Any questions before we move on?
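Here is a minimal sketch of epoch-style SGD with optional mini-batches, reusing X1 and y; batch_size=1 gives the one-example variant described above, and something like batch_size=64 gives mini-batch gradient descent. The alpha, epoch count, and seed are illustrative.

```python
import numpy as np

def sgd_linear_regression(X1, y, alpha=1e-7, epochs=10, batch_size=1, seed=0):
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    theta = np.zeros(X1.shape[1])
    for _ in range(epochs):
        order = rng.permutation(n)            # reshuffle once per sweep
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X1[idx], y[idx]          # the proxy training set for J~
            residuals = Xb @ theta - yb
            theta = theta - alpha * (Xb.T @ residuals)
    return theta
```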
All right. Now, a note on the two iterative algorithms we've seen. SGD and gradient descent are also called numerical algorithms, because to compute the values you actually need a computer: you code the algorithm, run it, and it returns some numerical values for your Theta. You don't have a mathematical expression for the final answer; you describe the algorithm, implement it as code, and execute it. This will be the case for most of our algorithms: no precise closed-form expression for the final answer, just an iterative, numerical solution that you code up to get an answer for your particular problem. Yes, question: is there a closed form here? Yes, that's what we're coming to. One of the few exceptions is linear regression, where there is a closed-form solution for minimizing the cost function. The reason we first started with gradient descent on linear regression is that its gradients are very easy to take, which makes it easy to show how gradient descent works. But in practice, as an exception, linear regression actually has a closed-form solution which you can use, and that's what we'll see now. First, let's rewrite J(Theta). We defined J(Theta) = one half times the sum over i from 1 to n of (Theta transpose x^(i) minus y^(i)) squared; this was our cost function. Let's rewrite it in vectorized notation. Define what's called the design matrix X, an n by (d + 1) matrix in which each row is one input vector x^(i) transpose, including the intercept entry. Also define the vector y with entries y^(1) through y^(n), and the parameter vector Theta with entries Theta_0 through Theta_d. So Theta is (d + 1)-dimensional, and y is n-dimensional, one entry per example. Now consider the expression X Theta minus y: the matrix X multiplied by the vector Theta, minus the vector y. X is in R^(n by (d + 1)), Theta is in R^(d + 1), and y is in R^n, so multiplying an n by (d + 1) matrix by a (d + 1)-vector gives an n-vector, and subtracting y gives an n-vector. What does it look like? X Theta is the vector whose i-th entry is x^(i) transpose Theta, so X Theta minus y is the vector whose i-th entry is x^(i) transpose Theta minus y^(i). Is this clear? Any questions on this? Now define J(Theta) = one half times (X Theta minus y) transpose (X Theta minus y). Why is this the same thing? We take the vector X Theta minus y, transpose it, and take its dot product with itself: that squares each entry and sums them up, and with the factor of one half this is exactly equal to one half times the sum over i from 1 to n of (Theta transpose x^(i) minus y^(i)) squared, the same as the original cost function. Exactly the same quantity: here we use vector notation, there we had a loop over every example. Now let's solve for Theta by setting the gradient of this expression with respect to Theta equal to 0. So take the gradient with respect to Theta of one half (X Theta minus y) transpose (X Theta minus y). The reason there's no transpose inserted between the factors, the way there was for a single example, is that X is the full design matrix: every example is already a row vector, pre-transposed for you. Expanding the product, we get the gradient with respect to Theta of one half times [Theta transpose X transpose X Theta, minus (X Theta) transpose y, minus y transpose X Theta, plus y transpose y].
This equals the gradient with respect to Theta of one half times [Theta transpose X transpose X Theta minus 2 Theta transpose X transpose y plus y transpose y]: the two middle terms are scalars that evaluate to the same thing, so they combine into a single term with a factor of 2. Any questions on how we went from the previous line to this one? There was a question on Piazza about a similar step we did for the Gaussian MLE of the mean. The point is: X is a matrix, Theta is a vector, X Theta is a vector, and a vector dotted with another vector is a scalar; and the transpose of a scalar is the same scalar. So transposing (X Theta) transpose y gives y transpose X Theta, and the two terms add up to 2 times the first of them. Now observe a few things: the first term is the quadratic form we've seen in the past, and the last term, y transpose y, does not involve Theta at all. Taking the gradient, we get one half times [2 X transpose X Theta minus 2 X transpose y], and the y transpose y term contributes 0 and simply cancels out. Setting this gradient equal to 0 gives X transpose X Theta = X transpose y. This equation is called the normal equation, and from it you can solve Theta hat = (X transpose X) inverse X transpose y, as long as X transpose X is invertible. For now, assume it is. If two columns of your matrix are duplicates of each other, X transpose X may not be invertible, but we'll address those kinds of oddities later. This gives you the estimator Theta hat. So linear regression is one of the few cases where you can write down an exact solution rather than being limited to a numerical solution that needs a computer: J evaluated at this Theta hat is the smallest value the cost can take. Any questions? Yes: then what is the use of the other approach? We used linear regression to describe gradient descent because it's easy to derive the update rules, for the purposes of teaching. You're going to use gradient-descent-based approaches for pretty much all the other machine learning algorithms we come across in this course. Linear regression is an exception: there's no problem solving it with gradient descent, but you also have a closed-form solution, which the other algorithms don't have. Any other questions?
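A minimal sketch of the closed-form solution, reusing X1 and y from before. Using np.linalg.solve on the normal equation avoids forming the inverse explicitly, which is a numerical-stability choice of this sketch, not something the derivation requires.

```python
import numpy as np

def normal_equation(X1, y):
    # Solve X^T X Theta = X^T y, i.e. Theta_hat = (X^T X)^{-1} X^T y,
    # assuming X^T X is invertible.
    return np.linalg.solve(X1.T @ X1, X1.T @ y)

theta_hat = normal_equation(X1, y)
print(theta_hat, cost(theta_hat, X1, y))  # minimizer and its (minimal) cost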
Okay. Now let's look at a few more interpretations of this. There is a probabilistic interpretation, in which we assume that the y^(i)'s in our training set are generated by the following process: y^(i) = Theta transpose x^(i) plus Epsilon^(i), where Epsilon^(i) is Gaussian noise with mean 0 and variance Sigma squared. What does this mean? It means our dataset is generated like this. We start with some x; for example, if we're talking about the price of a house, the x's describe the features of that house, like the area or the number of bedrooms. There exists some unknown Theta vector that we're interested in estimating, and the corresponding y^(i) is generated by taking the dot product of Theta with the features of the input and adding random Gaussian noise. The noise added to each example is different, but the noises are all distributed according to the same Gaussian distribution. This is the assumption we make; it may or may not hold in reality, but we start from it as a description of how our data is generated. Next, we rearrange the terms: Epsilon^(i) = y^(i) minus Theta transpose x^(i). Since Epsilon^(i) is a Gaussian random variable, the quantity y^(i) minus Theta transpose x^(i) is also distributed as a Gaussian with mean 0 and variance Sigma squared, which means the probability of y given x has the corresponding Gaussian density. Is this clear? Any questions about that step? Yes: what are the parameters here? In this case, the parameter of this probability density is Theta; unlike the usual Gaussian where the parameter is Mu, this is a re-parameterized version where the parameters are Theta. Question: why is the mean Theta transpose x? Good question, let me clarify a little further. Gaussian variables have what's called the location-scale property: if you add a constant to a Gaussian random variable, it just shifts the Gaussian. Here, Theta transpose x^(i) is not random; it's just some constant. So if y^(i) minus Theta transpose x^(i) is distributed as Normal(0, Sigma squared), then y^(i) itself is distributed as a normal with mean Theta transpose x^(i) and variance Sigma squared. Question: so each y^(i) has a different mean? Exactly. We're assuming each y^(i) is a normal random variable whose mean differs across examples: for y^(2) the mean is Theta transpose x^(2), for y^(1) it is Theta transpose x^(1). And this statement is exactly the same as the previous one.
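Written out, the chain of implications just described is:

$$\epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2) \;\Longrightarrow\; y^{(i)} - \theta^{\top} x^{(i)} \sim \mathcal{N}(0, \sigma^2) \;\Longrightarrow\; y^{(i)} \mid x^{(i)}; \theta \;\sim\; \mathcal{N}(\theta^{\top} x^{(i)}, \sigma^2),$$

so the conditional density is

$$p(y^{(i)} \mid x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\left( -\frac{(y^{(i)} - \theta^{\top} x^{(i)})^{2}}{2\sigma^{2}} \right).$$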
To go through those steps once more: if Epsilon^(i) = y^(i) minus Theta transpose x^(i) is distributed as a mean-0 Gaussian, then y^(i) is distributed as a normal with mean Theta transpose x^(i) and variance Sigma squared, and therefore the probability density of y^(i) is the Gaussian density with that mean and variance, with y^(i) as the variable. Yes; I missed the one half in the exponent, thank you. Is this clear? Now, once we have y^(i) in this form, we're going to do maximum likelihood. Question: what's given and what's unknown? Exactly: here you are given the data, the x's and y's, which are distributed according to some unknown parameter. The data is given and the parameters are unknown, and we do maximum likelihood to estimate the parameters. The data is the x's and y's; for a generic Gaussian the parameters would be Mu and Sigma squared, but in this case the parameters are Theta and Sigma squared, and Theta is not itself the mean: Theta transpose x^(i) is the mean for each y^(i). This is a conditional distribution. So we can write the likelihood function as the product over i from 1 to n of p(y^(i) given x^(i); Theta). We're making an IID assumption, as is the case all the time: we assume the Epsilons are IID, which lets us break the likelihood down into a product of n different terms. (How are we doing with respect to time? Ten more minutes.) Instead of the likelihood, we take the log-likelihood, which turns the product into a sum: L(Theta) = sum over i from 1 to n of log of [1 over the square root of (2 Pi Sigma squared), times exp(minus (y^(i) minus Theta transpose x^(i)) squared over (2 Sigma squared))]. Now let's take a step back and analyze this. The variable this function is defined over is Theta. If you zoom out and look at the expression, you see that the log and the exp cancel each other out, and 1 over the square root of 2 Pi Sigma squared is just some constant: we're treating Sigma squared as some unknown constant. So looking at this expression, your attention should naturally focus on the squared term; everything else falls away. We can write it as the sum over i from 1 to n of [log of some constant, call it k, minus (1 over (2 Sigma squared)) times (y^(i) minus Theta transpose x^(i)) squared]. Pulling the constants out, this equals n times k, minus (1 over Sigma squared) times [one half times the sum over i from 1 to n of (y^(i) minus Theta transpose x^(i)) squared].
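Collecting those steps, the derivation reads:

$$\ell(\theta) = \sum_{i=1}^{n} \log\left[ \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\left( -\frac{(y^{(i)} - \theta^{\top} x^{(i)})^{2}}{2\sigma^{2}} \right) \right] = n \log \frac{1}{\sqrt{2\pi\sigma^{2}}} \;-\; \frac{1}{\sigma^{2}} \cdot \underbrace{\frac{1}{2} \sum_{i=1}^{n} \left( y^{(i)} - \theta^{\top} x^{(i)} \right)^{2}}_{J(\theta)}.$$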
So the log-likelihood function, obtained by making a probabilistic assumption about the noise, is a constant minus a scaled version of the original cost function. Which means that by performing maximum likelihood, we are minimizing the squared error. Does that make sense? Whenever you see a Gaussian come into the picture, this is probably the most important thing to remember: the density is the exponential of some squared quantity, so when you take the log-likelihood, the log and the exponential cancel and you're left with just the squared term, carrying a negative sign. And because we're doing maximum likelihood, we're maximizing the negative of something, which means we're minimizing that something. Yes, question: so linear regression is maximum likelihood? Exactly. What we've shown here is that linear regression can be viewed as performing maximum likelihood, where the noise in each example is assumed to be Gaussian. If you assume the x's and y's have a linear relationship with additive Gaussian noise, then maximum likelihood theory tells you to do exactly the same thing as minimizing the squared error, which is ordinary least squares. The two approaches, defining a cost function and minimizing it versus giving the problem a probabilistic setting and maximizing the likelihood, are exactly the same: the argmax over Theta of L(Theta) equals the argmin over Theta of J(Theta). There are some extra scaling constants, but we're not interested in the values, only in the argmax and the argmin, and they coincide. Next question: if we assumed Epsilon were Poisson instead of Gaussian, what would happen? For that I'd say: wait until we cover GLMs. That's a very good question, and what we'll see is that the maximum likelihood interpretation is actually more general, and you can do many more things with it. Yes, question: what is Epsilon, intuitively? The Epsilon here is the difference between the true y-value and the observed y-value: Theta transpose x^(i) is like the true price of the house, but what you observe has some noise in it; maybe the mood of the buyer was bad that day. And you're making the assumption that the noise is Gaussian. Now, in the last few minutes we have, I want to provide yet another interpretation of linear regression. We saw that linear regression can be solved through gradient descent or through the normal equations, and that it's the same as maximum likelihood if you assume the noise is Gaussian. Here is yet another view. What we want to solve is X Theta = y, where X is a matrix, Theta is a vector, and y is another vector. Remember the functional view of matrices we spoke about earlier? Imagine an input space with axes Theta_1 through Theta_d, the matrix X, and an output space with axes y_1 through y_n; let me intentionally keep the drawing two-dimensional.
What this means is that we have a parameter space of Thetas, which is the input space of the matrix X. X is the design matrix, and note that the data actually lives in X, not in Theta; y is the corresponding vector of labels. We want to find the Theta that makes our output as close to y as possible. We also saw, when reviewing linear algebra, that if X is not full rank, there exists a low-dimensional subspace with a one-to-one mapping between the input and output spaces. For now, assume X is full rank: if X is n by d with d the smaller dimension, assume X has rank d. y lives in an n-dimensional space, and n is generally much larger than d. The training set we're given is represented by the matrix X together with a point in this output space of y's; remember the y's are scalars, so each dimension of the output space represents a different example. It's an n-dimensional space where each axis is one example. There is some subspace of the output space, passing through the origin, for which there is a bijection with the Theta space, and the set of all points in the output space reachable by varying Theta is limited to that subspace. Remember that? We also spoke about projections. If you have a matrix X whose columns define a basis onto which we want to project, the projection matrix was, anybody remember? X times (X transpose X) inverse times X transpose. The subspace is the set of all points reachable through linear combinations of the columns of X, and the projection matrix onto that subspace is X (X transpose X) inverse X transpose. Now, given some observed y, it most likely does not reside in that subspace: the n-dimensional output space is so much larger, and d so much smaller than n, that the y we observe will very likely not lie exactly in the subspace. What we want to do is project this point perpendicularly onto the subspace. So y is our observed vector, which we project onto the subspace reachable through X. The point it gets projected to has a one-to-one correspondence with some point in the parameter space, which means X Theta hat = X (X transpose X) inverse X transpose y: the projection matrix applied to the vector we want to project. And this is basically another way of seeing that Theta hat = (X transpose X) inverse X transpose y. Does it make sense? The projection matrix takes any vector in the output space and projects it onto the column space of X, and we take the vector y and project it onto the column space of X.
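A quick numerical check of this claim, on hypothetical random data: projecting y onto the column space of X lands exactly on X times the normal-equation solution, and the residual is orthogonal to that column space.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))       # n = 50 examples, d = 3, full column rank
y = rng.normal(size=50)            # observed labels, generally not in col(X)

P = X @ np.linalg.inv(X.T @ X) @ X.T           # projection onto col(X)
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # normal equation solution

# The projected point equals X @ theta_hat, and the residual y - Py is
# orthogonal to the column space (both up to floating-point error).
print(np.allclose(P @ y, X @ theta_hat))              # True
print(np.allclose(X.T @ (y - P @ y), 0, atol=1e-9))   # True
```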
Once y is projected into the column space of X, there exists some vector in the input space corresponding to it, because there is a bijection between the column space and the parameter space. On that subspace, where the bijection exists, X is effectively invertible, so from X Theta hat = X (X transpose X) inverse X transpose y we can recover Theta hat = (X transpose X) inverse X transpose y. So what linear regression is doing, in this view, is projecting the y-values onto the subspace reachable through X, and then finding the Theta hat corresponding to that projection. Yes, question: how do we know it's the optimal Theta? Optimal in this sense: when we spoke about projections, we discussed the interpretation that the projected point is the closest possible point in the subspace to the true vector. So you're finding the Theta that takes you to the point closest to y. And Pythagoras's theorem tells you that the squared distance from y to the subspace is exactly the squared residual, which is exactly the least-squares objective. All right. With that, we'll conclude our survey of linear regression, and next we're going to start with classification.
Lecture 15: Reinforcement Learning II (Stanford CS229: Machine Learning, Summer 2019)
Welcome back, everyone. So today we're going to start lecture number 15, and the topic for today is basically what's left of reinforcement learning; for the purposes of this course, we're going to wrap up reinforcement learning today. First we're going to discuss learned models, and then talk about extensions of the methods we've seen to continuous-state settings. We're going to cover two topics there: discretization and fitted value iteration. Once we're done with those two, that basically marks the end of what's part of the syllabus for this course, the parts you'll be tested on. For the rest of the lecture, we'll do a quick survey of the larger reinforcement learning field in general: talk about the different kinds of approaches in reinforcement learning that we're not covering, and give you an approximate mind map of how to relate all the different concepts in RL to each other, all right? So before we jump into today's topics, let's do a quick review of what was covered in the previous class on Wednesday. We started off with a formalism called Markov decision processes, which describes the setting in which there is an agent making sequential decisions over time. An MDP, or Markov decision process, is a tuple of five entities. First is S, the set of all possible states that the agent can currently be in. A is the set of all actions that an agent can take; for simplicity, we'll assume that all actions can be taken in all states. P_sa are what are called the transition probabilities: we have one transition probability distribution for every combination of state s and action a, so each P_sa is a probability distribution over the set of states, and it is different for every pair (s, a). If we are in current state s and we take action a, then the state we end up in at the next time step is distributed according to P_sa. This is only describing how the world works; think of it as the laws of physics. You take an action and you end up in some state - this is how the world works, right? Gamma is what we call the discount factor, a value between 0 and 1. And R is the immediate reward function; it can be a function of the state alone, or of a state-action pair. It tells us the immediate reward we get for being in a particular state, right? And there are these two central concepts in reinforcement learning beyond the MDP formalism: the policy and the value. In a loose sense you can think of them as duals of each other. The policy is basically the rule book that the agent follows: it tells us, when we are in a current state s, what action we need to take. The action we take, based on the policy, decides which new state we end up in, based on the transition probabilities. So the policy is just a rule book of what action to take when we are in a particular state. And the value function is the expected long-term accumulated reward, discounted by the factor Gamma. So V^pi(s) is the expected sum of all future rewards.
That is, the reward at the current state, plus a discount times the reward at the second state, plus the discount squared times the reward at the third state, and so on; it is the expected sum of these discounted future rewards, assuming we start at a state s_0 equal to the point where we're evaluating the value function, and assuming we follow some particular policy pi, right? This can be rewritten recursively, and that recursive form is called the Bellman equation. There was a question from some students about how we go from one form to the other, and there's a short derivation that I posted on Piazza, post number 240, if you want to see it. It's pretty straightforward. Now, there was also a question about Gamma yesterday: should it be the same value, should it be less than 1, and so on? The answer is it should be strictly less than 1; it cannot be greater than 1, and it is this Gamma less than 1 that makes the value function bounded, right? It's easy to see: suppose Gamma were equal to 1. Then in general we could keep accumulating rewards without limit, and the expected sum of all future rewards could explode to infinity. But we have this discount factor Gamma, which is mostly here for mathematical reasons, just to make the value function bounded. It also happens to have nice interpretations, such as an interest rate if you're working in a finance setting, or you can think of it as the rate at which your robot's fuel is decreasing, or whatever. So you can have different interpretations for Gamma, but it exists mostly for mathematical convenience, to make the value function bounded. And the reason it's bounded is pretty simple; here's a quick proof. The expected sum of rewards, E[R(s_0) + Gamma R(s_1) + Gamma^2 R(s_2) + ...], is at most R* + Gamma R* + Gamma^2 R* + ..., where R* is the maximum possible reward you can get in any state. Taking R* out as a common factor, you get R* (1 + Gamma + Gamma^2 + ...) = R* / (1 - Gamma), which is bounded. So the expected sum of all future rewards will always be bounded if we use a Gamma less than 1, right?
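As a quick numeric sanity check of that geometric-series bound (illustrative only; the reward cap R* and the values here are made up), a truncated discounted return never exceeds R* / (1 - Gamma):

import numpy as np

gamma, R_star = 0.9, 1.0
rng = np.random.default_rng(0)
rewards = rng.uniform(0.0, R_star, size=10_000)   # any rewards <= R*

discounted = sum(gamma**t * r for t, r in enumerate(rewards))
bound = R_star / (1.0 - gamma)                    # sum of R* * gamma^t
print(discounted, "<=", bound)
assert discounted <= bound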
And then, based on these two central concepts of policy and value, we can define the optimal value function. The optimal value function is the best possible long-term value we can achieve by choosing a suitable policy: if we optimize over all possible policies, what is the best expected long-term value we can get, right? And we can see that V*(s) can also be written as V*(s) = R(s) + max_a Gamma E_{s' ~ P_sa}[V*(s')], and this is pretty intuitive. What it means is: the best possible long-term value is the sum of the reward at the current step, which is immutable - the reward we get at the current step is the same no matter what our policy is - plus a term we get to choose. This is a recursive definition: choose an action such that we maximize the expected long-term value at the next time step, right? Depending on which action we choose, we get a different transition probability distribution over next states, and we take the expectation of the next state's value according to that distribution. So the max operator is essentially choosing among different transition distributions: choose the action whose resulting transition probabilities give you the highest expected future value at the next time step. It is this recursive definition which, somewhat confusingly, is also called the Bellman equation. And corresponding to the optimal value function there is an implicit optimal policy: the optimal policy is any policy that allows us to achieve the optimal value function, right? In theory it is possible that multiple such policies exist, so strictly speaking pi*(s) should be an element of the arg max set, because there could be multiple actions you can take at a state that maximize the future value. But for the purposes of this course we'll assume it is unique, and it is this optimal policy that is our end goal; that's what we strive to achieve. When we say we want to solve an MDP, it means we want to find pi*, the optimal policy, in some way - that's called solving an MDP. And then we saw the relation between policy and value, right? They have this intimate relationship: given a policy pi, there is a corresponding value function that you achieve by just following that policy all the time. We saw this derivation last time: given a policy pi, the associated V^pi can be derived as V^pi = (I - Gamma P_pi)^{-1} R, where I is the identity matrix and P_pi is the transition matrix in which, for each state, we choose the row P_sa with the action a taken from pi. So from pi we can recover V^pi this way, and this is called policy evaluation. That's one direction. The other direction is where you're given a value function, and this value function induces a policy, also called the greedy policy with respect to V, right? That policy is: at any given state s, the prescribed action is the one that maximizes the expected value of the next time step, where the value is defined by the given value function - that is, pi(s) = arg max_a E_{s' ~ P_sa}[V(s')]. All right? So we can go between policy and value, and value and policy, using these two relations. A subtle point to note: given a policy, we get a corresponding value function associated with that policy. However, a subtle point which may not be obvious at first sight is this: if we take the policy pi that was induced in this greedy way from some value function, and we follow that policy, the resulting value we get is not necessarily that same value function. It may be a different value function, right? That's a subtle point.
A value function induces a policy, but if we follow this policy, we need not get the same value function back, right? That's a subtle point, and it's something we're going to see in policy iteration. And we saw one algorithm for solving MDPs: the first algorithm was value iteration. Value iteration is an iterative algorithm where we start at some arbitrary value function - you can either set it to all zeros or initialize it to the reward function itself. You start at some value function, and we iterate over and over, where we take the value function of the current iteration and run it through what is called the Bellman backup operator to get the next value function, then plug that in to get the next one, and so on. The Bellman backup operator is defined this way: if V' is the Bellman backup of the previous V, then V'(s) = R(s) + Gamma max_a E_{s' ~ P_sa}[V(s')]. This expression is exactly the right-hand side of the optimal value function's Bellman equation, right? We run this algorithm in a loop, iteratively, until we converge to a V that stops changing. And once we get that V function, we can recover the corresponding greedy policy using the relation above, and that's our optimal policy - that's our solution to the MDP. The reason this converges is that it's possible to show the Bellman backup operator is what is called a contraction mapping. Imagine a functional space where each point represents an entire value function - the coordinate along a particular axis is the value the function assigns to that state, right? The Bellman backup operator is a contraction mapping, which means if we take any two value functions and run both of them through the Bellman operator, the distance between the outputs will be smaller than the distance between the inputs. That's why it's called a contraction mapping: as you repeatedly apply the operator, points get closer and closer, right? The point at which they converge is called the fixed point of the operator, and that fixed point is V*, the optimal value function.
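Here is a minimal tabular value-iteration sketch for a finite MDP (the toy transition probabilities and rewards are invented for illustration), applying the Bellman backup V(s) := R(s) + Gamma max_a E_{s' ~ P_sa}[V(s')] until convergence and then reading off the greedy policy:

import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
# P[a, s, s'] = probability of landing in s' after action a in state s
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(size=n_states)          # reward as a function of state

V = np.zeros(n_states)
while True:
    # Bellman backup: for each s, R(s) + gamma * max_a E_{s'~P_sa}[V(s')]
    V_new = R + gamma * (P @ V).max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:   # contraction => convergence
        break
    V = V_new

pi_star = (P @ V).argmax(axis=0)   # greedy policy w.r.t. the converged V
print(V, pi_star)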
And then we saw this other algorithm called policy iteration. So value iteration is one possible algorithm for solving an MDP; policy iteration is another. In policy iteration we basically exploit the subtlety I mentioned earlier: from a given pi we get a corresponding V, where this V is the value you get by following the policy pi; and there is the other direction, where we recover a policy using the guidance of some value function. We use the current value function estimate to choose a policy that strives to do as well as possible given that estimate, and if we actually follow this policy, the resulting value function may be different, right? That is the subtlety we use here. So: start with some randomly initialized policy pi. Step one: for the given pi, calculate the corresponding value function, which is just what we call policy evaluation. Step two: once we have V^pi, update pi to be the greedy policy with respect to V^pi. And we do this over and over until pi converges. This is called policy iteration, and the converged pi will be pi*. Once we get pi*, we can again recover V*, and the pi* and V* we get from this algorithm and from value iteration are essentially the same. Right? So this is how much we covered in the last class. Any questions on this before we jump into today's topics? Right, cool. So today we're going to start seeing extensions of these methods. The first extension we're going to see is: what happens when we don't know P_sa? What happens when P_sa is not given? Okay. So this is our first extension, and it corresponds to section 3 in the notes: learning a model for an MDP. Here, in reinforcement learning, when you read the literature - textbooks, papers, and so on - you will see this terminology of model-based versus model-free. In this terminology, 'model' does not mean the model that we are fitting; it refers to the model of the environment, of the universe in which the agent is operating, right? Here, 'model' essentially means P_sa. If you know P_sa - if you know what state you might end up in depending on what action you took - it means you know how the environment works; you have a model of the environment. That's what this 'model' means. So P_sa is also called the model, and the first topic we're going to cover is learning a model. Again, by learning a model, I mean learning P_sa. So suppose we don't know how the transitions work; we don't know what state we might end up in. Just assume P_sa is not given. What do we do? When P_sa is not given, one thing we can do, assuming we have some kind of simulator where we can run these experiments, is run multiple trials. In each trial you start at some state s_0, take an action a_0, end up in s_1, take another action a_1, end up in s_2, and so on; then repeat the trial starting from a new s_0, and so on. So each trajectory corresponds to one trial, right? In the homework that we'll be releasing on Monday, you'll see an example of this.
So the homework problem is about balancing a cart-pole, the inverted pendulum: you're given a stick with one degree of freedom, which can swing around, balanced on a cart; the cart also has one degree of freedom, and you need to keep moving the cart to keep the stick balanced upright, right? A trial in that case starts at some initial configuration - an initial position, an initial angle of the stick, an initial velocity of the cart - and the actions are to move the cart left or move the cart right. So you take some random actions starting from some initial state until the stick falls down or some such terminating condition, right? And then you start over again from a different starting state and take different actions. You may be following some particular policy, you may be taking random actions - it does not matter; we are basically exploring the environment, learning the environment. Okay, so take a bunch of such trials, and then construct estimators for P_sa from them: P-hat_sa(s') = (number of times we took action a in state s and got to state s') / (number of times we took action a in state s). So run lots and lots of such trials, and once you've run them, count the number of times we were in a particular state s and took a particular action a - that goes in the denominator - and, of those, the fraction of times we arrived at state s'. Okay. So this is the simple maximum likelihood estimator of the transition probabilities. All right. Now, it may happen that both of these counts end up being 0, giving 0 over 0. We saw this with Naive Bayes, when getting maximum likelihood estimates based on counts such as these: for some cases we may get 0 over 0. In those cases, a common practice is to assume a uniform distribution for P_sa: if we have never reached state s and taken action a from it, just assume we end up in any state uniformly. The moment we actually take action a at state s, we can update our counts to reflect the actual distribution - the denominator is no longer 0, it will be at least 1, and we no longer have 0 over 0. Until then, it's common to assume a uniform distribution over future states. Okay. Using this kind of estimate of the transition probabilities - a quick sketch of the estimator follows - we can now construct an algorithm. It looks like this. This is an algorithm with model learning; again, by model I mean P_sa, the transition probabilities.
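A count-based maximum-likelihood estimator like the one on the board could be sketched as follows (illustrative only; the trajectory encoding is made up, and this is not the official p-set code), including the uniform fallback for state-action pairs we've never visited:

import numpy as np

def estimate_transitions(trials, n_states, n_actions):
    """trials: list of [(s0, a0), (s1, a1), ..., (sT, None)] trajectories.
    Returns P_hat[s, a, s'] = #(s, a -> s') / #(s, a), uniform if unseen."""
    counts = np.zeros((n_states, n_actions, n_states))
    for trial in trials:
        for (s, a), (s_next, _) in zip(trial[:-1], trial[1:]):
            counts[s, a, s_next] += 1         # consecutive (s, a, s') triples
    totals = counts.sum(axis=2, keepdims=True)
    P_hat = np.where(totals > 0, counts / np.maximum(totals, 1),
                     1.0 / n_states)          # 0/0 case: uniform over states
    return P_hat

# e.g. one trial: s=0 -a=1-> s=2 -a=0-> s=1 (final state has no action)
P_hat = estimate_transitions([[(0, 1), (2, 0), (1, None)]], 3, 2)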
Step 1: initialize pi randomly. Step 2: repeat: (a) execute pi for some time - that is, run a few trials where the actions you choose are based on the current (initially random) pi. (b) Using these trials, estimate P-hat_sa; by estimating P-hat_sa, I mean estimating |S| times |A| probability distributions, one for each (s, a) pair, where each distribution is over all the future states. (c) Apply value iteration - I'll write it as VI - using P-hat_sa to get an updated value function V. And (d) update pi to be greedy with respect to V. Right? So what is this algorithm telling us? Start with some random initialization of the policy and follow that policy for some time. By following this policy for some time, what we're doing is not only gathering data to estimate a value function; we are also learning P_sa. By following some policy, we're learning how the environment works: you take some action from the current state, you end up in some new state, and you keep observing that again and again - you tend to learn how the dynamics of the system work, right? That's a separate task from estimating the value function corresponding to that policy. Using this experience, we can estimate P-hat_sa. It's important to note that learning P-hat_sa is independent of the policy we were following, and in fact, to estimate P-hat_sa in each iteration, we can use the accumulated counts from all the previous iterations as well, to get a better estimate. Then, using the estimated P-hat_sa, we perform value iteration. Now, recall what value iteration was: in value iteration we loop over Bellman backups until convergence, so the line 'apply value iteration' in step 2(c) is actually a full inner loop. And inside value iteration we need to compute an expectation over next states, and it is for this expectation that we use the P-hat_sa's instead of the P_sa's. We don't know P_sa; we are using data to construct the estimate P-hat_sa, and estimating this P-hat_sa is what we call learning a model. We use the estimated dynamics of the environment to perform value iteration and read off a V*. So it is the V* of that particular iteration, and it will change from iteration to iteration of the algorithm because we're using different estimated dynamics each time. Yes, question? [inaudible] Yeah, good question. So the question is: why not just perform (a) and (b) over and over until we're satisfied, and only then do (c) and (d)? The reason that might not work all the time is that unless you have a good policy, you may not even reach certain states. [inaudible] So the question is: why not exhaustively search over all states and all actions - all (s, a) tuples?
And the reason that might not be possible in practice is that we may not be able to arbitrarily choose the state we start from. We may be limited to starting from certain initial states and exploring by taking actions; it may not be possible to say, I want to start at some particular state. And exhaustively trying all possible state-action sequences can be exponentially complex - you may not be able to brute-force a full sweep of the entire state and action space; it could take years, for example. That's why we follow this iterative approach, where we run some initial trials under some policy, get an estimate, perform value iteration, and use the result to drive the exploration in the next iteration. Right. Yes, question? [inaudible] So the question is: can we use parts of episodes as new episodes? The answer is, to calculate this estimator we only look at one step at a time: we take (s, a, s') and count it. So when we're estimating the probabilities, we take all the episodes we have and look at every consecutive (s, a, s') triple. All right, so this is an algorithm where we are simultaneously performing value iteration and learning the dynamics of the system. At step (c), we achieve the optimal value function assuming a certain given dynamics: for the particular P-hat_sa we've learned, the resulting V at the end of one pass of this loop gives us the V* corresponding to that P-hat_sa, right? But that P-hat_sa might not be the correct dynamics of the system, because it's just an estimate. And then we use the resulting greedy policy to go back and gather more experience: run some more trials, improve our P-hat_sa, and re-run value iteration. Now, one obvious computational optimization: when we start value iteration inside some outer-loop iteration, instead of initializing the value function at zero every time, we can initialize it at the point where it converged in the previous iteration. That will allow us to have much faster convergence in step (c). Any questions on this? Cool. Yes, question? Can you explain what step (d) [inaudible]? Step (d) - update pi to be greedy with respect to V - is exactly the greedy-policy relation: you have some V, and you get the pi corresponding to that V. So, value iteration and policy iteration assume that we know the dynamics, right? And when we don't know the dynamics - the exact transition probabilities - you can think of that as one relaxation of the MDP.
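As a rough end-to-end sketch of the loop just described (illustrative, not the p-set code; the environment is assumed to be given as a sampler step(s, a) -> s' and a known reward vector R), interleaving model learning, warm-started value iteration, and greedy policy updates:

import numpy as np

def model_based_rl(step, R, n_states, n_actions, gamma=0.9,
                   n_outer=30, n_trials=20, horizon=50, seed=0):
    """step(s, a) -> s' simulates the unknown environment; R is the
    (known) reward vector over states."""
    rng = np.random.default_rng(seed)
    pi = rng.integers(n_actions, size=n_states)        # 1. random policy
    counts = np.zeros((n_states, n_actions, n_states))
    V = np.zeros(n_states)
    for _ in range(n_outer):                           # 2. repeat
        for _ in range(n_trials):                      # 2a. execute pi
            s = rng.integers(n_states)
            for _ in range(horizon):
                a = pi[s]
                s2 = step(s, a)
                counts[s, a, s2] += 1                  # accumulate counts
                s = s2
        totals = counts.sum(axis=2, keepdims=True)     # 2b. estimate P_hat
        P_hat = np.where(totals > 0,
                         counts / np.maximum(totals, 1), 1.0 / n_states)
        for _ in range(1000):                          # 2c. value iteration,
            V_new = R + gamma * np.einsum('sat,t->sa', P_hat, V).max(axis=1)
            if np.max(np.abs(V_new - V)) < 1e-8:       #     warm-started at
                break                                  #     the previous V
            V = V_new
        pi = np.einsum('sat,t->sa', P_hat, V).argmax(axis=1)  # 2d. greedy pi
    return pi, V

Note how V is carried across outer iterations rather than reset to zero, which is exactly the warm-start optimization mentioned above.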
And that gives us this algorithm, where we are simultaneously learning the dynamics - learning the model - and performing value iteration according to the learned dynamics. Next, we will see another relaxation. Thus far we have limited our study to the types of MDPs where the states and actions are discrete and finite. Now, what happens when the state is continuous? What happens if the set of all possible states is continuous? For example, in the inverted pendulum problem that you'll see in your next p-set, the state of the pendulum, which you're trying to keep upright, can be represented by the angle it makes with the horizontal - the floor or the table, wherever it is placed - and that angle is continuous, right? So in reality, the state of the pendulum, or of your system or your agent, is continuous. What do we do in such situations? One obvious, simplistic approach is what is called discretization. In discretization, the idea is pretty simple. Suppose this is the state space of, say, the inverted pendulum problem. As a reminder, in the inverted pendulum problem we have a cart that can move along one direction, and on top of that we have a stick that you're trying to balance. The stick could tilt one way or the other according to the dynamics of the system, and if it is tilting to the left, then we want a policy that recognizes this and moves the cart to the left. So the cart has two actions, move left or move right, and the state can be any possible angle, any possible velocity, any possible position of the cart, right? So suppose we plot angle against velocity: this plane represents the set of all states the system can be in. However, the algorithms we've studied so far are all designed for finite states and finite actions, right? So a common technique in such cases is to discretize your state space into discrete cells. What used to be a continuous problem becomes, by discretization, a discrete problem over a discrete set. And we generally have some kind of bound on the state space: angles are naturally limited, say between 0 and 180 degrees, so even though the state is continuous there's a natural boundary on both sides; similarly the velocity can be bounded between zero and the maximum speed the cart can go - 10 miles per hour, or some such thing, right? And then we can discretize this bounded state space into a finite number of virtual states that we'll be working with. Yes, question? So in the p-set, you'll have to program it with, say, some laws of physics, right?
Like you would [inaudible] maybe even differential equations which would [inaudible]. And for the state space we can also exploit some kind of symmetry, right, so we don't have to - yeah. So the question is, we could limit this more using domain knowledge, like symmetry and so on. But that's not the point here. The main concern with such approaches - and this is what we're coming to next - is what happens once we come up with a discretized state space S-bar, where S was continuous and S-bar is its discretized version. There are basically two problems, and here's the first. Let's switch back to supervised learning and linear regression for a brief moment. Suppose we had a dataset that looks roughly linear, with x on one axis and y on the other. What kind of model would you try to fit? Linear, right? This looks like a perfect textbook case of a dataset where linear regression would work very well: we would just fit a straight line through it. However, if we were to discretize the x-axis and learn the appropriate y-value within each discrete unit - taking the average in each bin to be the estimator for that bin, which is essentially what moving from a continuous state to a discrete state by discretizing does - we would get a staircase-shaped fit. This is pretty bad, because first, it is much more complex: the number of parameters we need is the number of bins we break the axis into. And further, we are not getting any kind of generalization. What happens if our dataset had some examples on the left, some examples on the right, and none in the middle? Discretization would not work, because we have never seen what happens in the middle bins: in those regions we would have zero examples, and discretization is a pretty bad thing to do in such cases, right? Instead, we would rather have a simple straight line, which uses a much smaller number of parameters and can also interpolate well in regions where we don't have a lot of examples. Right. So this idea of having some kind of fitted function not only reduces the number of parameters that we need, but we also get some amount of generalization into regions where we don't have a lot of examples - there's a sketch of this contrast below. As far as your homework problem is concerned, we will actually be solving it with discretization. That's because it's a simple problem, and the purpose of the homework is to give you a framework to understand value iteration and estimating the transition probabilities, etc. However, in larger, more complex problems, discretization usually is not the first tool you reach for. In some cases you do use discretization, but there you need to heavily use the domain knowledge of the problem space to reduce the dimensionality. Okay.
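To see the generalization failure concretely, here is a small illustrative sketch (entirely made-up data) comparing a per-bin average against a straight-line fit on truly linear data, including a gap in the middle where discretization has nothing to say:

import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0, 4, 40), rng.uniform(6, 10, 40)])
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=80)   # truly linear data

# Discretized "model": average y within each of 10 bins on [0, 10]
bins = np.linspace(0, 10, 11)
which = np.clip(np.digitize(x, bins) - 1, 0, 9)
bin_means = [y[which == b].mean() if np.any(which == b) else np.nan
             for b in range(10)]                   # empty bins: no estimate

# Linear fit: 2 parameters, interpolates into the empty region (4, 6)
theta = np.polyfit(x, y, deg=1)
print("bin means:", np.round(bin_means, 2))        # nan where x in (4, 6)
print("line predicts at x=5:", np.polyval(theta, 5.0))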
So with discretization, one problem is what we just saw: we need to break the space into bins, and it does not generalize well. The other problem with discretization is what's called the curse of dimensionality. Suppose we have a state space with, say, twelve components. Say you're trying to fly a helicopter: the states you have include the x, y, z position, and the orientation of the helicopter - the angle the nose is pointing up or down, the angle it is turned left or right, and one more angle (I'm not familiar with flying helicopters, but roughly roll, pitch, and yaw). Then you have the velocities x-dot, y-dot, z-dot - the velocity along each axis - and the three angular velocities, phi-dot, theta-dot, psi-dot. So three plus three plus three plus three: twelve different coordinates in your state space, and they're all continuous, right? And if you were to discretize each of them into, say, ten units, you would essentially get 10^12 discretized states. That's pretty massive. It is exactly this kind of explosion in the number of states under discretization that is called the curse of dimensionality, and it's probably one of the biggest limitations of discretization: as you add components to your state space, the number of discretized states just explodes. Right. And that brings us to what is called value function approximation. In value function approximation, we now want to work in a setting where S is continuous, and we don't discretize. And in this case, the most common setting is to have some kind of simulator: a simulator that takes the current state s_t and action a_t as input and gives you s_{t+1} as output. Okay. And the biggest difference between this and the previous model we were working with is that previously we had a finite number of states, and for every state-action pair we had a multinomial kind of distribution over the next state - I'm referring to the collection P_sa as a collection of multinomials. But over here the state space is continuous, so in place of a multinomial model, which worked well in the discrete setting, we instead assume we have some kind of simulator where we feed in a real-valued state and an action and get out the next real-valued state. All right?
And these simulators are generally things like physics simulators, which are commonly used; or if you're working in some kind of game-playing scenario, the game engine itself can act as the simulator, and so on, right? Now what we want to do is basically the equivalent of the counting estimator, but in a continuous space. Counting worked well before because we could count discrete things; now we're in a continuous space, and counting doesn't work anymore, right? So here we're going to learn the simulator, to the extent possible. And what do I mean by that? What I mean is that we're going to make the assumption that s_{t+1} is some function of s_t and a_t. For example, we could assume a linear form for this relation, which means f is linear: s_{t+1} = A s_t + B a_t, where A and B are matrices. You can think of it this way: if the state s is in R^d, then A is in R^{d x d}; and if the action a is in R^p, then B is in R^{d x p}, right? For those of you who have some background in electrical engineering, you might recognize this as a linear dynamical system: you have some state and you take some action, and the action and the state have this linear relationship that decides the next state, right? And assuming your environment has this kind of linear dynamical relation, we can then do the equivalent thing: follow some policy and get some experience against the simulator. Trial 1: s_0^(1), a_0^(1), s_1^(1), a_1^(1), s_2^(1), and so on; trial 2: s_0^(2), a_0^(2), and so on. So what's happening here is that these are trials we run against the simulator according to some policy, and from them we collect (s, a, s') tuples: s and a got us to s', that's one example; and so on. We can collect this dataset of (s, a, s') tuples and then define a loss function: J(A, B) = sum_{i=1}^{n} sum_{t=0}^{T-1} || s_{t+1}^(i) - (A s_t^(i) + B a_t^(i)) ||^2, assuming we do n trials and, just to make the math simple, that all of them terminate after T time steps. All right. So we define a cost function where s_{t+1}, the next state, is considered the true label: think of s_{t+1} as y, and of (A, B) as Theta and (s_t, a_t) as x, so that A s_t + B a_t is your hypothesis. And now, once you have a loss function like this, you can minimize it - for example with gradient descent - and recover A and B, which are the parameters of this cost function.
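Minimizing J(A, B) is itself just a least-squares problem, so rather than gradient descent you could also solve it in closed form. A minimal sketch under that observation (toy synthetic data; the shapes follow the board, s in R^d and a in R^p, and A_true/B_true are invented for the demo):

import numpy as np

rng = np.random.default_rng(0)
d, p, N = 4, 2, 500                      # state dim, action dim, # of (s,a,s')
A_true = 0.9 * np.eye(d)
B_true = rng.normal(size=(d, p))

S  = rng.normal(size=(N, d))             # states s_t (stacked across trials)
U  = rng.normal(size=(N, p))             # actions a_t
S2 = S @ A_true.T + U @ B_true.T + 0.01 * rng.normal(size=(N, d))  # s_{t+1}

# Stack [s_t; a_t] so s_{t+1} ~ [A B] [s_t; a_t]; least squares for [A B]
Z = np.hstack([S, U])                          # N x (d + p)
AB, *_ = np.linalg.lstsq(Z, S2, rcond=None)    # (d + p) x d, i.e. [A B]^T
A_hat, B_hat = AB[:d].T, AB[d:].T
print(np.round(A_hat - A_true, 2))             # should be near zero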
Next question? [inaudible] So the question is: can we think of this as a policy, because it takes s and a and gives us the next state? A policy takes as input the current state and returns as output the action we should take while in that state, right? So no - think of this as the dynamics of the system: if you are in a current state and you take an action according to some policy, the dynamics tell you what new state you'll end up in; whereas a policy tells you, given the current state s, what a should be. That's the policy. [inaudible] So the question is: aren't we concerned about the policy in a continuous state space? The answer is yes. Right now we are still at the stage of doing something equivalent to learning the model; once we've learned the model, we will plug it into an algorithm like the earlier one, where we interleave learning the model with estimating the value and the policy. We are yet to construct the full algorithm; we are only addressing the model-learning part of the corresponding continuous-state algorithm. Yes, question? [inaudible] That's correct: the first sum is across trials, and the second sum is within a trial. And would you explain [inaudible]? So the question is: why is linearity a reasonable assumption? It may or may not be. This is just an example: if you model the dynamics as a linear system with A and B, then you get a cost function like this and can learn a suitable model. You can absolutely use a more complex model here - instead of A s_t + B a_t you would have a more complex form - and minimize that. That's totally reasonable too. [inaudible] At this point, let's just assume our model can be represented as a linear system; a lot of examples can actually do pretty well with this, and we'll shortly see what changes can be made to handle many more cases. Right. Yes, question? So in this case, in the next step [inaudible]. So the question is: in a continuous setting, should the action also be real-valued for the policy? It need not be. You can have continuous states but still discrete actions. For example, if you're trying to balance a cart-pole, your state is continuous - the states have angles and velocities - but your action can still be discrete: move left or move right. Having continuous states does not mean your actions also have to be continuous. In cases where the action is discrete and the state is continuous, your policy takes a continuous input and outputs a discrete action. [inaudible] Yes. [inaudible] Yes, the policy will be some kind of function. Although - one thing to notice: should a policy always be an explicit function? If you look at the greedy-policy relation from before, the induced policy according to some V need not have any particular functional form. You can just evaluate the V function, which takes a continuous input, at the next states you'd reach under each possible action - according to the dynamical system we constructed - and take the argmax, right? So the policy can be implicit, induced by some value function and some model.
It need not always have an explicit functional form. All right. So if we minimize this cost, (A-hat, B-hat) = argmin over (A, B) of J(A, B), then A-hat and B-hat will be our learned model. Again, by model I mean what we have learned about the way the world works: this (A-hat, B-hat) is the equivalent of P-hat_sa. P-hat_sa gave us a way to relate a state and action to the next state, and here A-hat and B-hat give us a way to relate a state and action to the next state. Right. Yes, question? [inaudible] We'll come to stochasticity next, yeah. Right. So the linear model we learned is, in a sense, deterministic: s_{t+1} = A-hat s_t + B-hat a_t. Once we minimize the cost function and recover A-hat and B-hat, the prediction we make for the next state follows this. If we just limit ourselves to this, we would call it a deterministic model. But in reality it's always a good idea to have some kind of stochasticity in your model, which means we want to predict s_{t+1} = A-hat s_t + B-hat a_t + epsilon, where a common choice is epsilon drawn from a normal distribution with mean 0 and some covariance Sigma. And it's also possible to learn this Sigma from data. The way to estimate this covariance is roughly Sigma-hat = (1 / (nT)) sum_{i=1}^{n} sum_{t=0}^{T-1} (s_{t+1}^(i) - A-hat s_t^(i) - B-hat a_t^(i)) (s_{t+1}^(i) - A-hat s_t^(i) - B-hat a_t^(i))^T. If you have done this with GDA, this should be familiar: the maximum likelihood estimate of a covariance is just the average of residual times residual-transpose across all the examples. Yes, question? [inaudible] Because we're summing across all n and T, right. So in general, it is considered good practice to build stochastic models, where each prediction we make about the next state samples an epsilon according to this distribution, and the distribution is learned, with the covariance estimated this way. So you can think of Sigma-hat as that average outer product of residuals. Yes, question? So let's say [inaudible], how do you know when to stop that step - when do you know your estimates are [inaudible]? So, for the p-set, you will not be doing any of this. In general, the question is: when do we know we have collected enough experience? In the discrete setting, you can stop when the V* you get at the end of each iteration starts converging. [inaudible] So you can phrase the same question as: when do we know our estimate of P-hat_sa is sufficient, which is the same as asking when the estimates of A and B are sufficiently correct? And in this case, you can consider them sufficient when the overall outer loop converges. [inaudible] Yeah, for the p-set there are instructions given in the p-set itself; you can follow those, and the general idea is that when your value function stops changing over time, that's a good sign.
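Continuing the earlier sketch, the noise covariance could be estimated from the residuals and then used for stochastic next-state predictions (illustrative only; this reuses the hypothetical A_hat and B_hat from the previous snippet, and the normalization by the total number of transitions is the MLE form described above):

import numpy as np

def fit_noise_cov(S, U, S2, A_hat, B_hat):
    """MLE of the noise covariance: average of residual outer products."""
    resid = S2 - S @ A_hat.T - U @ B_hat.T          # s_{t+1} - A s_t - B a_t
    return resid.T @ resid / len(resid)             # (1 / nT) sum r r^T

def sample_next_state(s, a, A_hat, B_hat, Sigma_hat, rng):
    """Stochastic prediction: s' = A s + B a + eps, eps ~ N(0, Sigma)."""
    return A_hat @ s + B_hat @ a + rng.multivariate_normal(
        np.zeros(len(s)), Sigma_hat)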
Right. So, to recap: instead of constructing a deterministic model, you can, and usually should, construct a stochastic model, where the noise you inject can itself be estimated by maximum likelihood. A and B were estimated using MLE, and the noise covariance, consistent with the residual errors we're making, can also be learned this way. When you make the next prediction, you just sample some noise according to that covariance and make a noisy prediction every time. So this is how we would learn a model in the continuous setting. Now, having addressed modeling the environment as continuous, what do we do about the value function itself? Because, if you remember, s appears both in the model - in the transitions - and as the input to V, right? In the discrete setting, V was just an array whose length was the number of states, and the transition function, the environment, was also just a matrix, since the number of states was finite. But now that we've moved to continuous states and made the transitions continuous, we need to make the value function continuous as well. Right? And a common approach is to start from the Bellman equation: V(s) = R(s) + Gamma max_a E_{s' ~ P_sa}[V(s')]. Now, you can think of P_sa, from our learned model, as a normal distribution whose mean is As + Ba and whose covariance is the Sigma we estimated: the next state is sampled according to that. And so this becomes V(s) = R(s) + Gamma max_a integral of P_sa(s') V(s') ds', where P_sa(s') is the Gaussian pdf. So you can think of this value function as the analog of the discrete version, where the expectation is now over a continuous random variable instead of a discrete one, and the input to the value function is also a continuous value, right? And this identity is again called the Bellman equation; it's just the continuous version of the discrete setting. However, what we end up doing in practice is to use what's called a fitted value function, or value function approximation, where we say, for example, V(s) = Theta^T phi(s). Just like linear regression with some kind of feature map: you construct a feature map phi of your state space using whatever domain knowledge you have, and assume some kind of linear relationship between the features of a state and its value. Right?
And with this assumption, what we want to do is come up with a value function, a V function, that has this form but satisfies the Bellman equation - which is basically what we did with value iteration in the discrete case, and we're now going to do the same thing in the continuous case, right? These two assumptions give us an algorithm, and it looks like this. This is an algorithm for continuous states with a learned model - you can take the model to be a linear dynamical system with the A and B matrices, s_{t+1} = A s_t + B a_t - together with value function approximation, where V(s) = Theta^T phi(s). So this algorithm is basically an adaptation of the earlier model-learning algorithm, but to the setting of continuous states and value function approximation, right? Over there, you could think of the states and the value function as non-parametric, in the sense that we did not have a family of functions for V or P_sa. Here, however, everything is parametric: the model has A and B as parameters (and Sigma too, if you include the noise), and the value function has Theta as its parameter. Right? So it's an adaptation of that algorithm to this setting:
1. Randomly sample n states s^(1), s^(2), ..., s^(n) from S.
2. Initialize Theta := 0.
3. Repeat until convergence:
   For i = 1, ..., n:
     For each action a in A:
       Sample s'_1, ..., s'_k from P_{s^(i), a} - for example, from the normal distribution with mean A s^(i) + B a and covariance Sigma-hat.
       Set q(a) = (1/k) sum_{j=1}^{k} [ R(s^(i)) + Gamma V(s'_j) ], where V(s) = Theta^T phi(s).
     Set y^(i) = max_a q(a).
   Set Theta := argmin over Theta of sum_{i=1}^{n} (Theta^T phi(s^(i)) - y^(i))^2.
So the q(a) computation closes the loop over actions, the y^(i) assignment closes the loop over i, and the Theta update closes the outer repeat loop. What's happening here? First, we initialize the Theta of our value function class to some value - here Theta = 0 - and then we repeat this iteration over and over, where, for each of the states we sampled randomly, we want to come up with an estimate y^(i) for each s^(i). That's the goal of the loop over i: think of y^(i) as the label for V(s^(i)). We want to construct an estimate of V(s^(i)) for each i, and that's what the inner block achieves.
And the way we go about doing that is: for each s^(i), take all the possible actions starting from s^(i), and for each action, sample k next states. So for each s^(i) and action, we get k next states sampled according to the transition model - and for this transition we use the learned model; it could be a linear dynamical system, it could be anything more complex. Then we define this quantity q(a) as the estimated expected value of taking that action from that state: q(a) is the average over j of R(s^(i)) + Gamma V(s'_j). Does that make sense? So we start with some sampled state s, and from this s we take every possible action a_1, a_2, ..., up to a_|A|. For each action we take, we get k next states, s'_1 through s'_k - for each action, for each state. Right? And then at each of those next states, we evaluate the value according to the current estimate of the V function - the current estimate uses the Theta value we have so far; in the first iteration it's 0, after that it's something else. Using the evaluations of these states under the current estimate of the value function, we construct the q value as the estimated expected value of taking that action. Yes, question? [inaudible] Yes, phi(s) is just a feature map, and you define phi(s) according to your problem setting, right? In homework one we defined feature maps as sinusoids, squares, polynomials, etc. You define some such feature map for your state space. Question? [inaudible] Can you explain that design - why we [inaudible]? I'll come to that; for now, assume V has this linear form, right? So what we do is: for each s, we take all the possible actions, and from each possible action we sample new states; for each of those new states we calculate the value according to the value function approximation we have, and then take the average. This q is now an approximation: q(a) is approximately R(s^(i)) + Gamma E_{s' ~ P_{s^(i), a}}[V(s')]. And the reason is that R(s^(i)) is just a constant, so you can take it out of the average, and what remains is a Monte Carlo estimate of this expectation - Monte Carlo because we are approximating the transition distribution with k samples - and it's also an approximation of V, because we are only using the current estimate of the value function with this functional form. Yes? So q(a) is the estimated expected long-term value for taking action a from this state. Shouldn't it be Q(s, a)?
Yeah, you can think of it as q(s, a); that's fine too. So this q is an approximation of the right-hand side of the Bellman equation. We now define y^(i) as the max over the q's: y^(i) = max_a q(a). Since R(s^(i)) has no a in it, it comes out of the max, so y^(i) is approximately R(s^(i)) + γ max_a E_{s' ~ P_{s^(i)a}}[V(s')]. And this now looks like the Bellman backup operator. According to the Bellman backup operator, we would want to set V(s^(i)) equal to this. But our V takes a parametric form, so we cannot just set V(s^(i)) to some value: V is not an array, and we are not setting its s-th element. What we do instead, because we are in this fitted-value setting, is minimize the distance between y^(i) and V(s^(i)). And that's exactly what we do: minimize the distance between y^(i) and V(s^(i)). Yes, question? Why is V not a vector of values? V is not a vector because s is continuous; we are in a continuous setting, and we are approximating this continuous function with this linear form. If s is continuous, what do you mean by V(s)? V(s) is θᵀφ(s), where φ(s) is the feature map evaluated at that s. So the way to think about this is to see the similarity with the Bellman backup operator. With the Bellman backup operator, you would set V(s^(i)) equal to R(s^(i)) plus the max over a of the expectation of V(s'). Instead, because we are in this fitted-function setting, we approximate that expectation with the Monte Carlo estimate; R(s^(i)) is just R(s^(i)); and the right-hand side evaluates to some y^(i), which in the discrete setting we would simply have written into the s^(i)-th entry of the V array. Here, instead, we want V(s^(i)) to evaluate to y^(i), or as close to y^(i) as possible, which is why we minimize the squared error between θᵀφ(s^(i)), which you should think of as V(s^(i)), and y^(i), the right-hand side of the Bellman backup operator. A few things are worth noting about how this parallels the discrete setting. There, we were learning the model and performing value iteration in an intertwined way. Here we do something similar, except not intertwined: we first learn the dynamics, the A's and B's, and then, using those learned dynamics, we perform value iteration with the Bellman backup operator, but the backup now works with fitted value functions instead of tabular ones.
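Collecting the pieces above in the lecture's own notation (this is just a restatement of the update, not new material):

q(a) = \frac{1}{k}\sum_{j=1}^{k}\Big[ R\big(s^{(i)}\big) + \gamma\, V\big(s'_j\big) \Big]
     \approx R\big(s^{(i)}\big) + \gamma\, \mathbb{E}_{s' \sim P_{s^{(i)}a}}\big[ V(s') \big]

y^{(i)} = \max_a q(a), \qquad
\theta := \arg\min_\theta \sum_{i=1}^{n} \Big( \theta^\top \phi\big(s^{(i)}\big) - y^{(i)} \Big)^2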
Another way to understand this better is to remember value iteration over discrete states s_1 through s_d. Suppose some point in the space of value functions is V*. In the discrete setting, we start with some random initialization V_0 and apply the Bellman backup operator to get V_1, and we keep getting closer and closer, where each hop is one application of the Bellman backup operator. The Bellman backup operator is a contraction mapping, so no matter where it starts, it converges to V*. But now, in this setting, the space of value functions is limited. We are restricting ourselves to only those functions that can be expressed as a linear combination of the feature maps of s, which means this whole algorithm is confined to a set: think of it as the set of all V such that V(s) = θᵀφ(s) for some θ. This family of functions cannot represent every function in the full function space, so we are limited in that sense. If we start with some value function in this set, call it V_t, and apply one iteration of fitted value iteration, it gives us targets y^(i) that may lie outside our function space. Ideally we would want V(s^(i)) to equal y^(i), that is, we would want the error to be zero, but y may be outside our family, so when we perform the minimization, we get the projection of y onto this subspace: the nearest point in our family of functions to y. That's exactly what happens in linear regression. Then we keep hopping, staying within the parametric family of V functions we have defined, and get as close as possible to V*. That is what this algorithm does for us. It does not have the convergence guarantees of the discrete-space setting. In the discrete-space setting, the Bellman backup operator takes us to the true V* after enough iterations, whereas with fitted value functions we have limited ourselves to some space, V* might be outside that space, and there is no guarantee that this algorithm converges. In practice, though, it generally tends to work well. Yes, question? Is it like V being confined to some low-dimensional subspace of a bigger space? Think of it as a function space: the set of all functions that can be written in this form is limited to some subset. Can that problem be solved by coming up with [inaudible]? Yeah, I am coming to that next. Yes, your V* can be outside the set of representable functions.
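The projection point can be checked numerically in a few lines. In this toy snippet (all numbers made up), the least-squares fit returns the member of the linear family closest to the targets y, and the residual is orthogonal to the feature columns, exactly as in linear regression:

import numpy as np

Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])            # feature matrix for 3 sampled states, 2 features
y = np.array([0.0, 1.0, 0.5])           # backup targets from one fitted iteration (made up)

theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta                     # projection of y onto the span of the features
print(Phi.T @ (y - y_hat))              # ~ [0, 0]: the residual is orthogonal to that span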
Question: is this assuming the actions are still discrete? Yes, here we are still assuming actions are discrete, because we are performing a max over the different q(a)'s. Good question. And the fact that we are minimizing this squared error means that the result, V̂(s) = θ̂ᵀφ(s), is essentially the projection of y onto the nearest point in the function family. Question: why not restrict y itself to that space? That's something I'll address next. So, we have seen two kinds of extensions to standard policy iteration and value iteration so far. One extension was for when we don't know the model: we estimate the transition probabilities P_sa from data. The other extension was to move from the discrete setting to the continuous setting, where the number of parameters can potentially be infinite, so we limit our expressiveness to a parametric family of value functions, which we call value function approximation. Once we are limited to that parametric family, we define an algorithm that is similar in spirit to value iteration. It is similar in spirit because we perform the same kind of updates, except that in value iteration we could simply set the value that V(s) needed to evaluate to, whereas here, being limited to a parametric family, we minimize the loss between what our hypothesis predicts and what the right answer should be. This basically marks the end of what is in the syllabus. Let's see how some other concepts in reinforcement learning fit in, and how you can think of concepts you may come across as extensions of what we have seen so far. In the broader reinforcement learning space, we can think of two approaches in general: approaches based on value functions, and approaches based on policies. In the simplest setting, when everything is discrete and finite, the value-function-based approach is value iteration, and the policy-based approach is policy iteration. As we move up the complexity hierarchy to infinite state spaces, we get fitted value iteration, which is the algorithm we just saw. We call both of these algorithms model-based, because we constructed explicit versions of the way we believe the model works.
It was either in the form of the P_sa transition matrices, or in the form of this linear dynamical system, where s_{t+1} is distributed normally around A s_t + B a_t. Either we constructed a dynamical-system approximation of the model, or we used the P_sa transition matrices directly in the finite-state case. However, we can also move on to model-free methods. In model-free methods, we acknowledge that we don't know how the real world works, how the dynamics of the system work, but we also say that we don't care. In model-based approaches, we acknowledged that we didn't know the model and we tried to learn it; in model-free approaches, we say we don't know how the transitions happen, and we don't care. The way we say we don't care is by coming up with what is called a Q function: think of Q(s, a) as the expected value of being in state s and taking action a. Even though it seems like a simple addition of an action, it is fundamentally very different. Here's why: suppose we had learned V*(s) but did not know the dynamics of the system, so we had somehow figured out the optimal value of being in each state. How do we then decide which action to take? There is no way, because we don't know which state we would end up in by taking a given action. So knowing V* is insufficient if you don't know the dynamics of the system. The way around that limitation is to instead learn Q functions. The Q function tells us the expected discounted sum of future rewards if we are in state s and take action a. We don't know what state we are going to end up in, but Q(s, a) tells us the expected sum of future rewards all the same: we stay agnostic about which state we will land in and directly assign a value to taking an action in a given state. These techniques are called Q-learning, and we can apply function approximation to Q-learning as well. We saw function approximation applied to value functions, and in a very similar way we can perform function approximation for a Q function. The family of functions we used for the approximation here was linear, but there is nothing stopping us from using more complex functions. This approximation of the value function by a linear function could have been something much more complex: we could have used a neural network whose input is s and whose output at the last layer is the value, and when minimizing this loss, used backpropagation to minimize with respect to the parameters of the neural network. Nothing stopped us from doing that.
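The lecture names Q-learning without writing out its update rule, so the following tabular sketch is an outside reference rather than course material; the update is the standard one, and all sizes and numbers are illustrative:

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))     # tabular Q estimates (toy problem)
alpha, gamma = 0.1, 0.95                # step size and discount (illustrative)

def q_update(Q, s, a, r, s_next):
    # Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a').
    # No transition model P_sa appears anywhere: only the observed tuple
    # (s, a, r, s') is used, which is exactly what makes the method model-free.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

q_update(Q, s=0, a=1, r=1.0, s_next=2)  # one observed transition (values made up)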
And when we use neural networks as the function approximators for either the value function or the Q function, that gives us the field of deep RL, or deep reinforcement learning. So you get deep reinforcement learning by using neural networks as approximators for your value function or your Q function. There are other classifications you can think of as well. In the finite case, the way we evaluated the value function was an exact method: apply the Bellman backup operator over and over again and you get the exact V*. There are other approaches: you can use Monte Carlo methods (not Markov chain Monte Carlo, just Monte Carlo), or you can use something called TD learning, or temporal difference learning. In Monte Carlo approaches, we take some policy and just follow it until the end, until we reach a terminating state, then calculate the discounted sum of rewards and use that as the estimate of the value function. That is, actually try it out, measure how much total value you got, and average across multiple runs. The limitation is that you have to follow the policy until things terminate. TD learning, temporal difference learning, sits somewhere between exact methods and Monte Carlo, and in fact, in practice, TD learning is what is used most commonly. What we did over here is in fact very similar in spirit to TD learning. There is one more thing I want to say. There is another line of classification you can use for reinforcement learning algorithms: on-policy versus off-policy methods. In on-policy methods, you are trying to learn the value function by staying on the given policy, whereas in off-policy methods, you are looking at data whose trajectories were obtained from some other policy, and you are trying to learn the value function of a new policy you are trying to come up with. For those of you with a more statistical background, off-policy methods in reinforcement learning have a strong connection to causal inference. In causal inference, you see some observational data and work your way backwards to decide what would have happened if some other action had been taken instead. Here, we observe a sequence of actions taken according to some policy, and we try to answer what would have happened if we had tried some other policy instead. Reinforcement learning is a vast area of machine learning; in fact, it probably has its roots mostly in the field of electrical engineering called optimal control, and I think its history is probably older than that of machine learning in general.
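As with Q-learning, the lecture points to TD learning without writing the update out; as a hedged reference sketch (an outside addition, in the usual tabular form, with illustrative numbers), a TD(0) update looks like this:

import numpy as np

V = np.zeros(5)                         # tabular value estimates for a 5-state toy problem
alpha, gamma = 0.1, 0.95                # step size and discount (illustrative)

def td0_update(V, s, r, s_next):
    # Nudge V(s) toward the one-step target r + gamma * V(s') from a single
    # observed transition: no model is needed (unlike exact methods) and no
    # full episode is needed (unlike Monte Carlo), which is the "in between".
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

td0_update(V, s=0, r=1.0, s_next=1)     # illustrative transition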
So reinforcement learning is a pretty vast field, and there are lots of ways in which machine learning comes into it. The most popular way machine learning and reinforcement learning are coming together is by using more expressive function approximation methods, such as neural networks and deep learning techniques, to approximate either the value function or the Q function. You can also use neural networks for policy-based methods such as policy iteration, though we didn't cover any of that in this course. All right, that brings us to the end of reinforcement learning, and starting Monday we will begin the next chapter, which is unsupervised learning. Have a good weekend. [NOISE]
TedEd_History
대륙들이_충돌하면_어떤_일이_일어날까_존_디_카릴로.txt
Tens of millions of years ago, a force of nature set two giant masses on an unavoidable collision course that would change the face of the Earth and spell life or death for thousands of species. The force of nature was plate tectonics, and the bodies were North and South America. And even though they were hurtling towards each other at an underwhelming 2.5 cm per year, their collision actually did have massive biological repercussions by causing one of the greatest episodes of biological migration in Earth's history: The Great American Biotic Interchange. Our story begins 65 million years ago, the beginning of the age of mammals, when what is now North and South America were continents separated by a marine connection between the Pacific and Atlantic Oceans. During this time, South America was the home of fauna that included armored glyptodonts as large as compact cars, giant ground sloths weighing more than a ton, opossums, monkeys, and carnivorous terror birds. North America had its own species, such as horses, bears, and saber-toothed cats. Over 20 million years, the shifting of the Farallon and Caribbean Plates produced the Central American Volcanic Arc, a peninsula connected to North America, with only a very narrow seaway separating it from South America. As these plates continued to surf the Earth's magma layer far beneath the Pacific Ocean floor, the Caribbean Plate migrated eastward, and about 15 million years ago, South America finally collided with this Central American Arc. This gradually closed the water connection between the Pacific and the Caribbean, creating a land bridge, which connected North America to South America. Terrestrial organisms could now cross between the two continents, and from the fossil records, it's evident that different waves of their dispersals took place. Even though plants don't physically move, they are easily dispersed by wind and waves, so they migrated first, along with a few species of birds. They were followed by some freshwater fishes and amphibians, and finally, various mammals began to traverse the bridge. From South America, mammals like ground sloths and glyptodonts were widely distributed in North America. Moreover, many South American tropical mammals, like monkeys and bats, colonized the forests of Central America, and are very abundant today. South American predator marsupials went extinct 3 million years ago, at which point North American predators, such as cats, bears and foxes, migrated south and occupied the ecological space left behind. Horses, llamas, tapirs, cougars, saber-toothed cats, gomphotheres, and later humans also headed south across the land bridge. But what happened on land is only half the story. What had been one giant ocean was now two, creating differences in temperature and salinity for the two bodies of water. The isthmus also became a barrier for many marine organisms, like mollusks, crustaceans, foraminifera, bryozoans, and fish, and separated the populations of many marine species. It also allowed the establishment of the thermohaline circulation, a global water conveyor belt, which transports warm water across the Atlantic, and influences the climate of the East Coast of North America, the West Coast of Europe, and many other areas. It's a challenge to track all of the ways the collision of the Americas changed the world, but it's safe to say that the ripples of the Great American Biotic Interchange have propagated through the history of life on the planet, and that of mankind.
What if these species hadn't gone extinct, or if there were no monkeys in Central America, or jaguars in South America? What if the thermohaline circulation wasn't flowing? Would the East Coast of North America be much colder? It all goes to show some of the most impactful transformations of our planet aren't the explosive ones that happen in an instant, but the ones that crawl towards irreversible change. We are the product of history.
TedEd_History
The_incredible_history_of_Chinas_terracotta_warriors_Megan_Campisi_and_PenPen_Chen.txt
What happens after death? Is there a restful paradise? An eternal torment? A rebirth? Or maybe just nothingness? Well, one Chinese emperor thought that whatever the hereafter was, he better bring an army. We know that because in 1974, farmers digging a well near their small village stumbled upon one of the most important finds in archeological history: vast underground chambers surrounding that emperor's tomb, and containing more than 8,000 life-size clay soldiers ready for battle. The story of the subterranean army begins with Ying Zheng, who came to power as the king of the Qin state at the age of 13 in 246 BCE. Ambitious and ruthless, he would go on to become Qin Shi Huangdi, the first emperor of China after uniting its seven warring kingdoms. His 36-year reign saw many historic accomplishments, including a universal system of weights and measures, a single standardized writing script for all of China, and a defensive barrier that would later come to be known as the Great Wall. But perhaps Qin Shi Huangdi dedicated so much effort to securing his historical legacy because he was obsessed with his mortality. He spent his last years desperately employing alchemists and deploying expeditions in search of elixirs of life that would help him achieve immortality. And as early as the first year of his reign, he began the construction of a massive underground necropolis filled with monuments, artifacts, and an army to accompany him into the next world and continue his rule. This magnificent army is still standing in precise battle formation and is split across several pits. One contains a main force of 6,000 soldiers, each weighing several hundred pounds, a second has more than 130 war chariots and over 600 horses, and a third houses the high command. An empty fourth pit suggests that the grand project could not be finished before the emperor's death. In addition, nearby chambers contain figures of musicians and acrobats, workers and government officials, and various exotic animals, indicating that Emperor Qin had more plans for the afterlife than simply waging war. All the figurines are sculpted from terracotta, or baked earth, a type of reddish-brown clay. To construct them, multiple workshops and reportedly over 720,000 laborers were commandeered by the emperor, including groups of artisans who molded each body part separately to construct statues as individual as the real warriors in the emperor's army. They stand according to rank and feature different weapons and uniforms, distinct hairstyles and expressions, and even unique ears. Originally, each warrior was painted in bright colors, but their exposure to air caused the paint to dry and flake, leaving only the terracotta base. It is for this very reason that another chamber less than a mile away has not been excavated. This is the actual tomb of Qin Shi Huangdi, reported to contain palaces, precious stones and artifacts, and even rivers of mercury flowing through mountains of bronze. But until a way can be found to expose it without damaging the treasures inside, the tomb remains sealed. Emperor Qin was not alone in wanting company for his final destination. Ancient Egyptian tombs contain clay models representing the ideal afterlife, the dead of Japan's Kofun period were buried with sculptures of horses and houses, and the graves of Jaina Island off the Mexican coast are full of ceramic figurines.
Fortunately, as ruthless as he was, Emperor Qin chose to have servants and soldiers built for this purpose, rather than sacrificing living ones to accompany him, as had been practiced in Egypt, West Africa, Anatolia, parts of North America and even China during the previous Shang and Zhou dynasties. And today, people travel from all over the world to see these stoic soldiers silently awaiting their battle orders for centuries to come.
TedEd_History
어떻게_잘못된_뉴스가_확산될_수_있는가_노아_타블린Noah_Tavlin.txt
There's a quote usually attributed to the writer Mark Twain that goes, "A lie can travel halfway around the world while the truth is putting on its shoes." Funny thing about that. There's reason to doubt that Mark Twain ever said this at all, thus, ironically, proving the point. And today, the quote, whoever said it, is truer than ever before. In previous decades, most media with global reach consisted of several major newspapers and networks which had the resources to gather information directly. Outlets like Reuters and the Associated Press that aggregate or re-report stories were relatively rare compared to today. The speed with which information spreads now has created the ideal conditions for a phenomenon known as circular reporting. This is when publication A publishes misinformation, publication B reprints it, and publication A then cites B as the source for the information. It's also considered a form of circular reporting when multiple publications report on the same initial piece of false information, which then appears to another author as having been verified by multiple sources. For instance, the 1998 publication of a single pseudoscientific paper arguing that routine vaccination of children causes autism inspired an entire antivaccination movement, despite the fact that the original paper has repeatedly been discredited by the scientific community. Deliberately unvaccinated children are now contracting contagious diseases that had been virtually eradicated in the United States, with some infections proving fatal. In a slightly less dire example, satirical articles that are formatted to resemble real ones can also be picked up by outlets not in on the joke. For example, a joke article in the reputable British Medical Journal entitled "Energy Expenditure in Adolescents Playing New Generation Computer Games," has been referenced in serious science publications over 400 times. User-generated content, such as wikis, is also a common contributor to circular reporting. As more writers come to rely on such pages for quick information, an unverified fact in a wiki page can make its way into a published article that may later be added as a citation for the very same wiki information, making it much harder to debunk. Recent advances in communication technology have had immeasurable benefits in breaking down the barriers between information and people. But our desire for quick answers may overpower the desire to be certain of their validity. And when this bias can be multiplied by billions of people around the world, nearly instantaneously, more caution is in order. Avoiding sensationalist media, searching for criticisms of suspicious information, and tracing the original source of a report can help slow down a lie, giving the truth more time to put on its shoes.
TedEd_History
매카시즘은_무엇이고_어떻게_생겨났을까요_엘렌_슈레커Ellen_Schrecker.txt
Imagine that one day, you're summoned before a government panel. Even though you haven't committed any crime, or been formally charged with one, you are repeatedly questioned about your political views, accused of disloyalty, and asked to incriminate your friends and associates. If you don't cooperate, you risk jail or losing your job. This is exactly what happened in the United States in the 1950s as part of a campaign to expose suspected communists. Named after its most notorious practitioner, the phenomenon known as McCarthyism destroyed thousands of lives and careers. For over a decade, American political leaders trampled democratic freedoms in the name of protecting them. During the 1930s and 1940s, there had been an active but small communist party in the United States. Its record was mixed. While it played crucial roles in wider progressive struggles for labor and civil rights, it also supported the Soviet Union. From the start, the American Communist Party faced attacks from conservatives and business leaders, as well as from liberals who criticized its ties to the oppressive Soviet regime. During World War II, when the USA and USSR were allied against Hitler, some American communists actually spied for the Russians. When the Cold War escalated and this espionage became known, domestic communism came to be seen as a threat to national security. But the attempt to eliminate that threat soon turned into the longest-lasting and most widespread episode of political repression in American history. Spurred on by a network of bureaucrats, politicians, journalists, and businessmen, the campaign wildly exaggerated the danger of communist subversion. The people behind it harassed anyone suspected of holding left-of-center political views or associating with those who did. If you hung modern art on your walls, had a multiracial social circle, or signed petitions against nuclear weapons, you might just have been a communist. Starting in the late 1940s, FBI Director J. Edgar Hoover used the resources of his agency to hunt down such supposed communists and eliminate them from any position of influence within American society. And the narrow criteria that Hoover and his allies used to screen federal employees spread to the rest of the country. Soon, Hollywood studios, universities, car manufacturers, and thousands of other public and private employers were imposing the same political tests on the men and women who worked for them. Meanwhile, Congress conducted its own witch hunt, subpoenaing hundreds of people to testify before investigative bodies like the House Un-American Activities Committee. If they refused to cooperate, they could be jailed for contempt, or more commonly, fired and blacklisted. Ambitious politicians, like Richard Nixon and Joseph McCarthy, used such hearings as a partisan weapon, accusing Democrats of being soft on communism and deliberately losing China to the Communist Bloc. McCarthy, a Republican senator from Wisconsin, became notorious by flaunting ever-changing lists of alleged communists within the State Department. Egged on by other politicians, he continued to make outrageous accusations while distorting or fabricating evidence. Many citizens reviled McCarthy while others praised him. And when the Korean War broke out, McCarthy seemed vindicated. Once he became chair of the Senate's permanent subcommittee on investigations in 1953, McCarthy's recklessness increased.
It was his investigation of the army that finally turned public opinion against him and diminished his power. McCarthy's colleagues in the Senate censured him and he died less than three years later, probably from alcoholism. McCarthyism ended as well. It had ruined hundreds, if not thousands, of lives and drastically narrowed the American political spectrum. Its damage to democratic institutions would be long-lasting. In all likelihood, there were both Democrats and Republicans who knew that the anti-communist purges were deeply unjust but feared that directly opposing them would hurt their careers. Even the Supreme Court failed to stop the witch hunt, condoning serious violations of constitutional rights in the name of national security. Was domestic communism an actual threat to the American government? Perhaps, though a small one. But the reaction to it was so extreme that it caused far more damage than the threat itself. And if new demagogues appeared in uncertain times to attack unpopular minorities in the name of patriotism, could it all happen again?
TedEd_History
왕좌의_게임에_영감을_주었던_전쟁ㅣ알렉스_젠들러Alex_Gendler.txt
As far as we know, Medieval England was never invaded by ice zombies, or terrorized by dragons, but it was shaken by a power struggle between two noble families spanning generations and involving a massive cast of characters with complex motives and shifting loyalties. If that sounds familiar, it's because the historical conflicts known as the Wars of the Roses served as the basis for much of the drama in Game of Thrones. The real-life seeds of war were sown by the death of King Edward III in 1377. Edward's oldest son had died before his father, but his ten-year-old son, Richard II, succeeded to the throne ahead of Edward's three surviving sons. This skipping of an entire generation left lingering claims to the throne among their various offspring, particularly the Lancasters, descended from Edward's third son, and the Yorks, descended from his fourth son. The name of the ensuing wars comes from the symbols associated with the two families, the white rose of York and the red rose of Lancaster. The Lancasters first gained the throne when Richard II was deposed by his cousin Henry IV in 1399. Despite sporadic unrest, their reign remained secure until 1422, when Henry V's death in a military campaign left an infant Henry VI as king. Weak-willed and dominated by advisors, Henry was eventually convinced to marry Margaret of Anjou to gain French support. Margaret was beautiful, ambitious, and ruthless in persecuting any threat to her power, and she distrusted Richard of York, most of all. York had been the King's close advisor and loyal general, but was increasingly sidelined by the Queen, who promoted her favorite supporters, like the Earls of Suffolk and Somerset. York's criticism of their inept handling of the war against France led to his exclusion from court and transfer to Ireland. Meanwhile, mounting military failures, and corrupt rule by Margaret and her allies caused widespread discontent, and in the midst of this chaos, Richard of York returned with an army to arrest Somerset and reform the court. Initially unsuccessful, he soon got his chance when he was appointed Protector of the Realm after Henry suffered a mental breakdown. However, less than a year later, Henry suddenly recovered and the Queen convinced him to reverse York's reforms. York fled and raised an army once more. Though he was unable to directly seize the throne, he managed to be reinstated as Protector and have himself and his heirs designated to succeed Henry. But instead of a crown, York's head acquired a pike after he was killed in battle with the Queen's loyalists. His young son took up the claim and was crowned Edward IV. Edward enjoyed great military success against the Lancasters. Henry was captured, while Margaret fled into exile with their reportedly cruel son, Edward of Westminster. But the newly crowned King made a tragic political mistake by backing out of his arranged marriage with a French Princess to secretly marry the widow of a minor Noble. This alienated his most powerful ally, the Earl of Warwick. Warwick allied with the Lancasters, turned Edward's jealous younger brother, George, against him, and even briefly managed to restore Henry as King, but it didn't last. Edward recaptured the throne, the Lancaster Prince was killed in battle, and Henry himself died in captivity not long after. The rest of Edward IV's reign was peaceful, but upon his death in 1483, the bloodshed resumed.
Though his twelve-year-old son was due to succeed him, Edward's younger brother Richard III declared his nephews illegitimate due to their father's secret marriage. He assumed the regency himself and threw the boys in prison. Though no one knows what ultimately became of them, after a while, the Princes disappeared and Richard's power seemed secure. But his downfall would come only two years later from across the narrow sea of the English Channel. Henry Tudor was a direct descendant of the first Duke of Lancaster, raised in exile after his father's death in a previous rebellion. With Richard III's power grab causing a split in the York faction, Henry won support for his royal claim. Raising an army in France, he crossed the Channel in 1485 and quickly defeated Richard's forces. And by marrying Elizabeth of York, elder sister of the disappeared Princes, the newly crowned Henry VII joined the two roses, finally ending nearly a century of war. We often think of historical wars as decisive conflicts with clearly defined winners and losers. But the Wars of the Roses, like the fiction they inspired, show us that victories can be uncertain, alliances unstable, and even the power of Kings as fleeting as the seasons.
TedEd_History
히틀러는_어떻게_힘을_얻을_수_있었을까_알렉스_겐들러_안토니_하자드Alex_Gendler_Anthony_Hazard.txt
How did Adolf Hitler, a tyrant who orchestrated one of the largest genocides in human history, rise to power in a democratic country? The story begins at the end of World War I. With the successful Allied advance in 1918, Germany realized the war was unwinnable and signed an armistice ending the fighting. As its imperial government collapsed, civil unrest and worker strikes spread across the nation. Fearing a Communist revolution, major parties joined to suppress the uprisings, establishing the parliamentary Weimar Republic. One of the new government's first tasks was implementing the peace treaty imposed by the Allies. In addition to losing over a tenth of its territory and dismantling its army, Germany had to accept full responsibility for the war and pay reparations, debilitating its already weakened economy. All this was seen as a humiliation by many nationalists and veterans. They wrongly believed the war could have been won if the army hadn't been betrayed by politicians and protesters. For Hitler, these views became an obsession, and his bigotry and paranoid delusions led him to pin the blame on Jews. His words found resonance in a society with many anti-Semitic people. By this time, hundreds of thousands of Jews had integrated into German society, but many Germans continued to perceive them as outsiders. After World War I, Jewish success led to ungrounded accusations of subversion and war profiteering. It cannot be stressed enough that these conspiracy theories were born out of fear, anger, and bigotry, not fact. Nonetheless, Hitler found success with them. When he joined a small nationalist political party, his manipulative public speaking launched him into its leadership and drew increasingly larger crowds. Combining anti-Semitism with populist resentment, the Nazis denounced both Communism and Capitalism as international Jewish conspiracies to destroy Germany. The Nazi party was not initially popular. After they made an unsuccessful attempt at overthrowing the government, the party was banned, and Hitler jailed for treason. But upon his release about a year later, he immediately began to rebuild the movement. And then, in 1929, the Great Depression happened. It led to American banks withdrawing their loans from Germany, and the already struggling German economy collapsed overnight. Hitler took advantage of the people's anger, offering them convenient scapegoats and a promise to restore Germany's former greatness. Mainstream parties proved unable to handle the crisis while left-wing opposition was too fragmented by internal squabbles. And so some of the frustrated public flocked to the Nazis, increasing their parliamentary votes from under 3% to over 18% in just two years. In 1932, Hitler ran for president, losing the election to decorated war hero General von Hindenburg. But with 36% of the vote, Hitler had demonstrated the extent of his support. The following year, advisors and business leaders convinced Hindenburg to appoint Hitler as Chancellor, hoping to channel his popularity for their own goals. Though the Chancellor was only the administrative head of parliament, Hitler steadily expanded the power of his position, while his supporters formed paramilitary groups and fought protestors in the streets. Hitler raised fears of a Communist uprising and argued that only he could restore law and order. Then in 1933, a young worker was convicted of setting fire to the parliament building. Hitler used the event to convince the government to grant him emergency powers.
Within a matter of months, freedom of the press was abolished, other parties were disbanded, and anti-Jewish laws were passed. Many of Hitler's early radical supporters were arrested and executed, along with potential rivals, and when President Hindenburg died in August 1934, it was clear there would be no new election. Disturbingly, many of Hitler's early measures didn't require mass repression. His speeches exploited people's fear and ire to drive their support behind him and the Nazi party. Meanwhile, businessmen and intellectuals, wanting to be on the right side of public opinion, endorsed Hitler. They assured themselves and each other that his more extreme rhetoric was only for show. Decades later, Hitler's rise remains a warning of how fragile democratic institutions can be in the face of angry crowds and a leader willing to feed their anger and exploit their fears.
TedEd_History
What_makes_the_Great_Wall_of_China_so_extraordinary_Megan_Campisi_and_PenPen_Chen.txt
A 13,000-mile dragon of earth and stone winds its way through the countryside of China with a history almost as long and serpentine as the structure. The Great Wall began as multiple walls of rammed earth built by individual feudal states during the Chunqiu period to protect against nomadic raiders north of China and each other. When Emperor Qin Shi Huang unified the states in 221 BCE, the Tibetan Plateau and Pacific Ocean became natural barriers, but the mountains in the north remained vulnerable to Mongol, Turkish, and Xiongnu invasions. To defend against them, the Emperor expanded the small walls built by his predecessors, connecting some and fortifying others. As the structures grew from Lintao in the west to Liaodong in the east, they collectively became known as The Long Wall. To accomplish this task, the Emperor enlisted soldiers and commoners, not always voluntarily. Of the hundreds of thousands of builders recorded during the Qin Dynasty, many were forcibly conscripted peasants and others were criminals serving out sentences. Under the Han Dynasty, the wall grew longer still, reaching 3,700 miles, and spanning from Dunhuang to the Bohai Sea. Forced labor continued under the Han Emperor Wudi, and the wall's reputation grew into a notorious place of suffering. Poems and legends of the time told of laborers buried in nearby mass graves, or even within the wall itself. And while no human remains have been found inside, grave pits do indicate that many workers died from accidents, hunger and exhaustion. The wall was formidable but not invincible. Both Genghis Khan and his grandson Kublai Khan managed to surmount the wall during the Mongol invasion of the 13th Century. After the Ming dynasty gained control in 1368, they began to refortify and further consolidate the wall using bricks and stones from local kilns. Averaging 23 feet high and 21 feet wide, the wall's 5,500 miles were punctuated by watchtowers. When raiders were sighted, fire and smoke signals traveled between towers until reinforcements arrived. Small openings along the wall let archers fire on invaders, while larger ones were used to drop stones and more. But even this new and improved wall was not enough. In 1644, northern Manchu clans overthrew the Ming to establish the Qing dynasty, incorporating Mongolia as well. Thus, for the second time, China was ruled by the very people the wall had tried to keep out. With the empire's borders now extending beyond the Great Wall, the fortifications lost their purpose. And without regular reinforcement, the wall fell into disrepair, rammed earth eroded, while brick and stone were plundered for building materials. But its job wasn't finished. During World War II, China used sections for defense against Japanese invasion, and some parts are still rumored to be used for military training. But the Wall's main purpose today is cultural. As one of the largest man-made structures on Earth, it was granted UNESCO World Heritage Status in 1987. Originally built to keep people out of China, the Great Wall now welcomes millions of visitors each year. In fact, the influx of tourists has caused the wall to deteriorate, leading the Chinese government to launch preservation initiatives. It's also often acclaimed as the only man-made structure visible from space. Unfortunately, that's not at all true. In low Earth orbit, all sorts of structures, like bridges, highways and airports are visible, and the Great Wall is only barely discernible. From the moon, it doesn't stand a chance.
But regardless, it's the Earth we should be studying it from because new sections are still discovered every few years, branching off from the main body and expanding this remarkable monument to human achievement.
TedEd_History
기록으로_함께_역사를_만듭시다_StoryCorps_TED_Prize.txt
StoryCorps Founder & TED Prize winner Dave Isay has created an app that aims to bring people together in a project of listening, connection, and generosity. Here's why... This is the library of lost stories. It's where you'll find the true origins of the Sphinx and Stonehenge, the texts lost in the fire of Alexandria, all of the great ideas that Einstein never thought to write down, the dream you can't quite remember from last night, your ancestor's first words, her last words. And it's the fastest growing library in the world. In the next year, 25 languages will be added to the collection, never to be heard aloud again, 50 million points of view never related, the last eyewitness account of an incredible act of athleticism, disobedience, courage, unread, unheard, unwatched. But this is the StoryCorps archive at the Library of Congress where everything recorded by StoryCorps is preserved for posterity. This is where, if you record your parents, your grandparents, your neighbors, your children, their stories will live on. What if Anne Frank hadn't kept her diary? What if no one could listen to Martin Luther King's Mountaintop speech? What if the camera hadn't been rolling during the first moon landing? But what if, this Thanksgiving, the youngest member of every family interviewed the oldest? "It's like the only thing on his mind was to tell the kids that he loved them." Or if on February 14th, you asked a person you love some questions you've never thought to ask. "Being married is like having a color television set, you never want to go back to black and white." History is all of these things, the testament to tragedy, the progress of civilization, the heroic triumphs, and the moments and stories that are our lives. It's also the act of actively listening to the voices of the past and the people who matter to us. "Grand Central Station, now, we know there's an architect, but who hung the iron? Who were the brick masons? Who swept the floor? Who kept the trains going? We shall begin celebrating the lives of the uncelebrated." So you can make history by recording it.
TedEd_History
선물경제란_알렉스_젠들러_Alex_Gendler.txt
This holiday season, people around the world will give and receive presents. You might even get a knitted sweater from an aunt. But what if instead of saying "thanks" before consigning it to the closet, the polite response expected from you was to show up to her house in a week with a better gift? Or to vote for her in the town election? Or let her adopt your firstborn child? All of these things might not sound so strange if you are involved in a gift economy. This phrase might seem contradictory. After all, isn't a gift given for free? But in a gift economy, gifts given without explicit conditions are used to foster a system of social ties and obligations. While the market economies we know are formed by relationships between the things being traded, a gift economy consists of the relationships between the people doing the trading. Gift economies have existed throughout human history. The first studies of the concept came from anthropologists Bronislaw Malinowski and Marcel Mauss, who described the natives of the Trobriand islands making dangerous canoe journeys across miles of ocean to exchange shell necklaces and arm bands. The items traded through this process, known as the kula ring, have no practical use, but derive importance from their original owners and carry an obligation to continue the exchange. Other gift economies may involve useful items, such as the potlatch feast of the Pacific Northwest, where chiefs compete for prestige by giving away livestock and blankets. We might say that instead of accumulating material wealth, participants in a gift economy use it to accumulate social wealth. Though some instances of gift economies may resemble barter, the difference is that the original gift is given without any preconditions or haggling. Instead, the social norm of reciprocity obligates recipients to voluntarily return the favor. But the rules for how and when to do so vary between cultures, and the return on a gift can take many forms. A powerful chief giving livestock to a poor man may not expect goods in return, but gains social prestige at the debtor's expense. And among the Toraja people of Indonesia, the status gained from gift ceremonies even determines land ownership. The key is to keep the gift cycle going, with someone always indebted to someone else. Repaying a gift immediately, or with something of exactly equal value, may be read as ending the social relationship. So, are gift economies exclusive to small-scale societies outside the industrialized world? Not quite. For one thing, even in these cultures, gift economies function alongside a market system for other exchanges. And when we think about it, parts of our own societies work in similar ways. Communal spaces, such as Burning Man, operate as a mix of barter and a gift economy, where selling things for money is strictly taboo. In art and technology, gift economies are emerging as an alternative to intellectual property where artists, musicians, and open-source developers distribute their creative works, not for financial profit, but to raise their social profile or establish their community role. And even potluck dinners and holiday gift traditions involve some degree of reciprocity and social norms. We might wonder if a gift is truly a gift if it comes with obligations or involves some social payoff. But this is missing the point. Our idea of a free gift without social obligations prevails only if we already think of everything in market terms.
And in a commercialized world, the idea of strengthening bonds through giving and reciprocity may not be such a bad thing, wherever you may live.
TedEd_History
북아메리카_탄생_배경_피터_J_하프로프_Peter_J_Haproff.txt
The geography of our planet is in flux. Each continent has ricocheted around the globe on one or more tectonic plates, changing quite dramatically with time. Today, we'll focus on North America and how its familiar landscape and features emerged over hundreds of millions of years. Our story begins about 750 million years ago. As the super continent Rodinia becomes unstable, it rifts along what's now the west coast of North America to create the Panthalassa Ocean. You're seeing an ancestral continent called Laurentia, which grows over the next few hundred million years as island chains collide with it and add land mass. We're now at 400 million years ago. Off today's east coast, the massive African plate inches westward, closing the ancient Iapetus Ocean. It finally collides with Laurentia at 250 million years to form another supercontinent Pangea. The immense pressure causes faulting and folding, stacking up rock to form the Appalachian Mountains. Let's fast forward a bit. About 100 million years later, Pangea breaks apart, opening the Southern Atlantic Ocean between the new North American Plate and the African Plate. We forge ahead, and now the eastward-moving Farallon Plate converges with the present-day west coast. The Farallon Plate's greater density makes it sink beneath North America. This is called subduction, and it diffuses water into the magma-filled mantle. That lowers the magma's melting point and makes it rise into the overlying North American plate. From a subterranean chamber, the magma travels upwards and erupts along a chain of volcanos. Magma still deep underground slowly cools, crystallizing to form solid rock, including the granite now found in Yosemite National Park and the Sierra Nevada Mountains. We'll come back to that later. Now, it's 85 million years ago. The Farallon Plate becomes less steep, causing volcanism to stretch eastward and eventually cease. As the Farallon Plate subducts, it compresses North America, thrusting up mountain ranges like the Rockies, which extend over 3,000 miles. Soon after, the Eurasian Plate rifts from North America, opening the North Atlantic Ocean. We'll fast forward again. The Colorado Plateau now uplifts, likely due to a combination of upward mantle flow and a thickened North American Plate. In future millennia, the Colorado River will eventually sculpt the plateau into the epic Grand Canyon. 30 million years ago, the majority of the Farallon Plate sinks into the mantle, leaving behind only small corners still subducting. The Pacific and North American plates converge and a new boundary called the San Andreas Fault forms. Here, North America moves to the south, sliding against the Pacific Plate, which shifts to the north. This plate boundary still exists today, and moves about 30 millimeters per year, capable of causing devastating earthquakes. The San Andreas also pulls apart western North America across a wide rift zone. This extensional region is called the Basin and Range Province, and through uplift and erosion, is responsible for exposing the once deep granite of Yosemite and the Sierra Nevada. Another 15 million years off the clock, and magma from the mantle burns a giant hole into western North America, periodically erupting onto the surface. Today, this hotspot feeds an active supervolcano beneath Yellowstone National Park. It hasn't erupted in the last 174,000 years, but if it did, its sheer force could blanket most of the continent with ash that would blacken the skies and threaten humanity.
The Yellowstone supervolcano is just one reminder that the Earth continues to seethe below our feet. Its mobile plates put the planet in a state of constant flux. In another few hundred million years, who knows how the landscape of North America will have changed. As the continent slowly morphs into something unfamiliar, only geological time will tell.
TedEd_History
셰익스피어의_작품은_정말_그의_작품일까.txt
"Some are born great, some achieve greatness, and others have greatness thrust upon them", quoth William Shakespeare. Or did he? Some people question whether Shakespeare really wrote the works that bear his name, or whether he even existed at all. They speculate that Shakespeare was a pseudonym for another writer, or a group of writers. Proposed candidates for the real Shakespeare include other famous playwrights, politicians and even some prominent women. Could it be true that the greatest writer in the English language was as fictional as his plays? Most Shakespeare scholars dismiss these theories based on historical and biographical evidence. But there is another way to test whether Shakespeare's famous lines were actually written by someone else. Linguistics, the study of language, can tell us a great deal about the way we speak and write by examining syntax, grammar, semantics and vocabulary. And in the late 1800s, a Polish philosopher named Wincenty Lutosławski formalized a method known as stylometry, applying this knowledge to investigate questions of literary authorship. So how does stylometry work? The idea is that each writer's style has certain characteristics that remain fairly uniform among individual works. Examples of characteristics include average sentence length, the arrangement of words, and even the number of occurrences of a particular word. Let's look at use of the word thee and visualize it as a dimension, or axis. Each of Shakespeare's works can be placed on that axis, like a data point, based on the number of occurrences of that word. In statistics, the tightness of these points gives us what is known as the variance, an expected range for our data. But, this is only a single characteristic in a very high-dimensional space. With a clustering tool called Principal Component Analysis, we can reduce the multidimensional space into simple principal components that collectively measure the variance in Shakespeare's works. We can then test the works of our candidates against those principal components. For example, if enough works of Francis Bacon fall within the Shakespearean variance, that would be pretty strong evidence that Francis Bacon and Shakespeare are actually the same person. What did the results show? Well, the stylometrists who carried this out have concluded that Shakespeare is none other than Shakespeare. The Bard is the Bard. The pretender's works just don't match up with Shakespeare's signature style. However, our intrepid statisticians did find some compelling evidence of collaborations. For instance, one recent study concluded that Shakespeare worked with playwright Christopher Marlowe on "Henry VI," parts one and two. Shakespeare's identity is only one of the many problems stylometry can resolve. It can help us determine when a work was written, whether an ancient text is a forgery, whether a student has committed plagiarism, or if that email you just received is of a high priority or spam. And does the timeless poetry of Shakespeare's lines just boil down to numbers and statistics? Not quite. Stylometric analysis may reveal what makes Shakespeare's works structurally distinct, but it cannot capture the beauty of the sentiments and emotions they express, or why they affect us the way they do. At least, not yet.
TedEd_History
What_are_the_universal_human_rights_Benedetta_Berti.txt
The idea of human rights is that each one of us, no matter who we are or where we are born, is entitled to the same basic rights and freedoms. Human rights are not privileges, and they cannot be granted or revoked. They are inalienable and universal. That may sound straightforward enough, but it gets incredibly complicated as soon as anyone tries to put the idea into practice. What exactly are the basic human rights? Who gets to pick them? Who enforces them, and how? The history behind the concept of human rights is a long one. Throughout the centuries and across societies, religions, and cultures we have struggled with defining notions of rightfulness, justice, and rights. But one of the most modern affirmations of universal human rights emerged from the ruins of World War II with the creation of the United Nations. The treaty that established the UN lists among its purposes the reaffirmation of faith in fundamental human rights. And with the same spirit, in 1948, the UN General Assembly adopted the Universal Declaration of Human Rights. This document, written by an international committee chaired by Eleanor Roosevelt, lays the basis for modern international human rights law. The declaration is based on the principle that all human beings are born free and equal in dignity and rights. It lists 30 articles recognizing, among other things, the principle of nondiscrimination and the right to life and liberty. It refers to negative freedoms, like the freedom from torture or slavery, as well as positive freedoms, such as the freedom of movement and residence. It encompasses basic civil and political rights, such as freedom of expression, religion, or peaceful assembly, as well as social, economic, and cultural rights, such as the right to education and the right to freely choose one's occupation and be paid and treated fairly. The declaration takes no sides as to which rights are more important, insisting on their universality, indivisibility, and interdependence. And in the past decades, international human rights law has grown, deepening and expanding our understanding of what human rights are, and how to better protect them. So if these principles are so well-developed, then why are human rights abused and ignored time and time again all over the world? The problem in general is that it is not at all easy to universally enforce these rights or to punish transgressors. The UDHR itself, despite being highly authoritative and respected, is a declaration, not a hard law. So when individual countries violate it, the mechanisms to address those violations are weak. For example, the main bodies within the UN in charge of protecting human rights mostly monitor and investigate violations, but they cannot force states to, say, change a policy or compensate a victim. That's why some critics say it's naive to consider human rights a given in a world where state interests wield so much power. Critics also question the universality of human rights and emphasize that their development has been heavily guided by a small number of mostly Western nations to the detriment of inclusiveness. The result? A general bias in favor of civil and political liberties over sociopolitical rights and of individual over collective or group rights. Others defend universal human rights laws and point to the positive role they play in setting international standards and helping activists in their campaigns. They also point out that not all international human rights instruments are powerless. 
For example, the European Convention on Human Rights establishes a court where the 47 member countries and their citizens can bring cases. The court issues binding decisions that each member state must comply with. Human rights law is constantly evolving as are our views and definitions of what the basic human rights should be. For example, how basic or important is the right to democracy or to development? And as our lives are increasingly digital, should there be a right to access the Internet? A right to digital privacy? What do you think?
TedEd_History
여성_탐험가들의_공헌_코트니_스티븐Courtney_Stephens.txt
Nowadays, we take curiosity for granted. We believe that if we put in the hard work, we might one day stand before the pyramids, discover a new species of flower, or even go to the moon. But, in the 18th and 19th century, female eyes gazed out windows at a world they were unlikely to ever explore. Life for women in the time of Queen Victoria was largely relegated to house chores and gossip. And, although they devoured books on exotic travel, most would never leave the places in which they were born. However, there were a few Victorian women, who, through privilege, endurance, and not taking "no" for an answer, did set sail for wilder shores. In 1860, Marianne North, an amateur gardener and painter, crossed the ocean to America with letters of introduction, an easel, and a love of flowers. She went on to travel to Jamaica, Peru, Japan, India, Australia. In fact, she went to every continent except Antarctica in pursuit of new flowers to paint. "I was overwhelmed with the amount of subjects to be painted," she wrote. "The hills were marvelously blue, piled one over the other beyond them. I never saw such abundance of pure color." With no planes or automobiles and rarely a paved street, North rode donkeys, scaled cliffs, and crossed swamps to reach the plants she wanted. And all this in the customary dress of her day, floor-length gowns. As photography had not yet been perfected, Marianne's paintings gave botanists back in Europe their first glimpses of some of the world's most unusual plants, like the giant pitcher plant of Borneo, the African torch lily, and the many other species named for her as she was the first European to catalog them in the wild. Meanwhile, back in London, Miss Mary Kingsley was the sheltered daughter of a traveling doctor who loved hearing her father's tales of native customs in Africa. Midway through writing a book on the subject, her father fell ill and died. So, Kingsley decided she would finish the book for him. Peers of her father advised her not to go, showing her maps of tropical diseases, but she went anyhow, landing in modern-day Sierra Leone in 1896 with two large suitcases and a phrase book. Traveling into the jungle, she was able to confirm the existence of a then-mythical creature, the gorilla. She recalls fighting with crocodiles, being caught in a tornado, and tickling a hippopotamus with her umbrella so that he'd leave the side of her canoe. Falling into a spiky pit, she was saved from harm by her thick petticoat. "A good snake properly cooked is one of the best meals one gets out here," she wrote. Think Indiana Jones was resourceful? Kingsley could out-survive him any day! But when it comes to breaking rules, perhaps no female traveler was as daring as Alexandra David-Neel. Alexandra, who had studied Eastern religions at home in France, wanted desperately to prove herself to Parisian scholars of the day, all of whom were men. She decided the only way to be taken seriously was to visit the fabled city of Lhasa in the mountains of Tibet. "People will have to say, 'This woman lived among the things she's talking about. She touched them and she saw them alive,'" she wrote. When she arrived at the border from India, she was forbidden to cross. So, she disguised herself as a Tibetan man. Dressed in a yak fur coat and a necklace of carved skulls, she hiked through the barren Himalayas all the way to Lhasa, where she was subsequently arrested. 
She learned that the harder the journey, the better the story, and went on to write many books on Tibetan religion, which not only made a splash back in Paris but remain important today. These brave women, and others like them, went all over the world to prove that the desire to see for oneself not only changes the course of human knowledge, it changes the very idea of what is possible. They used the power of curiosity to try and understand the viewpoints and peculiarities of other places, perhaps because they, themselves, were seen as so unusual in their own societies. But their journeys revealed to them something more than the ways of foreign lands, they revealed something only they, themselves, could find: a sense of their own self.
TedEd_History
What_does_it_mean_to_be_a_refugee_Benedetta_Berti_and_Evelien_Borgman.txt
Around the globe, there are approximately 60 million people who have been forced to leave their homes to escape war, violence, and persecution. The majority of them have become internally displaced persons, which means they have fled their homes but are still within their own countries. Others have crossed a border and sought shelter outside of their own countries. They are commonly referred to as refugees. But what exactly does that term mean? The world has known refugees for millennia, but the modern definition was drafted in the UN's 1951 Convention Relating to the Status of Refugees in response to mass persecutions and displacements of the Second World War. It defines a refugee as someone who is outside their country of nationality, and is unable to return to their home country because of well-founded fears of being persecuted. That persecution may be due to their race, religion, nationality, membership in a particular social group, or political opinion, and is often related to war and violence. Today, roughly half the world's refugees are children, some of them unaccompanied by an adult, a situation that makes them especially vulnerable to child labor or sexual exploitation. Each refugee's story is different, and many must undergo dangerous journeys with uncertain outcomes. But before we get to what their journeys involve, let's clear one thing up. There's a lot of confusion regarding the difference between the terms "migrant" and "refugee." "Migrants" usually refers to people who leave their country for reasons not related to persecution, such as searching for better economic opportunities or leaving drought-stricken areas in search of better circumstances. There are many people around the world who have been displaced because of natural disasters, food insecurity, and other hardships, but international law, rightly or wrongly, only recognizes those fleeing conflict and violence as refugees. So what happens when someone flees their country? Most refugee journeys are long and perilous with limited access to shelter, water, or food. Since the departure can be sudden and unexpected, belongings might be left behind, and people who are evading conflict often do not have the required documents, like visas, to board airplanes and legally enter other countries. Financial and political factors can also prevent them from traveling by standard routes. This means they can usually only travel by land or sea, and may need to entrust their lives to smugglers to help them cross borders. Whereas some people seek safety with their families, others attempt passage alone and leave their loved ones behind with the hopes of being reunited later. This separation can be traumatic and unbearably long. While more than half the world's refugees are in cities, sometimes the first stop for a person fleeing conflict is a refugee camp, usually run by the United Nations Refugee Agency or local governments. Refugee camps are intended to be temporary structures, offering short-term shelter until inhabitants can safely return home, be integrated into the host country, or resettle in another country. But resettlement and long-term integration options are often limited. So many refugees are left with no choice but to remain in camps for years and sometimes even decades. Once in a new country, the first legal step for a displaced person is to apply for asylum. At this point, they are an asylum seeker and not officially recognized as a refugee until the application has been accepted. 
While countries by and large agree on one definition of refugee, every host country is responsible for examining all requests for asylum and deciding whether applicants can be granted the status of refugee. Different countries' guidelines can vary substantially. Host countries have several duties towards people they have recognized as refugees, like the guarantee of a minimum standard of treatment and non-discrimination. The most basic obligation towards refugees is non-refoulement, a principle preventing a nation from sending an individual to a country where their life and freedom are threatened. In reality, however, refugees are frequently the victims of inconsistent and discriminatory treatment. They're increasingly obliged to rebuild their lives in the face of xenophobia and racism. And all too often, they aren't permitted to enter the work force and are fully dependent on humanitarian aid. In addition, far too many refugee children are out of school due to lack of funding for education programs. If you go back in your own family history, chances are you will discover that at a certain point, your ancestors were forced from their homes, either escaping a war or fleeing discrimination and persecution. It would do us well to remember their stories when we hear of refugees currently displaced, searching for a new home.
TedEd_History
비단길_역사_최초의_세계교역망_샤논_헤리스_카스텔로Shannon_Harris_Castelo.txt
A banker in London sends the latest stock info to his colleagues in Hong Kong in less than a second. With a single click, a customer in New York orders electronics made in Beijing, transported across the ocean within days by cargo plane or container ship. The speed and volume at which goods and information move across the world today is unprecedented in history. But global exchange itself is older than we think, reaching back over 2,000 years along a 5,000 mile stretch known as the Silk Road. The Silk Road wasn't actually a single road, but a network of multiple routes that gradually emerged over centuries, connecting to various settlements and to each other thread by thread. The first agricultural civilizations were isolated places in fertile river valleys, their travel impeded by surrounding geography and fear of the unknown. But as they grew, they found that the arid deserts and steppes on their borders were inhabited, not by the demons of folklore, but nomadic tribes on horseback. The Scythians, who ranged from Hungary to Mongolia, had come in contact with the civilizations of Greece, Egypt, India and China. These encounters were often less than peaceful. But even through raids and warfare, as well as trade and protection of traveling merchants in exchange for tariffs, the nomads began to spread goods, ideas and technologies between cultures with no direct contact. One of the most important strands of this growing web was the Persian Royal Road, completed by Darius the First in the 5th century BCE. Stretching nearly 2,000 miles from the Tigris River to the Aegean Sea, its regular relay points allowed goods and messages to travel in nearly 1/10 the time it would take a single traveler. With Alexander the Great's conquest of Persia, and expansion into Central Asia through capturing cities like Samarkand, and establishing new ones like Alexandria Eschate, the network of Greek, Egyptian, Persian and Indian culture and trade extended farther east than ever before, laying the foundations for a bridge between China and the West. This was realized in the 2nd century BCE, when an ambassador named Zhang Qian, sent to negotiate with nomads in the West, returned to the Han Emperor with tales of sophisticated civilizations, prosperous trade and exotic goods beyond the western borders. Ambassadors and merchants were sent towards Persia and India to trade silk and jade for horses and cotton, along with armies to secure their passage. Eastern and western routes gradually linked together into an integrated system spanning Eurasia, enabling cultural and commercial exchange farther than ever before. Chinese goods made their way to Rome, causing an outflow of gold that led to a ban on silk, while Roman glassware was highly prized in China. Military expeditions in Central Asia also saw encounters between Chinese and Roman soldiers, possibly even transmitting crossbow technology to the Western world. Demand for exotic and foreign goods, and the profits they brought, kept the strands of the Silk Road intact, even as the Roman Empire disintegrated and Chinese dynasties rose and fell. Even Mongolian hordes, known for pillage and plunder, actively protected the trade routes, rather than disrupting them. But along with commodities, these routes also enabled the movement of traditions, innovations, ideologies and languages. Originating in India, Buddhism migrated to China and Japan to become the dominant religion there. 
Islam spread from the Arabian Peninsula into South Asia, blending with native beliefs and leading to new faiths, like Sikhism. And gunpowder made its way from China to the Middle East, forging the futures of the Ottoman, Safavid and Mughal Empires. In a way, the Silk Road's success led to its own demise as new maritime technologies, like the magnetic compass, found their way to Europe, making long land routes obsolete. Meanwhile, the collapse of Mongol rule was followed by China's withdrawal from international trade. But even though the old routes and networks did not last, they had changed the world forever and there was no going back. The European search for new maritime routes to the riches they knew awaited in East Asia led to the Age of Exploration and expansion into Africa and the Americas. Today, global interconnectedness shapes our lives like never before. Canadian shoppers buy t-shirts made in Bangladesh, Japanese audiences watch British television shows, and Tunisians use American software to launch a revolution. The impact of globalization on culture and economy is indisputable. But whatever its benefits and drawbacks, it is far from a new phenomenon. And though the mountains, deserts and oceans that once separated us are now circumvented through supersonic vehicles, cross-continental communication cables, and signals beamed through space rather than caravans traveling for months, none of it would have been possible without the pioneering cultures whose efforts created the Silk Road: history's first world wide web.
TedEd_History
Who_am_I_A_philosophical_inquiry_Amy_Adkins.txt
Throughout the history of mankind, three little words have sent poets to the blank page, philosophers to the Agora, and seekers to the oracles: "Who am I?" From the ancient Greek aphorism inscribed on the Temple of Apollo, "Know thyself," to The Who's rock anthem, "Who Are You?" philosophers, psychologists, academics, scientists, artists, theologians and politicians have all tackled the subject of identity. Their hypotheses are widely varied and lack significant consensus. These are smart, creative people, so what's so hard about coming up with the right answer? One challenge certainly lies with the complex concept of the persistence of identity. Which you is who? The person you are today? Five years ago? Who you'll be in 50 years? And when is "am"? This week? Today? This hour? This second? And which aspect of you is "I"? Are you your physical body? Your thoughts and feelings? Your actions? These murky waters of abstract logic are tricky to navigate, and so it's probably fitting that to demonstrate the complexity, the Greek historian Plutarch used the story of a ship. As the tale goes, Theseus, the mythical founder King of Athens, single-handedly slew the evil Minotaur at Crete, then returned home on a ship. To honor this heroic feat, for 1000 years Athenians painstakingly maintained his ship in the harbor, and annually reenacted his voyage. Whenever a part of the ship was worn or damaged, it was replaced with an identical piece of the same material until, at some point, no original parts remained. Plutarch noted the Ship of Theseus was an example of the philosophical paradox revolving around the persistence of identity. How can every single part of something be replaced, yet it still remains the same thing? Let's imagine there are two ships: the ship that Theseus docked in Athens, Ship A, and the ship sailed by the Athenians 1000 years later, Ship B. Very simply, our question is this: does A equal B? Some would say that for 1000 years there has been only one Ship of Theseus, and because the changes made to it happened gradually, it never at any point in time stopped being the legendary ship. Though they have absolutely no parts in common, the two ships are numerically identical, meaning one and the same, so A equals B. However, others could argue that Theseus never set foot on Ship B, and his presence on the ship is an essential qualitative property of the Ship of Theseus. It cannot survive without him. So, though the two ships are numerically identical, they are not qualitatively identical. Thus, A does not equal B. But what happens when we consider this twist? What if, as each piece of the original ship was cast off, somebody collected them all, and rebuilt the entire original ship? When it was finished, undeniably two physical ships would exist: the one that's docked in Athens, and the one in some guy's backyard. Each could lay claim to the title, "The Ship of Theseus," but only one could actually be the real thing. So which one is it, and more importantly, what does this have to do with you? Like the Ship of Theseus, you are a collection of constantly changing parts: your physical body, mind, emotions, circumstances, and even your quirks, always changing, but still in an amazing and sometimes illogical way, you stay the same, too. This is one of the reasons that the question, "Who am I?" is so complex. And in order to answer it, like so many great minds before you, you must be willing to dive into the bottomless ocean of philosophical paradox. 
Or maybe you could just answer, "I am a legendary hero sailing a powerful ship on an epic journey." That could work, too.
TedEd_History
이슬람_디자인의_복잡한_기하학_에릭_브루그.txt
In Islamic culture, geometry is everywhere. You can find it in mosques, madrasas, palaces and private homes. This tradition began in the 8th century CE during the early history of Islam, when craftsmen took preexisting motifs from Roman and Persian cultures and developed them into new forms of visual expression. This period of history was a golden age of Islamic culture, during which many achievements of previous civilizations were preserved and further developed, resulting in fundamental advancements in scientific study and mathematics. Accompanying this was an increasingly sophisticated use of abstraction and complex geometry in Islamic art, from intricate floral motifs adorning carpets and textiles, to patterns of tilework that seemed to repeat infinitely, inspiring wonder and contemplation of eternal order. Despite the remarkable complexity of these designs, they can be created with just a compass to draw circles and a ruler to make lines within them. And from these simple tools emerges a kaleidoscopic multiplicity of patterns. So how does that work? Well, everything starts with a circle. The first major decision is how will you divide it up? Most patterns split the circle into four, five or six equal sections. And each division gives rise to distinctive patterns. There's an easy way to determine whether any pattern is based on fourfold, fivefold, or sixfold symmetry. Most contain stars surrounded by petal shapes. Counting the number of rays on a starburst, or the number of petals around it, tells us what category the pattern falls into. A star with six rays, or surrounded by six petals, belongs in the sixfold category. One with eight petals is part of the fourfold category, and so on. There's another secret ingredient in these designs: an underlying grid. Invisible, but essential to every pattern, the grid helps determine the scale of the composition before work begins, keeps the pattern accurate, and facilitates the invention of incredible new patterns. Let's look at an example of how these elements come together. We'll start with a circle within a square, and divide it into eight equal parts. We can then draw a pair of criss-crossing lines and overlay them with another two. These lines are called construction lines, and by choosing a set of their segments, we'll form the basis of our repeating pattern. Many different designs are possible from the same construction lines just by picking different segments. And the full pattern finally emerges when we create a grid with many repetitions of this one tile in a process called tessellation. By choosing a different set of construction lines, we might have created this pattern, or this one. The possibilities are virtually endless. We can follow the same steps to create sixfold patterns by drawing construction lines over a circle divided into six parts, and then tessellating it, we can make something like this. Here's another sixfold pattern that has appeared across the centuries and all over the Islamic world, including Marrakesh, Agra, Konya and the Alhambra. Fourfold patterns fit in a square grid, and sixfold patterns in a hexagonal grid. Fivefold patterns, however, are more challenging to tessellate because pentagons don't neatly fill a surface, so instead of just creating a pattern in a pentagon, other shapes have to be added to make something that is repeatable, resulting in patterns that may seem confoundingly complex, but are still relatively simple to create. Also, tessellation is not constrained to simple geometric shapes, as M.C. 
Escher's work demonstrates. And while the Islamic geometric design tradition doesn't tend to employ elements like fish and faces, it does sometimes make use of multiple shapes to craft complex patterns. This more than 1,000-year-old tradition has wielded basic geometry to produce works that are intricate, decorative and pleasing to the eye. And these craftsmen prove just how much is possible with some artistic intuition, creativity, dedication and a great compass and ruler.
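As a rough illustration of the compass-and-ruler construction described above, here is a short Python sketch, assuming NumPy and matplotlib. Connecting each of the eight division points to the one three steps away is just one illustrative choice of construction lines (it yields an octagram), not a reproduction of any specific historical pattern.

import numpy as np
import matplotlib.pyplot as plt

n = 8  # eightfold division; use 6 for sixfold patterns, and so on
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
points = np.column_stack([np.cos(angles), np.sin(angles)])

fig, ax = plt.subplots()
ax.add_patch(plt.Circle((0, 0), 1, fill=False))  # the starting circle
for i in range(n):
    j = (i + 3) % n  # pair each division point with the one three steps away
    ax.plot(*zip(points[i], points[j]), color="gray", linewidth=0.8)
ax.set_aspect("equal")
ax.set_axis_off()
plt.show()

Changing n and the step size in the pairing generates the fourfold and sixfold families of construction lines mentioned in the lesson; choosing a subset of segments and tessellating the resulting tile is the step this sketch leaves out.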
TedEd_History
The_Atlantic_slave_trade_What_too_few_textbooks_told_you_Anthony_Hazard.txt
Slavery, the treatment of human beings as property, deprived of personal rights, has occurred in many forms throughout the world. But one institution stands out for both its global scale and its lasting legacy. The Atlantic slave trade, occurring from the late 15th to the mid 19th century and spanning three continents, forcibly brought more than 10 million Africans to the Americas. The impact it would leave affected not only these slaves and their descendants, but the economies and histories of large parts of the world. There had been centuries of contact between Europe and Africa via the Mediterranean. But the Atlantic slave trade began in the late 1400s with Portuguese colonies in West Africa, and Spanish settlement of the Americas shortly after. The crops grown in the new colonies, sugar cane, tobacco, and cotton, were labor intensive, and there were not enough settlers or indentured servants to cultivate all the new land. American Natives were enslaved, but many died from new diseases, while others effectively resisted. And so to meet the massive demand for labor, the Europeans looked to Africa. African slavery had existed for centuries in various forms. Some slaves were indentured servants, with a limited term and the chance to buy one's freedom. Others were more like European serfs. In some societies, slaves could be part of a master's family, own land, and even rise to positions of power. But when white captains came offering manufactured goods, weapons, and rum for slaves, African kings and merchants had little reason to hesitate. They viewed the people they sold not as fellow Africans but criminals, debtors, or prisoners of war from rival tribes. By selling them, kings enriched their own realms, and strengthened them against neighboring enemies. African kingdoms prospered from the slave trade, but meeting the European's massive demand created intense competition. Slavery replaced other criminal sentences, and capturing slaves became a motivation for war, rather than its result. To defend themselves from slave raids, neighboring kingdoms needed European firearms, which they also bought with slaves. The slave trade had become an arms race, altering societies and economies across the continent. As for the slaves themselves, they faced unimaginable brutality. After being marched to slave forts on the coast, shaved to prevent lice, and branded, they were loaded onto ships bound for the Americas. About 20% of them would never see land again. Most captains of the day were tight packers, cramming as many men as possible below deck. While the lack of sanitation caused many to die of disease, and others were thrown overboard for being sick, or as discipline, the captains ensured their profits by cutting off slaves' ears as proof of purchase. Some captives took matters into their own hands. Many inland Africans had never seen whites before, and thought them to be cannibals, constantly taking people away and returning for more. Afraid of being eaten, or just to avoid further suffering, they committed suicide or starved themselves, believing that in death, their souls would return home. Those who survived were completely dehumanized, treated as mere cargo. Women and children were kept above deck and abused by the crew, while the men were made to perform dances in order to keep them exercised and curb rebellion. What happened to those Africans who reached the New World and how the legacy of slavery still affects their descendants today is fairly well known. 
But what is not often discussed is the effect that the Atlantic slave trade had on Africa's future. Not only did the continent lose tens of millions of its able-bodied population, but because most of the slaves taken were men, the long-term demographic effect was even greater. When the slave trade was finally outlawed in the Americas and Europe, the African kingdoms whose economies it had come to dominate collapsed, leaving them open to conquest and colonization. And the increased competition and influx of European weapons fueled warfare and instability that continues to this day. The Atlantic slave trade also contributed to the development of racist ideology. Most African slavery had no deeper reason than legal punishment or intertribal warfare, but the Europeans who preached a universal religion, and who had long ago outlawed enslaving fellow Christians, needed justification for a practice so obviously at odds with their ideals of equality. So they claimed that Africans were biologically inferior and destined to be slaves, making great efforts to justify this theory. Thus, slavery in Europe and the Americas acquired a racial basis, making it impossible for slaves and their future descendants to attain equal status in society. In all of these ways, the Atlantic slave trade was an injustice on a massive scale whose impact has continued long after its abolition.
TedEd_History
The_myth_of_Icarus_and_Daedalus_Amy_Adkins.txt
In mythological ancient Greece, soaring above Crete on wings made from wax and feathers, Icarus, the son of Daedalus, defied the laws of both man and nature. Ignoring the warnings of his father, he rose higher and higher. To witnesses on the ground, he looked like a god, and as he peered down from above, he felt like one, too. But, in mythological ancient Greece, the line that separated god from man was absolute and the punishment for mortals who attempted to cross it was severe. Such was the case for Icarus and Daedalus. Years before Icarus was born, his father Daedalus was highly regarded as a genius inventor, craftsman, and sculptor in his homeland of Athens. He invented carpentry and all the tools used for it. He designed the first bathhouse and the first dance floor. He made sculptures so lifelike that Hercules mistook them for actual men. Though skilled and celebrated, Daedalus was egotistical and jealous. Worried that his nephew was a more skillful craftsman, Daedalus murdered him. As punishment, Daedalus was banished from Athens and made his way to Crete. Preceded by his storied reputation, Daedalus was welcomed with open arms by Crete's King Minos. There, acting as the palace technical advisor, Daedalus continued to push the boundaries. For the king's children, he made mechanically animated toys that seemed alive. He invented the ship's sail and mast, which gave humans control over the wind. With every creation, Daedalus challenged human limitations that had so far kept mortals separate from gods, until finally, he broke right through. King Minos's wife, Pasiphaë, had been cursed by the god Poseidon to fall in love with the king's prized bull. Under this spell, she asked Daedalus to help her seduce it. With characteristic audacity, he agreed. Daedalus constructed a hollow wooden cow so realistic that it fooled the bull. With Pasiphaë hiding inside Daedalus's creation, she conceived and gave birth to the half-human half-bull minotaur. This, of course, enraged the king who blamed Daedalus for enabling such a horrible perversion of natural law. As punishment, Daedalus was forced to construct an inescapable labyrinth beneath the palace for the minotaur. When it was finished, Minos then imprisoned Daedalus and his only son Icarus within the top of the tallest tower on the island where they were to remain for the rest of their lives. But Daedalus was still a genius inventor. While observing the birds that circled his prison, the means for escape became clear. He and Icarus would fly away from their prison as only birds or gods could do. Using feathers from the flocks that perched on the tower, and the wax from candles, Daedalus constructed two pairs of giant wings. As he strapped the wings to his son Icarus, he gave a warning: flying too near the ocean would dampen the wings and make them too heavy to use. Flying too near the sun, the heat would melt the wax and the wings would disintegrate. In either case, they surely would die. Therefore, the key to their escape would be in keeping to the middle. With the instructions clear, both men leapt from the tower. They were the first mortals ever to fly. While Daedalus stayed carefully to the midway course, Icarus was overwhelmed with the ecstasy of flight and overcome with the feeling of divine power that came with it. Daedalus could only watch in horror as Icarus ascended higher and higher, powerless to change his son's dire fate. When the heat from the sun melted the wax on his wings, Icarus fell from the sky. 
Just as Daedalus had many times ignored the consequences of defying the natural laws of mortal men in the service of his ego, Icarus was also carried away by his own hubris. In the end, both men paid dearly for their departure from the path of moderation, Icarus with his life and Daedalus with his regret.
TedEd_History
차의_역사_슈난_텡_Shunan_Teng.txt
During a long day spent roaming the forest in search of edible grains and herbs, the weary divine farmer Shennong accidentally poisoned himself 72 times. But before the poisons could end his life, a leaf drifted into his mouth. He chewed on it and it revived him, and that is how we discovered tea. Or so an ancient legend goes at least. Tea doesn't actually cure poisonings, but the story of Shennong, the mythical Chinese inventor of agriculture, highlights tea's importance to ancient China. Archaeological evidence suggests tea was first cultivated there as early as 6,000 years ago, or 1,500 years before the pharaohs built the Great Pyramids of Giza. That original Chinese tea plant is the same type that's grown around the world today, yet it was originally consumed very differently. It was eaten as a vegetable or cooked with grain porridge. Tea only shifted from food to drink 1,500 years ago when people realized that a combination of heat and moisture could create a complex and varied taste out of the leafy green. After hundreds of years of variations to the preparation method, the standard became to heat tea, pack it into portable cakes, grind it into powder, mix with hot water, and create a beverage called muo cha, or matcha. Matcha became so popular that a distinct Chinese tea culture emerged. Tea was the subject of books and poetry, the favorite drink of emperors, and a medium for artists. They would draw extravagant pictures in the foam of the tea, very much like the espresso art you might see in coffee shops today. In the 9th century during the Tang Dynasty, a Japanese monk brought the first tea plant to Japan. The Japanese eventually developed their own unique rituals around tea, leading to the creation of the Japanese tea ceremony. And in the 14th century during the Ming Dynasty, the Chinese emperor shifted the standard from tea pressed into cakes to loose leaf tea. At that point, China still held a virtual monopoly on the world's tea trees, making tea one of three essential Chinese export goods, along with porcelain and silk. This gave China a great deal of power and economic influence as tea drinking spread around the world. That spread began in earnest around the early 1600s when Dutch traders brought tea to Europe in large quantities. Many credit Queen Catherine of Braganza, a Portuguese noblewoman, for making tea popular with the English aristocracy when she married King Charles II in 1661. At the time, Great Britain was in the midst of expanding its colonial influence and becoming the new dominant world power. And as Great Britain grew, interest in tea spread around the world. By 1700, tea in Europe sold for ten times the price of coffee and the plant was still only grown in China. The tea trade was so lucrative that the world's fastest sailboat, the clipper ship, was born out of intense competition between Western trading companies. All were racing to bring their tea back to Europe first to maximize their profits. At first, Britain paid for all this Chinese tea with silver. When that proved too expensive, they suggested trading tea for another substance, opium. This triggered a public health problem within China as people became addicted to the drug. Then in 1839, a Chinese official ordered his men to destroy massive British shipments of opium as a statement against Britain's influence over China. This act triggered the First Opium War between the two nations. 
Fighting raged up and down the Chinese coast until 1842 when the defeated Qing Dynasty ceded the port of Hong Kong to the British and resumed trading on unfavorable terms. The war weakened China's global standing for over a century. The British East India Company also wanted to be able to grow tea themselves and further control the market. So they commissioned botanist Robert Fortune to steal tea from China in a covert operation. He disguised himself and took a perilous journey through China's mountainous tea regions, eventually smuggling tea trees and experienced tea workers into Darjeeling, India. From there, the plant spread further still, helping drive tea's rapid growth as an everyday commodity. Today, tea is the second most consumed beverage in the world after water, and from sugary Turkish Rize tea, to salty Tibetan butter tea, there are almost as many ways of preparing the beverage as there are cultures on the globe.
TedEd_History
Where_did_English_come_from_Claire_Bowern.txt
When we talk about English, we often think of it as a single language but what do the dialects spoken in dozens of countries around the world have in common with each other, or with the writings of Chaucer? And how are any of them related to the strange words in Beowulf? The answer is that like most languages, English has evolved through generations of speakers, undergoing major changes over time. By undoing these changes, we can trace the language from the present day back to its ancient roots. While modern English shares many similar words with Latin-derived romance languages, like French and Spanish, most of those words were not originally part of it. Instead, they started coming into the language with the Norman invasion of England in 1066. When the French-speaking Normans conquered England and became its ruling class, they brought their speech with them, adding a massive amount of French and Latin vocabulary to the English language previously spoken there. Today, we call that language Old English. This is the language of Beowulf. It probably doesn't look very familiar, but it might be more recognizable if you know some German. That's because Old English belongs to the Germanic language family, first brought to the British Isles in the 5th and 6th centuries by the Angles, Saxons, and Jutes. The Germanic dialects they spoke would become known as Anglo-Saxon. Viking invaders in the 8th to 11th centuries added more borrowings from Old Norse into the mix. It may be hard to see the roots of modern English underneath all the words borrowed from French, Latin, Old Norse and other languages. But comparative linguistics can help us by focusing on grammatical structure, patterns of sound changes, and certain core vocabulary. For example, after the 6th century, German words starting with "p" systematically shifted to a "pf" sound while their Old English counterparts kept the "p" unchanged. In another split, words that have "sk" sounds in Swedish developed an "sh" sound in English. There are still some English words with "sk," like "skirt," and "skull," but they're direct borrowings from Old Norse that came after the "sk" to "sh" shift. These examples show us that just as the various Romance languages descended from Latin, English, Swedish, German, and many other languages descended from their own common ancestor known as Proto-Germanic spoken around 500 B.C.E. Because this historical language was never written down, we can only reconstruct it by comparing its descendants, which is possible thanks to the consistency of the changes. We can even use the same process to go back one step further, and trace the origins of Proto-Germanic to a language called Proto-Indo-European, spoken about 6000 years ago on the Pontic steppe in modern day Ukraine and Russia. This is the reconstructed ancestor of the Indo-European family that includes nearly all languages historically spoken in Europe, as well as large parts of Southern and Western Asia. And though it requires a bit more work, we can find the same systematic similarities, or correspondences, between related words in different Indo-European branches. Comparing English with Latin, we see that English has "t" where Latin has "d", and "f" where Latin has "p" at the start of words. Some of English's more distant relatives include Hindi, Persian and the Celtic languages it displaced in what is now Britain. 
Proto-Indo-European itself descended from an even more ancient language, but unfortunately, this is as far back as historical and archeological evidence will allow us to go. Many mysteries remain just out of reach, such as whether there might be a link between Indo-European and other major language families, and the nature of the languages spoken in Europe prior to its arrival. But the amazing fact remains that nearly 3 billion people around the world, many of whom cannot understand each other, are nevertheless speaking the same words shaped by 6000 years of history.
TedEd_History
아우슈비츠에서의_수업_우리가_하는_말의_힘_벤자민_잰더_Benjamin_Zander.txt
It really makes a difference what we say. I learned this from a woman who survived Auschwitz. She went to Auschwitz when she was fifteen years old, and her brother was eight, and their parents were lost. And she told me this, "We were in the train going to Auschwitz, and I looked down, and I saw my brother's shoes were missing. And I said, 'Why are you so stupid? Can't you keep your things together? For goodness sake!' The way an elder sister might speak to a younger brother." Unfortunately, it was the last thing she ever said to him because she never saw him again. He did not survive. And so when she came out of Auschwitz, she made a vow. She said, "I walked out of Auschwitz into life." And the vow was: "I will never say anything that couldn't stand as the last thing I ever say." Now, can we do that? No. But it is a possibility to live into. Thank you.
TedEd_History
미켈란젤로의_다비드_상이_지닌_여러_의미제임스_얼James_Earle.txt
When we think of classic works of art, the most common setting we imagine them in is a museum. But what we often forget is that much of this art was not produced with a museum setting in mind. What happens to an artwork when it's taken out of its originally intended context? Take the example of Michelangelo's Statue of David, depicting the boy hero who slew the giant Philistine, Goliath, armed with only his courage and his slingshot. When Michelangelo began carving a block of pure white marble to communicate this famous Biblical story, the city of Florence intended to place the finished product atop their grand cathedral. Not only would the 17-foot-tall statue be easily visible at this height, but its placement alongside 11 other statues of Old Testament heroes towering over onlookers would have a powerful religious significance, forcing the viewer to stare in awe towards the heavens. But by the time Michelangelo had finished the work, in 1504, the plans for the other statues had fallen through, and the city realized that lifting such a large sculpture to the roof would be more difficult than they had thought. Furthermore, the statue was so detailed and lifelike, down to the bulging veins in David's arm and the determination on his face, that it seemed a shame to hide it so far from the viewer. A council of politicians and artists convened to decide on a new location for the statue, ultimately voting to place it in front of the Palazzo della Signoria, the town hall and home of the new Republican government. This new location transformed the statue's meaning. The Medici family, who for generations had ruled the city through their control of banking, had recently been exiled, and Florence now saw itself as a free city, threatened on all sides by wealthy and powerful rivals. David, now the symbol of heroic resistance against overwhelming odds, was placed with his intense stare, now a look of stern warning, focused directly towards Rome, the home of Cardinal Giovanni de Medici. Though the statue itself had not been altered, its placement changed nearly every aspect of it from a religious to a political significance. Though a replica of David still appears at the Palazzo, the original statue was moved in 1873 to the Galleria dell'Accademia, where it remains today. In the orderly, quiet environment of the museum, alongside numerous half-finished Michelangelo sculptures, overt religious and political interpretations fall away, giving way to detached contemplation of Michelangelo's artistic and technical skill. But even here, the astute viewer may notice that David's head and hand appear disproportionately large, a reminder that they were made to be viewed from below. So, not only does context change the meaning and interpretation of an artwork throughout its history, sometimes it can make that history resurface in the most unexpected ways.
TedEd_History
모의_법정_역사_대_리처드_닉슨_사건_알렉스_겐들러.txt
The presidency of the United States of America is often said to be one of the most powerful positions in the world. But of all the U.S. presidents accused of misusing that power, only one has left office as a result. Does Richard Nixon deserve to be remembered for more than the scandal that ended his presidency? Find out as we put this disgraced president's legacy on trial in History vs. Richard Nixon. "Order, order. Now, who's the defendant today, some kind of crook?" "Cough. No, your Honor. This is Richard Milhous Nixon, the 37th president of the United States, who served from 1969 to 1974." "Hold on. That's a weird number of years for a president to serve." "Well, you see, President Nixon resigned for the good of the nation and was pardoned by President Ford, who took over after him." "He resigned because he was about to be impeached, and he didn't want the full extent of his crimes exposed." "And what were these crimes?" "Your Honor, the Watergate scandal was one of the grossest abuses of presidential power in history. Nixon's men broke into the Democratic National Committee headquarters to wiretap the offices and dig up dirt on opponents for the reelection campaign." "Cough. It was established that the President did not order this burglary." "But as soon as he learned of it, he did everything to cover it up, while lying about it for months." "Uh, yes, but it was for the good of the country. He did so much during his time in office and could have done so much more without a scandal jeopardizing his accomplishments." "Uh, accomplishments?" "Yes, your Honor. Did you know it was President Nixon who proposed the creation of the Environmental Protection Agency, and signed the National Environmental Policy Act into law? Not to mention the Endangered Species Act, Marine Mammal Protection Act, expansion of the Clean Air Act." "Sounds pretty progressive of him." "Progressive? Hardly. Nixon's presidential campaign courted Southern voters through fear and resentment of the civil rights movement." "Speaking of civil rights, the prosecution may be surprised to learn that he signed the Title IX amendment, banning gender-based discrimination in education, and ensured that desegregation of schools occurred peacefully, and he lowered the voting age to 18, so that students could vote." "He didn't have much concern for students after four were shot by the National Guard at Kent State. Instead, he called them bums for protesting the Vietnam War, a war he had campaigned on ending." "But he did end it." "He ended it two years after taking office. Meanwhile, his campaign had sabotaged the previous president's peace talks, urging the South Vietnamese government to hold out for supposedly better terms, which, I might add, didn't materialize. So, he protracted the war for four years, in which 20,000 more U.S. troops, and over a million more Vietnamese, died for nothing." "Hmm, a presidential candidate interfering in foreign negotiations -- isn't that treason?" "It is, your Honor, a clear violation of the Logan Act of 1799." "Uh, I think we're forgetting President Nixon's many foreign policy achievements. It was he who normalized ties with China, forging economic ties that continue today." "Are we so sure that's a good thing? And don't forget his support of the coup in Chile that replaced the democratically-elected President Allende with a brutal military dictator." "It was part of the fight against communism." "Weren't tyranny and violence the reasons we opposed communism to begin with? 
Or was it just fear of the lower class rising up against the rich?" "President Nixon couldn't have predicted the violence of Pinochet's regime, and being anti-communist didn't mean neglecting the poor. He proposed a guaranteed basic income for all American families, still a radical concept today. And he even pushed for comprehensive healthcare reform, just the kind that passed 40 years later." "I'm still confused about this burglary business. Was he a crook or not?" "Your Honor, President Nixon may have violated a law or two, but what was the real harm compared to all he accomplished while in office?" "The harm was to democracy itself. The whole point of the ideals Nixon claimed to promote abroad is that leaders are accountable to the people, and when they hold themselves above the law for whatever reason, those ideals are undermined." "And if you don't hold people accountable to the law, I'll be out of a job." Many politicians have compromised some principles to achieve results, but law-breaking and cover-ups threaten the very fabric the nation is built on. Those who do so may find their entire legacy tainted when history is put on trial.
TedEd_History
호메로스의_오디세이를_읽기_위해_알아야할_것_질_대쉬Jill_Dash.txt
A close encounter with a man-eating giant, a sorceress who turns men into pigs, a long-lost king taking back his throne. On their own, any of these make great stories, but each is just one episode in the "Odyssey," a 12,000-line poem spanning years of Ancient Greek history, myth, and legend. How do we make sense of such a massive text that comes from and tells of a world so far away? The fact that we can read the "Odyssey" at all is pretty incredible, as it was composed before the Greek alphabet appeared in the 8th century BCE. It was made for listeners rather than readers, and was performed by oral poets called rhapsodes. Tradition identifies its author as a blind man named Homer. But no one definitively knows whether he was real or legendary. The earliest mentions of him occur centuries after his era. And the poems attributed to him seem to have been changed and rearranged many times by multiple authors before finally being written down in their current form. In fact, the word rhapsode means "stitching together," as these poets combined existing stories, jokes, myths, and songs into a single narrative. To recite these massive epics live, rhapsodes employed a steady meter, along with mnemonic devices, like repetition of memorized passages or set pieces. These included descriptions of scenery and lists of characters, and helped the rhapsode keep their place in the narrative, just as the chorus or bridge of a song helps us to remember the next verses. Because most of the tales were familiar to the audience, it was common to hear the sections of the poem out of order. At some point, the order became set in stone and the story was locked into place as the one we read today. But since the world has changed a bit in the last several thousand years, it helps to have some background before jumping in. The "Odyssey" itself is a sequel to Homer's other famous epic, the "Iliad," which tells the story of the Trojan War. If there's one major theme uniting both poems, it's this: do not, under any circumstances, incur the wrath of the gods. The Greek Pantheon is a dangerous mix of divine power and human insecurity, prone to jealousy and grudges of epic proportions. And many of the problems faced by humans in the poems are caused by their hubris, or excessive pride in believing themselves superior to the gods. The desire to please the gods was so great that the Ancient Greeks traditionally welcomed all strangers into their homes with generosity for fear that the strangers might be gods in disguise. This ancient code of hospitality was called xenia. It involved hosts providing their guests with safety, food, and comfort, and the guests returning the favor with courtesy and gifts if they had them. Xenia has a significant role in the "Odyssey," where Odysseus in his wanderings is the perpetual guest, while in his absence, his clever wife Penelope plays non-stop host. The "Odyssey" recounts all of Odysseus's years of travel, but the narrative begins in medias res, in the middle of things. Ten years after the Trojan War, we find our hero trapped on an island, still far from his native Ithaca and the family he hasn't seen for 20 years. Because he's angered the sea god Poseidon by blinding his son, a cyclops, Odysseus's passage home has been fraught with mishap after mishap. With trouble brewing at home and gods discussing his fate, Odysseus begins the account of those missing years to his hosts. 
One of the most fascinating things about the "Odyssey" is the gap between how little we know about its time period and the wealth of detail the text itself contains. Historians, linguists, and archeologists have spent centuries searching for the ruins of Troy and identifying which islands Odysseus visited. Just like its hero, the 24-book epic has made its own long journey through centuries of myth and history to tell us its incredible story today.
TedEd_History
좋은_수면의_이득_샤이_마르쿠_Shai_Marcu.txt
It's 4 a.m., and the big test is in eight hours, followed by a piano recital. You've been studying and playing for days, but you still don't feel ready for either. So, what can you do? Well, you can drink another cup of coffee and spend the next few hours cramming and practicing, but believe it or not, you might be better off closing the books, putting away the music, and going to sleep. Sleep occupies nearly a third of our lives, but many of us give surprisingly little attention and care to it. This neglect is often the result of a major misunderstanding. Sleep isn't lost time, or just a way to rest when all our important work is done. Instead, it's a critical function, during which your body balances and regulates its vital systems, affecting respiration and regulating everything from circulation to growth and immune response. That's great, but you can worry about all those things after this test, right? Well, not so fast. It turns out that sleep is also crucial for your brain, with a fifth of your body's circulatory blood being channeled to it as you drift off. And what goes on in your brain while you sleep is an intensely active period of restructuring that's crucial for how our memory works. At first glance, our ability to remember things doesn't seem very impressive at all. 19th-century psychologist Hermann Ebbinghaus demonstrated that we normally forget 40% of new material within the first twenty minutes, a phenomenon known as the forgetting curve. But this loss can be prevented through memory consolidation, the process by which information is moved from our fleeting short-term memory to our more durable long-term memory. This consolidation occurs with the help of a major part of the brain, known as the hippocampus. Its role in long-term memory formation was demonstrated in the 1950s by Brenda Milner in her research with a patient known as H.M. After having his hippocampus removed, H.M.'s ability to form new long-term memories was damaged, though his short-term memory remained intact and he was still able to learn physical tasks through repetition. What this case revealed, among other things, was that the hippocampus was specifically involved in the consolidation of long-term declarative memory, such as the facts and concepts you need to remember for that test, rather than procedural memory, such as the finger movements you need to master for that recital. Milner's findings, along with work by Eric Kandel in the 1990s, have given us our current model of how this consolidation process works. Sensory data is initially transcribed and temporarily recorded in the neurons as short-term memory. From there, it travels to the hippocampus, which strengthens and enhances the neurons in that cortical area. Thanks to the phenomenon of neuroplasticity, new synaptic buds are formed, allowing new connections between neurons, and strengthening the neural network where the information will be retained as long-term memory. So why do we remember some things and not others? Well, there are a few ways to influence the extent and effectiveness of memory retention. For example, memories that are formed in times of heightened feeling, or even stress, will be better recorded due to the hippocampus' link with emotion. But one of the major factors contributing to memory consolidation is, you guessed it, a good night's sleep. Sleep is composed of four stages, the deepest of which are known as slow-wave sleep and rapid eye movement. 
EEG machines monitoring people during these stages have shown electrical impulses moving between the brainstem, hippocampus, thalamus, and cortex, which serve as relay stations of memory formation. And the different stages of sleep have been shown to help consolidate different types of memories. During non-REM slow-wave sleep, declarative memory is encoded into a temporary store in the anterior part of the hippocampus. Through a continuing dialogue between the cortex and hippocampus, it is then repeatedly reactivated, driving its gradual redistribution to long-term storage in the cortex. REM sleep, on the other hand, with its similarity to waking brain activity, is associated with the consolidation of procedural memory. So based on the studies, going to sleep three hours after memorizing your formulas and one hour after practicing your scales would be ideal. So hopefully you can see now that skimping on sleep not only harms your long-term health, but actually makes it less likely that you'll retain all that knowledge and practice from the previous night, all of which just goes to affirm the wisdom of the phrase, "Sleep on it." When you think about all the internal restructuring and forming of new connections that occurs while you slumber, you could even say that proper sleep will have you waking up every morning with a new and improved brain, ready to face the challenges ahead.
TedEd_History
When_will_the_next_mass_extinction_occur_Borths_DEmic_and_Pritchard.txt
About 66 million years ago, something terrible happened to life on our planet. Ecosystems were hit with a double blow as massive volcanic eruptions filled the atmosphere with carbon dioxide and an asteroid roughly the size of Manhattan struck the Earth. The dust from the impact reduced or stopped photosynthesis in many plants, starving herbivores and the carnivores that preyed on them. Within a short time span, three-quarters of the world's species disappeared forever, and the giant dinosaurs, flying pterosaurs, shelled squids, and marine reptiles that had flourished for ages faded into prehistory. It may seem like the dinosaurs were especially unlucky, but extinctions of various severities have occurred throughout the Earth's history, and are still happening all around us today. Environments change, pushing some species out of their comfort zones while creating new opportunities for others. Invasive species arrive in new habitats, outcompeting the natives. And in some cases, entire species are wiped out as a result of activity by better adapted organisms. Sometimes, however, massive changes in the environment occur too quickly for most living creatures to adapt, causing thousands of species to die off in a geological instant. We call this a mass extinction event, and although such events may be rare, paleontologists have been able to identify several of them through dramatic changes in the fossil record, where lineages that persisted through several geological layers suddenly disappear. In fact, these mass extinctions are used to divide the Earth's history into distinct periods. Although the disappearance of the dinosaurs is the best known mass extinction event, the largest occurred long before dinosaurs ever existed. 252 million years ago, between the Permian and Triassic periods, the Earth's land masses gathered together into the single supercontinent Pangaea. As it coalesced, its interior was filled with deserts, while the single coastline eliminated many of the shallow tropical seas where biodiversity thrived. Huge volcanic eruptions occurred across Siberia, coinciding with very high temperatures, suggesting a massive greenhouse effect. These catastrophes contributed to the extinction of 95% of species in the ocean, and on land, the strange reptiles of the Permian gave way to the ancestors of the far more familiar dinosaurs we know today. But mass extinctions are not just a thing of the distant past. Over the last few million years, the fluctuation of massive ice sheets at our planet's poles has caused sea levels to rise and fall, changing weather patterns and ocean currents along the way. As the ice sheets spread, retreated, and returned, some animals were either able to adapt to the changes, or migrate to a more suitable environment. Others, however, such as giant ground sloths, giant hyenas, and mammoths went extinct. The extinction of these large mammals coincides with changes in the climate and ecosystem due to the melting ice caps. But there is also an uncomfortable overlap with the rise of a certain hominid species originating in Africa 150,000 years ago. In the course of adapting to the new environment, humans created new tools and methods for gathering food and hunting prey. They may not have single-handedly caused the extinction of these large animals, as some were able to coexist with us for thousands of years. But it's clear that today, our tools and methods have become so effective that humans are no longer reacting to the environment, but are actively changing it.
The extinction of species is a normal occurrence in the background of ecosystems. But studies suggest that rates of extinction today for many organisms are hundreds to thousands of times higher than the normal background. But the same unique ability that makes humans capable of driving mass extinctions can also enable us to prevent them. By learning about past extinction events, recognizing what is happening today as environments change, and using this knowledge to lessen our effect on other species, we can transform humanity's impact on the world from something as destructive as a massive asteroid into a collaborative part of a biologically diverse future.
TedEd_History
Why_is_the_US_Constitution_so_hard_to_amend_Peter_Paccone.txt
When it took effect in 1789, the U.S. Constitution didn't just institute a government by the people. It provided a way for the people to alter the Constitution itself. And yet, of the nearly 11,000 amendments proposed in the centuries since, only 27 have succeeded as of 2016. So what is it that makes the Constitution so hard to change? In short, its creators. The founders of the United States were trying to create a unified country from thirteen different colonies, which needed assurance that their agreements couldn't be easily undone. So here's what they decided. For an amendment to even be proposed, it must receive a two-thirds vote of approval in both houses of Congress, or a request from two-thirds of state legislatures to call a national convention, and that's just the first step. To actually change the Constitution, the amendment must be ratified by three-quarters of all states. To do this, each state can either have its legislature vote on the amendment, or it can hold a separate ratification convention with delegates elected by voters. The result of such high thresholds is that, today, the American Constitution is quite static. Most other democracies pass amendments every couple of years. The U.S., on the other hand, hasn't passed one since 1992. At this point, you may wonder how any amendments managed to pass at all. The first ten, known as the Bill of Rights, include some of America's most well-known freedoms, such as the freedom of speech, and the right to a fair trial. These were passed all at once to resolve some conflicts from the original Constitutional Convention. Years later, the Thirteenth Amendment, which abolished slavery, as well as the Fourteenth and Fifteenth Amendments, only passed after a bloody civil war. Ratifying amendments has also become harder as the country has grown larger and more diverse. The first ever proposed amendment, a formula to assign congressional representatives, was on the verge of ratification in the 1790s. However, as more and more states joined the union, the number needed to reach the three-quarter mark increased as well, leaving it unratified to this day. Today, there are many suggested amendments, including outlawing the burning of the flag, limiting congressional terms, or even repealing the Second Amendment. While many enjoy strong support, their likelihood of passing is slim. Americans today are the most politically polarized since the Civil War, making it nearly impossible to reach a broad consensus. In fact, the late Supreme Court Justice Antonin Scalia once calculated that due to America's representative system of government, it could take as little as 2% of the total population to block an amendment. Of course, the simplest solution would be to make the Constitution easier to amend by lowering the thresholds required for proposal and ratification. That, however, would require its own amendment. Instead, historical progress has mainly come from the U.S. Supreme Court, which has expanded its interpretation of existing constitutional laws to keep up with the times. Considering that Supreme Court justices are unelected and serve for life once appointed, this is far from the most democratic option. Interestingly, the founders themselves may have foreseen this problem early on. In a letter to James Madison, Thomas Jefferson wrote that laws should expire every 19 years rather than having to be changed or repealed since every political process is full of obstacles that distort the will of the people.
Although he believed that the basic principles of the Constitution would endure, he stressed that the Earth belongs to the living, and not to the dead.
TedEd_History
지폐에_가치를_주는_것은_무엇일까_Doug_Levinson.txt
If you tried to pay for something with a piece of paper, you might run into some trouble. Unless, of course, the piece of paper was a hundred dollar bill. But what is it that makes that bill so much more interesting and valuable than other pieces of paper? After all, there's not much you can do with it. You can't eat it. You can't build things with it. And burning it is actually illegal. So what's the big deal? Of course, you probably know the answer. A hundred dollar bill is printed by the government and designated as official currency, while other pieces of paper are not. But that's just what makes them legal. What makes a hundred dollar bill valuable, on the other hand, is how many or few of them are around. Throughout history, most currency, including the US dollar, was linked to valuable commodities and the amount of it in circulation depended on a government's gold or silver reserves. But after the US abolished this system in 1971, the dollar became what is known as fiat money, meaning not linked to any external resource but relying instead solely on government policy to decide how much currency to print. Which branch of our government sets this policy? The Executive, the Legislative, or the Judicial? The surprising answer is: none of the above! In fact, monetary policy is set by an independent Federal Reserve System, or the Fed, made up of 12 regional banks in major cities around the country. Its board of governors, which is appointed by the president and confirmed by the Senate, reports to Congress, and all the Fed's profit goes into the US Treasury. But to keep the Fed from being influenced by the day-to-day vicissitudes of politics, it is not under the direct control of any branch of government. Why doesn't the Fed just decide to print infinite hundred dollar bills to make everyone happy and rich? Well, because then the bills wouldn't be worth anything. Think about the purpose of currency, which is to be exchanged for goods and services. If the total amount of currency in circulation increases faster than the total value of goods and services in the economy, then each individual piece will be able to buy a smaller portion of those things than before. This is called inflation. On the other hand, if the money supply remains the same, while more goods and services are produced, each dollar's value would increase in a process known as deflation. So which is worse? Too much inflation means that the money in your wallet today will be worth less tomorrow, making you want to spend it right away. While this would stimulate business, it would also encourage overconsumption, or hoarding commodities, like food and fuel, raising their prices and leading to consumer shortages and even more inflation. But deflation would make people want to hold onto their money, and a decrease in consumer spending would reduce business profits, leading to more unemployment and a further decrease in spending, causing the economy to keep shrinking. So most economists believe that while too much of either is dangerous, a small, consistent amount of inflation is necessary to encourage economic growth. The Fed uses vast amounts of economic data to determine how much currency should be in circulation, including previous rates of inflation, international trends, and the unemployment rate. Like in the story of Goldilocks, they need to get the numbers just right in order to stimulate growth and keep people employed, without letting inflation reach disruptive levels. 
The Fed not only determines how much that paper in your wallet is worth but also your chances of getting or keeping the job where you earn it.
TedEd_History
5_tips_to_improve_your_critical_thinking_Samantha_Agoos.txt
Every day, a sea of decisions stretches before us. Some are small and unimportant, but others have a larger impact on our lives. For example, which politician should I vote for? Should I try the latest diet craze? Or will this email make me a millionaire? We're bombarded with so many decisions that it's impossible to make a perfect choice every time. But there are many ways to improve our chances, and one particularly effective technique is critical thinking. This is a way of approaching a question that allows us to carefully deconstruct a situation, reveal its hidden issues, such as bias and manipulation, and make the best decision. If the critical part sounds negative, that's because in a way it is. Rather than choosing an answer because it feels right, a person who uses critical thinking subjects all available options to scrutiny and skepticism. Using the tools at their disposal, they'll eliminate everything but the most useful and reliable information. There are many different ways of approaching critical thinking, but here's one five-step process that may help you solve any number of problems. One: formulate your question. In other words, know what you're looking for. This isn't always as straightforward as it sounds. For example, if you're deciding whether to try out the newest diet craze, your reasons for doing so may be obscured by other factors, like claims that you'll see results in just two weeks. But if you approach the situation with a clear view of what you're actually trying to accomplish by dieting, whether that's weight loss, better nutrition, or having more energy, that'll equip you to sift through this information critically, find what you're looking for, and decide whether the new fad really suits your needs. Two: gather your information. There's lots of it out there, so having a clear idea of your question will help you determine what's relevant. If you're trying to decide on a diet to improve your nutrition, you may ask an expert for their advice, or seek other people's testimonies. Information gathering helps you weigh different options, moving you closer to a decision that meets your goal. Three: apply the information, something you do by asking critical questions. Facing a decision, ask yourself, "What concepts are at work?" "What assumptions exist?" "Is my interpretation of the information logically sound?" For example, in an email that promises you millions, you should consider, "What is shaping my approach to this situation?" "Do I assume the sender is telling the truth?" "Based on the evidence, is it logical to assume I'll win any money?" Four: consider the implications. Imagine it's election time, and you've selected a political candidate based on their promise to make it cheaper for drivers to fill up on gas. At first glance, that seems great. But what about the long-term environmental effects? If gasoline use is less restricted by cost, this could also cause a huge surge in air pollution, an unintended consequence that's important to think about. Five: explore other points of view. Ask yourself why so many people are drawn to the policies of the opposing political candidate. Even if you disagree with everything that candidate says, exploring the full spectrum of viewpoints might explain why some policies that don't seem valid to you appeal to others. This will allow you to explore alternatives, evaluate your own choices, and ultimately help you make more informed decisions.
This five-step process is just one tool, and it certainly won't eradicate difficult decisions from our lives. But it can help us increase the number of positive choices we make. Critical thinking can give us the tools to sift through a sea of information and find what we're looking for. And if enough of us use it, it has the power to make the world a more reasonable place.
TedEd_History
만사_무사_가장_부유하게_살았던_사람들_중_한_사람_제시카_스미스.txt
If someone asked you who the richest people in history were, who would you name? Perhaps a billionaire banker or corporate mogul, like Bill Gates or John D. Rockefeller. How about African King Musa Keita I? Ruling the Mali Empire in the 14th century CE, Mansa Musa, or the King of Kings, amassed a fortune that possibly made him one of the wealthiest people who ever lived. But his vast wealth was only one piece of his rich legacy. When Mansa Musa came to power in 1312, much of Europe was racked by famine and civil wars. But many African kingdoms and the Islamic world were flourishing, and Mansa Musa played a great role in bringing the fruits of this flourishing to his own realm. By strategically annexing the city of Timbuktu, and reestablishing power over the city of Gao, he gained control over important trade routes between the Mediterranean and the West African Coast, continuing a period of expansion, which dramatically increased Mali's size. The territory of the Mali Empire was rich in natural resources, such as gold and salt. The world first witnessed the extent of Mansa Musa's wealth in 1324 when he took his pilgrimage to Mecca. Not one to travel on a budget, he brought a caravan stretching as far as the eye could see. Accounts of this journey are mostly based on oral testimony and differing written records, so it's difficult to determine the exact details. But what most agree on is the extravagant scale of the excursion. Chroniclers describe an entourage of tens of thousands of soldiers, civilians, and slaves, 500 heralds bearing gold staffs and dressed in fine silks, and many camels and horses bearing an abundance of gold bars. Stopping in cities such as Cairo, Mansa Musa is said to have spent massive quantities of gold, giving to the poor, buying souvenirs, and even having mosques built along the way. In fact, his spending may have destabilized the regional economy, causing mass inflation. This journey reportedly took over a year, and by the time Mansa Musa returned, tales of his amazing wealth had spread to the ports of the Mediterranean. Mali and its king were elevated to near legendary status, cemented by their inclusion on the 1375 Catalan Atlas. One of the most important world maps of Medieval Europe, it depicted the King holding a scepter and a gleaming gold nugget. Mansa Musa had literally put his empire and himself on the map. But material riches weren't the king's only concern. As a devout Muslim, he took a particular interest in Timbuktu, already a center of religion and learning prior to its annexation. Upon returning from his pilgrimage, he had the great Djinguereber Mosque built there with the help of an Andalusian architect. He also established a major university, further elevating the city's reputation, and attracting scholars and students from all over the Islamic world. Under Mansa Musa, the Empire became urbanized, with schools and mosques in hundreds of densely populated towns. The king's rich legacy persisted for generations and to this day, there are mausoleums, libraries and mosques that stand as a testament to this golden age of Mali's history.
TedEd_History
How_Magellan_circumnavigated_the_globe_Ewandro_Magalhaes.txt
On September 6, 1522, the "Victoria" sailed into harbor in southern Spain. The battered vessel and its 18 sailors were all that remained of a fleet that had departed three years before. Yet her voyage was considered a success for the "Victoria" had achieved something unprecedented: the first circumnavigation of the globe. But this story really begins in 1494, two years after Columbus's voyage on behalf of Spain. Columbus's discovery had prompted the Catholic Spanish rulers to turn to the Pope to preempt any claims by Portugal to the new lands. The Pope resolved this dispute by drawing an imaginary line on the world map. Spain had the right to claim territories west of the divide, and Portugal to the east. Spain and Portugal, the two major seafaring superpowers at the time, agreed to these terms in what came to be called the Treaty of Tordesillas. At the time, these nations had their eyes on the same prize: trade routes to the Spice Islands in today's Indonesia. The spices found there, which were used as seasonings, food preservatives, and aphrodisiacs, were worth many times their weight in gold. But because of Portugal's control over eastern sea routes, Spain's only viable option was to sail west. So when a Portuguese defector named Ferdinand Magellan claimed that a westward route to the Spice Islands existed, King Charles made him captain of a Spanish armada, and gave him all the resources he would need. Along with a share in the voyage's profits, he granted Magellan five ships and about 260 men. The crew included a young slave named Enrique, captured by Magellan on a previous journey to Malacca, and Antonio Pigafetta, a Venetian nobleman seeking adventure. On September 20, 1519, the fleet weighed anchor and headed southwest. After making landfall in what is now Brazil, it proceeded along the coast, exploring any waterway leading inland. They were looking for the fabled passage linking east and west. As the weather worsened, the Spaniards' resentment at having a Portuguese captain escalated. A full-blown mutiny soon erupted, which Magellan crushed with unspeakable cruelty. But his problems were only just beginning. During a reconnaissance mission, the "Santiago" was wrecked by a storm. Then while exploring a narrow waterway, the captain of the "San Antonio" took the first opportunity to slip away and sail back home. Magellan pressed forward, and on October 21, he started exploring a navigable seaway. 27 freezing days later, the three remaining ships emerged from what we now call the Strait of Magellan into the Mar Pacifico. The fleet never expected the new ocean to be so vast. After 98 days at sea, dozens of sailors had succumbed to scurvy and famine. When they finally reached land again, Enrique, the young slave, proved able to communicate with the natives. Their goal couldn't be far. Sailing further west, Magellan was warmly received by Rajah Humabon of Cebu. So when the ruler asked him to help subdue and convert the rebellious chief of Mactan, the captain readily agreed. The adventure would be his last. Overconfident and severely outnumbered, Magellan's force was overwhelmed, and the natives' bamboo spears ended the captain's life. Yet the voyage had to continue. Magellan's will specified that Enrique should be freed, but the expedition still needed an interpreter. With his freedom at stake, Enrique is believed to have plotted with the Rajah to have about 30 of the Spaniards killed at a feast on the beach.
Enrique was never heard from again, but if he ever made it back to Malacca, he may have been the first person to actually circumnavigate the globe. Meanwhile, the survivors burned the Concepcion and proceeded onward. They finally reached the Spice Islands in November of 1521 and loaded up on precious cargo. But they still had to return to Spain. The "Trinidad" sank shortly after being captured by the Portuguese. The "Victoria" continued west, piloted by Juan Sebastián Elcano, one of the pardoned mutineers. Against all odds, the small vessel made it back to Spain with a full cargo of cloves and cinnamon, enough to cover the expedition and turn a profit. An obsessive chronicler, Pigafetta described the lands and people they encountered. With the help of a humble slave, he also compiled the world's first phrase book of native languages. His journal is the reason we can tell this story. Magellan's legacy lingers. He had galaxies and space programs named after him. Elcano, too, was celebrated in Spain with a coat of arms and his face on currency and stamps. United by fate, the survivors and the hundreds who sacrificed their lives challenged conventional wisdom and completed a historic journey once thought impossible.
TedEd_History
언어가_진화하는_방법_알렉스_젠들러_Alex_Gendler.txt
In the biblical story of the Tower of Babel, all of humanity once spoke a single language until they suddenly split into many groups unable to understand each other. We don't really know if such an original language ever existed, but we do know that the thousands of languages existing today can be traced back to a much smaller number. So how did we end up with so many? In the early days of human migration, the world was much less populated. Groups of people that shared a single language and culture often split into smaller tribes, going separate ways in search of fresh game and fertile land. As they migrated and settled in new places, they became isolated from one another and developed in different ways. Centuries of living in different conditions, eating different food and encountering different neighbors turned similar dialects with varied pronunciation and vocabulary into radically different languages, continuing to divide as populations grew and spread out further. Like genealogists, modern linguists try to map this process by tracing multiple languages back as far as they can to their common ancestor, or protolanguage. A group of all languages related in this way is called a language family, which can contain many branches and sub-families. So how do we determine whether languages are related in the first place? Similar sounding words don't tell us much. They could be false cognates or just directly borrowed terms rather than derived from a common root. Grammar and syntax are a more reliable guide, as well as basic vocabulary, such as pronouns, numbers or kinship terms, that's less likely to be borrowed. By systematically comparing these features and looking for regular patterns of sound changes and correspondences between languages, linguists can determine relationships, trace specific steps in their evolution and even reconstruct earlier languages with no written records. Linguistics can even reveal other important historical clues, such as determining the geographic origins and lifestyles of ancient peoples based on which of their words were native, and which were borrowed. There are two main problems linguists face when constructing these language family trees. One is that there is no clear way of deciding where the branches at the bottom should end, that is, which dialects should be considered separate languages or vice versa. Chinese is classified as a single language, but its dialects vary to the point of being mutually unintelligible, while speakers of Spanish and Portuguese can often understand each other. Languages actually spoken by living people do not exist in neatly divided categories, but tend to transition gradually, crossing borders and classifications. Often the difference between languages and dialects is a matter of changing political and national considerations, rather than any linguistic features. This is why the answer to, "How many languages are there?" can be anywhere between 3,000 and 8,000, depending on who's counting. The other problem is that the farther we move back in time towards the top of the tree, the less evidence we have about the languages there. The current division of major language families represents the limit at which relationships can be established with reasonable certainty, meaning that languages of different families are presumed not to be related on any level. But this may change. 
While many proposals for higher level relationships -- or super families -- are speculative, some have been widely accepted and others are being considered, especially for native languages with small speaker populations that have not been extensively studied. We may never be able to determine how language came about, or whether all human languages did in fact have a common ancestor scattered through the babel of migration. But the next time you hear a foreign language, pay attention. It may not be as foreign as you think.
TedEd_History
Should_you_trust_unanimous_decisions_Derek_Abbott.txt
Imagine a police lineup where ten witnesses are asked to identify a bank robber they glimpsed fleeing the crime scene. If six of them pick out the same person, there's a good chance that's the real culprit, and if all ten make the same choice, you might think the case is rock solid, but you'd be wrong. For most of us, this sounds pretty strange. After all, much of our society relies on majority vote and consensus, whether it's politics, business, or entertainment. So it's natural to think that more consensus is a good thing. And up until a certain point, it usually is. But sometimes, the closer you start to get to total agreement, the less reliable the result becomes. This is called the paradox of unanimity. The key to understanding this apparent paradox is in considering the overall level of uncertainty involved in the type of situation you're dealing with. If we asked witnesses to identify the apple in this lineup, for example, we shouldn't be surprised by a unanimous verdict. But in cases where we have reason to expect some natural variance, we should also expect varied distribution. If you toss a coin one hundred times, you would expect to get heads somewhere around 50% of the time. But if your results started to approach 100% heads, you'd suspect that something was wrong, not with your individual flips, but with the coin itself. Of course, suspect identifications aren't as random as coin tosses, but they're not as clear cut as telling apples from bananas, either. In fact, a 1994 study found that up to 48% of witnesses tend to pick the wrong person out of a lineup, even when many are confident in their choice. Memory based on short glimpses can be unreliable, and we often overestimate our own accuracy. Knowing all this, a unanimous identification starts to seem less like certain guilt, and more like a systemic error, or bias in the lineup. And systemic errors don't just appear in matters of human judgment. From 1993 to 2008, the same female DNA was found in multiple crime scenes around Europe, incriminating an elusive killer dubbed the Phantom of Heilbronn. But the DNA evidence was so consistent precisely because it was wrong. It turned out that the cotton swabs used to collect the DNA samples had all been accidentally contaminated by a woman working in the swab factory. In other cases, systemic errors arise through deliberate fraud, like the presidential referendum held by Saddam Hussein in 2002, which claimed a turnout of 100% of voters with all 100% supposedly voting in favor of another seven-year term. When you look at it this way, the paradox of unanimity isn't actually all that paradoxical. Unanimous agreement is still theoretically ideal, especially in cases when you'd expect very low odds of variability and uncertainty, but in practice, achieving it in situations where perfect agreement is highly unlikely should tell us that there's probably some hidden factor affecting the system. Although we may strive for harmony and consensus, in many situations, error and disagreement should be naturally expected. And if a perfect result seems too good to be true, it probably is.
TedEd_History
How_to_recognize_a_dystopia_Alex_Gendler.txt
Have you ever tried to picture an ideal world? One without war, poverty, or crime? If so, you're not alone. Plato imagined an enlightened republic ruled by philosopher kings, many religions promise bliss in the afterlife, and throughout history, various groups have tried to build paradise on Earth. Thomas More's 1516 book "Utopia" gave this concept a name, Greek for "no place." Though the name suggested impossibility, modern scientific and political progress raised hopes of these dreams finally becoming reality. But time and time again, they instead turned into nightmares of war, famine, and oppression. And as artists began to question utopian thinking, the genre of dystopia, the not good place, was born. One of the earliest dystopian works is Jonathan Swift's "Gulliver's Travels." Throughout his journey, Gulliver encounters fictional societies, some of which at first seem impressive, but turn out to be seriously flawed. On the flying island of Laputa, scientists and social planners pursue extravagant and useless schemes while neglecting the practical needs of the people below. And the Houyhnhnms, who live in perfectly logical harmony, have no tolerance for the imperfections of actual human beings. With his novel, Swift established a blueprint for dystopia, imagining a world where certain trends in contemporary society are taken to extremes, exposing their underlying flaws. And the next few centuries would provide plenty of material. Industrial technology that promised to free laborers imprisoned them in slums and factories instead, while tycoons grew richer than kings. By the late 1800s, many feared where such conditions might lead. H. G. Wells's "The Time Machine" imagined upper classes and workers evolving into separate species, while Jack London's "The Iron Heel" portrayed a tyrannical oligarchy ruling over impoverished masses. The new century brought more exciting and terrifying changes. Medical advances made it possible to transcend biological limits while mass media allowed instant communication between leaders and the public. In Aldous Huxley's "Brave New World", citizens are genetically engineered and conditioned to perform their social roles. While propaganda and drugs keep the society happy, it's clear some crucial human element is lost. But the best known dystopias were not imaginary at all. As Europe suffered unprecedented industrial warfare, new political movements took power. Some promised to erase all social distinctions, while others sought to unite people around a mythical heritage. The results were real-world dystopias where life passed under the watchful eye of the State and death came with ruthless efficiency to any who didn't belong. Many writers of the time didn't just observe these horrors, but lived through them. In his novel "We", Soviet writer Yevgeny Zamyatin described a future where free will and individuality were eliminated. Banned in the U.S.S.R., the book inspired authors like George Orwell who fought on the front lines against both fascism and communism. While his novel "Animal Farm" directly mocked the Soviet regime, the classic "1984" was a broader critique of totalitarianism, media, and language. And in the U.S.A., Sinclair Lewis's "It Can't Happen Here" envisioned how easily democracy gave way to fascism. In the decades after World War II, writers wondered what new technologies like atomic energy, artificial intelligence, and space travel meant for humanity's future.
Contrasting with popular visions of shining progress, dystopian science fiction expanded to films, comics, and games. Robots turned against their creators while TV screens broadcast deadly mass entertainment. Workers toiled in space colonies above an Earth of depleted resources and overpopulated, crime-plagued cities. Yet politics was never far away. Works like "Dr. Strangelove" and "Watchmen" explored the real threat of nuclear war, while "V for Vendetta" and "The Handmaid's Tale" warned how easily our rights could disappear in a crisis. And today's dystopian fiction continues to reflect modern anxieties about inequality, climate change, government power, and global epidemics. So why bother with all this pessimism? Because at their heart, dystopias are cautionary tales, not about some particular government or technology, but the very idea that humanity can be molded into an ideal shape. Think back to the perfect world you imagined. Did you also imagine what it would take to achieve? How would you make people cooperate? And how would you make sure it lasted? Now take another look. Does that world still seem perfect?
TedEd_History
History_vs_Cleopatra_Alex_Gendler.txt
"Order, order. So who do we have here?" "Your Honor, this is Cleopatra, the Egyptian queen whose lurid affairs destroyed two of Rome's finest generals and brought the end of the Republic." "Your Honor, this is Cleopatra, one of the most powerful women in history whose reign brought Egypt nearly 22 years of stability and prosperity." "Uh, why don't we even know what she looked like?" "Most of the art and descriptions came long after her lifetime in the first century BCE, just like most of the things written about her." "So what do we actually know? Cleopatra VII was the last of the Ptolemaic dynasty, a Macedonian Greek family that governed Egypt after its conquest by Alexander the Great. She ruled jointly in Alexandria with her brother- to whom she was also married- until he had her exiled." "But what does all this have to do with Rome?" "Egypt had long been a Roman client state, and Cleopatra's father incurred large debts to the Republic. After being defeated by Julius Caesar in Rome's civil war, the General Pompey sought refuge in Egypt but was executed by Cleopatra's brother instead." "Caesar must have liked that." "Actually, he found the murder unseemly and demanded repayment of Egypt's debt. He could have annexed Egypt, but Cleopatra convinced him to restore her to the throne instead." "We hear she was quite convincing." "And why not? Cleopatra was a fascinating woman. She commanded armies at 21, spoke several languages, and was educated in a city with the world's finest library and some of the greatest scholars of the time." "Hmm." "She kept Caesar lounging in Egypt for months when Rome needed him." "Caesar did more than lounge. He was fascinated by Egypt's culture and knowledge, and he learned much during his time there. When he returned to Rome, he reformed the calendar, commissioned a census, made plans for a public library, and proposed many new infrastructure projects." "Yes, all very ambitious, exactly what got him assassinated." "Don't blame the Queen for Rome's strange politics. Her job was ruling Egypt, and she did it well. She stabilized the economy, managed the vast bureaucracy, and curbed corruption by priests and officials. When drought hit, she opened the granaries to the public and passed a tax amnesty, all while preserving her kingdom's stability and independence with no revolts during the rest of her reign." "So what went wrong?" "After Caesar's death, this foreign Queen couldn't stop meddling in Roman matters." "Actually, it was the Roman factions who came demanding her aid. And of course she had no choice but to support Octavian and Marc Antony in avenging Caesar, if only for the sake of their son." "And again, she provided her particular kind of support to Marc Antony." "Why does that matter? Why doesn't anyone seem to care about Caesar or Antony's countless other affairs? Why do we assume she instigated the relationships? And why are only powerful women defined by their sexuality?" "Order." "Cleopatra and Antony were a disaster. They offended the Republic with their ridiculous celebrations sitting on golden thrones and dressing up as gods until Octavian had all of Rome convinced of their megalomania." "And yet Octavian was the one who attacked Antony, annexed Egypt, and declared himself Emperor. It was the Roman's fear of a woman in power that ended their Republic, not the woman herself." "How ironic." Cleopatra's story survived mainly in the accounts of her enemies in Rome, and later writers filled the gaps with rumors and stereotypes. 
We may never know the full truth of her life and her reign, but we can separate fact from rumor by putting history on trial.
TedEd_History
역사_대_블리디미르_레닌알렉스_젠들러_Alex_Gendler.txt
He was one of the most influential figures of the 20th century, forever changing the course of one of the world's largest countries. But was he a hero who toppled an oppressive tyranny or a villain who replaced it with another? It's time to put Lenin on the stand in History vs. Lenin. "Order, order, hmm. Now, wasn't it your fault that the band broke up?" "Your Honor, this is Vladimir Ilyich Ulyanov, AKA Lenin, the rabble-rouser who helped overthrow the Russian tsar Nicholas II in 1917 and founded the Soviet Union, one of the worst dictatorships of the 20th century." "Ohh." "The tsar was a bloody tyrant under whom the masses toiled in slavery." "This is rubbish. Serfdom had already been abolished in 1861." "And replaced by something worse. The factory bosses treated the people far worse than their former feudal landlords. And unlike the landlords, they were always there. Russian workers toiled for eleven hours a day and were the lowest paid in all of Europe." "But Tsar Nicholas made laws to protect the workers." "He reluctantly did the bare minimum to avert revolution, and even there, he failed. Remember what happened in 1905 after his troops fired on peaceful petitioners?" "Yes, and the tsar ended the rebellion by introducing a constitution and an elected parliament, the Duma." "While retaining absolute power and dissolving them whenever he wanted." "Perhaps there would've been more reforms in due time if radicals, like Lenin, weren't always stirring up trouble." "Your Honor, Lenin had seen his older brother Aleksandr executed by the previous tsar for revolutionary activity, and even after the reforms, Nicholas continued the same mass repression and executions, as well as the unpopular involvement in World War I, that cost Russia so many lives and resources." "Hm, this tsar doesn't sound like such a capital fellow." "Your Honor, maybe Nicholas II did doom himself with bad decisions, but Lenin deserves no credit for this. When the February 1917 uprisings finally forced the tsar to abdicate, Lenin was still exiled in Switzerland." "Hm, so who came to power?" "The Duma formed a provisional government, led by Alexander Kerensky, an incompetent bourgeois failure. He even launched another failed offensive in the war, where Russia had already lost so much, instead of ending it like the people wanted." "It was a constitutional social democratic government, the most progressive of its time. And it could have succeeded eventually if Lenin hadn't returned in April, sent by the Germans to undermine the Russian war effort and instigate riots." "Such slander! The July Days were a spontaneous and justified reaction against the government's failures. And Kerensky showed his true colors when he blamed Lenin and arrested and outlawed his Bolshevik party, forcing him to flee into exile again. Some democracy! It's a good thing the government collapsed under its own incompetence and greed when it tried to stage a military coup, then had to ask the Bolsheviks for help when it backfired. After that, all Lenin had to do was return in October and take charge. The government was peacefully overthrown overnight." "But what the Bolsheviks did after gaining power wasn't very peaceful. How many people did they execute without trial? And was it really necessary to murder the tsar's entire family, even the children?" "Russia was being attacked by foreign imperialists, trying to restore the tsar. Any royal heir that was rescued would be recognized as ruler by foreign governments.
It would've been the end of everything the people had fought so hard to achieve. Besides, Lenin may not have given the order." "But it was not only imperialists that the Bolsheviks killed. What about the purges and executions of other socialist and anarchist parties, their old allies? What about the Tambov Rebellion, where peasants, resisting grain confiscation, were killed with poison gas? Or sending the army to crush the workers in Kronstadt, who were demanding democratic self-management? Was this still fighting for the people?" "Yes! The measures were difficult, but it was a difficult time. The new government needed to secure itself while being attacked from all sides, so that the socialist order could be established." "And what good came of this socialist order? Even after the civil war was won, there were famines, repression and millions executed or sent to die in camps, while Lenin's successor Stalin established a cult of personality and absolute power." "That wasn't the plan. Lenin never cared for personal gains, even his enemies admitted that he fully believed in his cause, living modestly and working tirelessly from his student days until his too early death. He saw how power-hungry Stalin was and tried to warn the party, but it was too late." "And the decades of totalitarianism that followed after?" "You could call it that, but it was Lenin's efforts that changed Russia in a few decades from a backward and undeveloped monarchy full of illiterate peasants to a modern, industrial superpower, with one of the world's best educated populations, unprecedented opportunities for women, and some of the most important scientific advancements of the century. Life may not have been luxurious, but nearly everyone had a roof over their head and food on their plate, which few countries have achieved." "But these advances could still have happened, even without Lenin and the repressive regime he established." "Yes, and I could've been a famous rock and roll singer. But how would I have sounded?" We can never be sure how things could've unfolded if different people were in power or different decisions were made, but to avoid the mistakes of the past, we must always be willing to put historical figures on trial.
TedEd_History
교회이자_모스크인_아야_소피아_성당_켈리_월.txt
They say that if walls could talk, each building would have a story to tell, but few would tell so many fascinating stories in so many different voices as the Hagia Sophia, or holy wisdom. Perched at the crossroads of continents and cultures, it has seen massive changes from the name of the city where it stands, to its own structure and purpose. And today, the elements from each era stand ready to tell their tales to any visitor who will listen. Even before you arrive at the Hagia Sophia, the ancient fortifications hint at the strategic importance of the surrounding city, founded as Byzantium by Greek colonists in 657 BCE, and successively renamed Augusta Antonina, New Rome and Constantinople as it was conquered, reconquered, destroyed and rebuilt by various Greek, Persian and Roman rulers over the following centuries. And it was within these walls that the first Megale Ekklesia, or great church, was built in the fourth century. Though it was soon burned to the ground in riots, it established the location for the region's main religious structure for centuries to come. Near the entrance, the marble stones with reliefs are the last reminders of the second church. Built in 415 CE, it was destroyed during the Nika Riots of 532 when angry crowds at a chariot race nearly overthrew the emperor, Justinian the First. Having barely managed to retain power, he resolved to rebuild the church on a grander scale, and five years later, the edifice you see before you was completed. As you step inside, the stones of the foundation and walls murmur tales from their homelands of Egypt and Syria, while columns taken from the Temple of Artemis recall a more ancient past. Runic inscriptions carved by the Vikings of the emperor's elite guard carry the lore of distant northern lands. But your attention is caught by the grand dome, representing the heavens. Reaching over 50 meters high and over 30 meters in diameter and ringed by windows around its base, the golden dome appears suspended from heaven, light reflecting through its interior. Beneath its grandiose symbolism, the sturdy reinforcing Corinthian columns, brought from Lebanon after the original dome was partially destroyed by an earthquake in 558 CE, quietly remind you of its fragility and the engineering skills such a marvel requires. If a picture is worth a thousand words, the mosaics from the next several centuries have the most to say not only about their Biblical themes, but also the Byzantine emperors who commissioned them, often depicted along with Christ. But beneath their loud and clear voices, one hears the haunting echoes of the damaged and missing mosaics and icons, desecrated and looted during the Latin Occupation in the Fourth Crusade. Within the floor, the tomb inscription of Enrico Dandolo, the Venetian ruler who commanded the campaign, is a stark reminder of those 57 years that Hagia Sophia spent as a Roman Catholic church before returning to its orthodox roots upon the Byzantine Reconquest. But it would not remain a church for long. Weakened by the Crusades, Constantinople fell to the Ottomans in 1453 and would be known as Istanbul thereafter. After allowing his soldiers three days of pillage, Sultan Mehmed the Second entered the building. Though heavily damaged, its grandeur was not lost on the young sultan who immediately rededicated it to Allah, proclaiming that it would be the new imperial mosque.
The four minarets built over the next century are the most obvious sign of this era, serving as architectural supports in addition to their religious purpose. But there are many others. Ornate candle holders relate Suleiman's conquest of Hungary, while giant calligraphy discs hung from the ceiling remind visitors of the first four caliphs who followed Muhammad. Though the building you see today still looks like a mosque, it is now a museum, a decision made in 1935 by Kemal Ataturk, the modernizing first president of Turkey following the Ottoman Empire's collapse. It was this secularization that allowed for removal of the carpets hiding the marble floor decorations and the plaster covering the Christian mosaics. Ongoing restoration work has allowed the multiplicity of voices in Hagia Sophia's long history to be heard again after centuries of silence. But conflict remains. Hidden mosaics cry out from beneath Islamic calligraphy, valuable pieces of history that cannot be uncovered without destroying others. Meanwhile, calls sound from both Muslim and Christian communities to return the building to its former religious purposes. The story of the divine wisdom may be far from over, but one can only hope that the many voices residing there will be able to tell their part for years to come.
TedEd_History
Where_did_Russia_come_from_Alex_Gendler.txt
Where did Russia come from, why is it so big, and what are the differences between it and its neighbors? The answers lie in an epic story of seafaring warriors, nomadic invaders, and the rise and fall of a medieval state known as Kievan Rus. In the first millennium, a large group of tribes spread through the dense woodlands of Eastern Europe. Because they had no writing system, much of what we know about them comes from three main sources: archaeological evidence, accounts from literate scholars of the Roman Empire and the Middle East, and, lastly, an epic history called the Primary Chronicle compiled in the 12th century by a monk named Nestor. What they tell us is that these tribes who shared a common Slavic language and polytheistic religion had by the 7th century split into western, southern and eastern branches, the latter stretching from the Dniester River to the Volga and the Baltic Sea. As Nestor's story goes, after years of subjugation by Vikings from the north, who, by the way, did not wear horned helmets in battle, the region's tribes revolted and drove back the Northmen, but left to their own devices, they turned on each other. Such chaos ensued that, ironically, the tribes reached out to the foreigners they had just expelled, inviting them to return and establish order. The Vikings accepted, sending a prince named Rurik and his two brothers to rule. With Rurik's son, Oleg, expanding his realm into the south, and moving the capital to Kiev, a former outpost of the Khazar Empire, the Kievan Rus was born, "Rus" most likely deriving from an old Norse word for "the men who row." The new princedom had complex relations with its neighbors, alternating between alliance and warfare with the Khazar and Byzantine Empires, as well as neighboring tribes. Religion played an important role in politics, and as the legend goes, in 987, the Rus prince Vladimir I decided it was time to abandon Slavic paganism, and sent emissaries to explore neighboring faiths. Put off by Islam's prohibition on alcohol and Judaism's expulsion from its holy land, the ruler settled on Orthodox Christianity after hearing odd accounts of its ceremonies. With Vladimir's conversion and marriage to the Byzantine emperor's sister, as well as continued trade along the Volga route, the relationship between the two civilizations deepened. Byzantine missionaries created an alphabet for Slavic languages based on a modified Greek script while Rus Viking warriors served as the Byzantine Emperor's elite guard. For several generations, the Kievan Rus flourished from its rich resources and trade. Its noblemen and noblewomen married prominent European rulers, while residents of some cities enjoyed great culture, literacy, and even democratic freedoms uncommon for the time. But nothing lasts forever. Fratricidal disputes over succession began to erode central power as increasingly independent cities ruled by rival princes vied for control. The Fourth Crusade and decline of Constantinople devastated the trade integral to Rus wealth and power, while Teutonic crusaders threatened northern territories. The final blow, however, would come from the east. Consumed by their squabbles, Rus princes paid little attention to the rumors of a mysterious unstoppable horde until 1237, when 35,000 mounted archers led by Batu Khan swept through the Rus cities, sacking Kiev before continuing on to Hungary and Poland. The age of Kievan Rus had come to an end, its people now divided.
In the east, which remained under Mongol rule, a remote trading post, known as Moscow, would grow to challenge the power of the Khans, conquering parts of their fragmenting empire, and, in many ways, succeeding it. As it absorbed other eastern Rus territories, it reclaimed the old name in its Greek form, Ruscia. Meanwhile, the western regions whose leaders had avoided destruction through political maneuvering until the horde withdrew came under the influence of Poland and Lithuania. For the next few centuries, the former lands of Kievan Rus populated by Slavs, ruled by Vikings, taught by Greeks, and split by Mongols would develop differences in society, culture and language that remain to the present day.
TedEd_History
History_vs_Napoleon_Bonaparte_Alex_Gendler.txt
After the French Revolution erupted in 1789, Europe was thrown into chaos. Neighboring countries' monarchs feared they would share the fate of Louis XVI, and attacked the New Republic, while at home, extremism and mistrust between factions led to bloodshed. In the midst of all this conflict, a powerful figure emerged to take charge of France. But did he save the revolution or destroy it? "Order, order, who's the defendant today? I don't see anyone." "Your Honor, this is Napoléon Bonaparte, the tyrant who invaded nearly all of Europe to compensate for his personal stature-based insecurities." "Actually, Napoléon was at least average height for his time. The idea that he was short comes only from British wartime propaganda. And he was no tyrant. He was safeguarding the young Republic from being crushed by the European monarchies." "By overthrowing its government and seizing power himself?" "Your Honor, as a young and successful military officer, Napoléon fully supported the French Revolution, and its ideals of liberty, equality, and fraternity. But the revolutionaries were incapable of real leadership. Robespierre and the Jacobins who first came to power unleashed a reign of terror on the population, with their anti-Catholic extremism and nonstop executions of everyone who disagreed with them. And the Directory that replaced them was an unstable and incompetent oligarchy. They needed a strong leader who could govern wisely and justly." "So, France went through that whole revolution just to end up with another all-powerful ruler?" "Not quite. Napoléon's new powers were derived from the constitution that was approved by a popular vote in the Consulate." "Ha! The constitution was practically dictated at gunpoint in a military coup, and the public only accepted the tyrant because they were tired of constant civil war." "Be that as it may, Napoléon introduced a new constitution and a legal code that kept some of the most important achievements of the revolution intact: freedom of religion, abolition of hereditary privilege, and equality before the law for all men." "All men, indeed. He deprived women of the rights that the revolution had given them and even reinstated slavery in the French colonies. Haiti is still recovering from the consequences centuries later. What kind of equality is that?" "The only kind that could be stably maintained at the time, and still far ahead of France's neighbors." "Speaking of neighbors, what was with all the invasions?" "Great question, Your Honor." "Which invasions are we talking about? It was the neighboring empires who had invaded France trying to restore the monarchy, and prevent the spread of liberty across Europe, twice by the time Napoléon took charge. Having defended France as a soldier and a general in those wars, he knew that the best defense is a good offense." "An offense against the entire continent? Peace was secured by 1802, and other European powers recognized the new French Regime. But Bonaparte couldn't rest unless he had control of the whole continent, and all he knew was fighting. He tried to enforce a European-wide blockade of Britain, invaded any country that didn't comply, and launched more wars to hold onto his gains. And what was the result? Millions dead all over the continent, and the whole international order shattered." "You forgot the other result: the spread of democratic and liberal ideals across Europe.
It was thanks to Napoléon that the continent was reshaped from a chaotic patchwork of fragmented feudal and religious territories into efficient, modern, and secular nation-states where the people held more power and rights than ever before." "Should we also thank him for the rise of nationalism and the massive increase in army sizes? You can see how well that turned out a century later." "So what would European history have been like if it weren't for Napoléon?" "Unimaginably better/worse." Napoléon's seemingly unstoppable momentum would die in the Russian winter snows, along with most of his army. But even after being deposed and exiled, he refused to give up, escaping from his prison and launching a bold attempt at restoring his empire before being defeated for the second and final time. Bonaparte was a ruler full of contradictions, defending a popular revolution by imposing absolute dictatorship, and spreading liberal ideals through imperial wars, and though he never achieved his dream of conquering Europe, he undoubtedly left his mark on it, for better or for worse.
TedEd_History
역사_대_크리스토퍼_콜롬버스_알렉스_젠들러.txt
Many people in the United States and Latin America have grown up celebrating the anniversary of Christopher Columbus's voyage, but was he an intrepid explorer who brought two worlds together or a ruthless exploiter who brought colonialism and slavery? And did he even discover America at all? It's time to put Columbus on the stand in History vs. Christopher Columbus. "Order, order in the court. Wait, am I even supposed to be at work today?" Cough "Yes, Your Honor. From 1792, Columbus Day was celebrated in many parts of the United States on October 12th, the actual anniversary date. But although it was declared an official holiday in 1934, individual states aren't required to observe it. Only 23 states close public services, and more states are moving away from it completely." Cough "What a pity. In the 70s, we even moved it to the second Monday in October so people could get a nice three-day weekend, but I guess you folks just hate celebrations." "Uh, what are we celebrating again?" "Come on, Your Honor, we all learned it in school. Christopher Columbus convinced the King of Spain to send him on a mission to find a better trade route to India, not by going East over land but sailing West around the globe. Everyone said it was crazy because they still thought the world was flat, but he knew better. And when in 1492 he sailed the ocean blue, he found something better than India: a whole new continent." "What rubbish. First of all, educated people had known the world was round since Aristotle. Secondly, Columbus didn't discover anything. There were already people living here for millennia. And he wasn't even the first European to visit. The Norse had settled Newfoundland almost 500 years before." "You don't say, so how come we're not all wearing those cow helmets?" "Actually, they didn't really wear those either." Cough "Who cares what some Vikings did way back when? Those settlements didn't last, but Columbus's did. And the news he brought back to Europe spread far and wide, inspiring all the explorers and settlers who came after. Without him, none of us would be here today." "And because of him, millions of Native Americans aren't here today. Do you know what Columbus did in the colonies he founded? He took the very first natives he met prisoner and wrote in his journal about how easily he could conquer and enslave all of them." "Oh, come on. Everyone was fighting each other back then. Didn't the natives even tell Columbus about other tribes raiding and taking captives?" "Yes, but tribal warfare was sporadic and limited. It certainly didn't wipe out 90% of the population." "Hmm. Why is celebrating this Columbus so important to you, anyway?" "Your Honor, Columbus's voyage was an inspiration to struggling people all across Europe, symbolizing freedom and new beginnings. And his discovery gave our grandparents and great-grandparents the chance to come here and build better lives for their children. Don't we deserve a hero to remind everyone that our country was built on the struggles of immigrants?" "And what about the struggles of Native Americans who were nearly wiped out and forced into reservations and whose descendants still suffer from poverty and discrimination? How can you make a hero out of a man who caused so much suffering?" "That's history. You can't judge a guy in the 15th century by modern standards. People back then even thought spreading Christianity and civilization across the world was a moral duty." "Actually, he was pretty bad, even by old standards.
While governing Hispaniola, he tortured and mutilated natives who didn't bring him enough gold and sold girls as young as nine into sexual slavery, and he was brutal even to the other colonists he ruled, to the point that he was removed from power and thrown in jail. When the missionary, Bartolomé de las Casas, visited the island, he wrote, 'From 1494 to 1508, over 3,000,000 people had perished from war, slavery and the mines. Who in future generations will believe this?'" "Well, I'm not sure I believe those numbers." "Say, aren't there other ways the holiday is celebrated?" "In some Latin American countries, they celebrate the same date under different names, such as Día de la Raza. In these places, it's more a celebration of the native and mixed cultures that survived through the colonial period. Some places in the U.S. have also renamed the holiday as Native American Day or Indigenous Peoples' Day and changed the celebrations accordingly." "So, why not just change the name if it's such a problem?" "Because it's tradition. Ordinary people need their heroes and their founding myths. Can't we just keep celebrating the way we've been doing for a century, without having to delve into all this serious research? It's not like anyone is actually celebrating genocide." "Traditions change, and the way we choose to keep them alive says a lot about our values." "Well, it looks like giving tired judges a day off isn't one of those values, anyway." Traditions and holidays are important to all cultures, but a hero in one era may become a villain in the next as our historical knowledge expands and our values evolve. And deciding what these traditions should mean today is a major part of putting history on trial.
TedEd_History
미국_대법관은_어떻게_임명될까_피터_파콘Peter_Paccone.txt
There's a job out there with a great deal of power, pay, prestige, and near-perfect job security. And there's only one way to be hired: get appointed to the US Supreme Court. If you want to become a justice on the Supreme Court, the highest federal court in the United States, three things have to happen. You have to be nominated by the president of the United States, your nomination needs to be approved by the Senate, and finally, the president must formally appoint you to the court. Because the Constitution doesn't specify any qualifications (in other words, there's no age, education, profession, or even native-born citizenship requirement), a president can nominate any individual to serve. So far, six justices have been foreign-born, at least one never graduated from high school, and another was only 32 years old when he joined the bench. Most presidents nominate individuals who broadly share their ideological views, so a president with a liberal ideology will tend to appoint liberals to the court. Of course, a justice's leanings are not always so predictable. For example, when President Eisenhower, a Republican, nominated Earl Warren for Chief Justice, Eisenhower expected him to make conservative decisions. Instead, Warren's judgments have gone down as some of the most liberal in the Court's history. Eisenhower later remarked on that appointment as "the biggest damned-fool mistake" he ever made. Many other factors come up for consideration, as well, including experience, personal loyalties, ethnicity, and gender. The candidates are then thoroughly vetted, down to their tax records and payments to domestic help. Once the president interviews the candidate and makes a formal nomination announcement, the Senate leadership traditionally turns the nomination over to hearings by the Senate Judiciary Committee. Depending on the contentiousness of the choice, that can stretch over many days. Since the Nixon administration, these hearings have averaged 60 days. The nominee is interviewed about their law record, if applicable, and where they stand on key issues to discern how they might vote. And especially in more recent history, the committee tries to unearth any dark secrets or past indiscretions. The Judiciary Committee votes to send the nomination to the full Senate with a positive or negative recommendation, often reflective of political leanings, or no recommendation at all. Most rejections have happened when the Senate majority has been of a different political party than the president's. When the Senate does approve, it's by a simple majority vote, with ties broken by the vice president. With the Senate's consent, the president issues a written appointment, allowing the nominee to complete the final steps to take the constitutional and judicial oaths. In doing so, they solemnly swear to administer justice without respect to persons and do equal right to the poor and the rich and faithfully and impartially discharge and perform all the duties incumbent upon a US Supreme Court justice. This job is for life, barring resignation, retirement, or removal from the court by impeachment. And of the 112 justices who have held the position, not one has yet been removed from office as a result of an impeachment. One of their roles is to protect the fundamental rights of all Americans, even as different parties take power. With the tremendous impact of this responsibility, it's no wonder that a US Supreme Court justice is expected to be, in the words of Irving R.
Kaufman, "a paragon of virtue, an intellectual Titan, and an administrative wizard." Of course, not every member of the Court turns out to be an exemplar of justice. Each leaves behind a legacy of decisions and opinions to be debated and dissected by the ultimate judges, time and history.
TedEd_History
역사에서_잊혀지면_안되는_파라오_케이트_나레브_Kate_Narev.txt
Three and a half thousand years ago in Egypt, a noble pharaoh was the victim of a violent attack. But the attack was not physical. This royal had been dead for 20 years. The attack was historical, an act of damnatio memoriae, the damnation of memory. Somebody smashed the pharaoh's statues, took a chisel and attempted to erase the pharaoh's name and image from history. Who was this pharaoh, and what was behind the attack? Here's the key: the pharaoh Hatshepsut was a woman. In the normal course of things, she should never have been pharaoh. Although it was legal for a woman to be a monarch, it disturbed some essential Egyptian beliefs. Firstly, the pharaoh was known as the living embodiment of the male god Horus. Secondly, disturbance to the tradition of rule by men was a serious challenge to Maat, a word for "truth," expressing a belief in order and justice, vital to the Egyptians. Hatshepsut had perhaps tried to adapt to this belief in the link between order and patriarchy through her titles. She took the name Maatkare, and sometimes referred to herself as Hatshepsu, with a masculine word ending. But apparently, these efforts didn't convince everyone, and perhaps someone erased Hatshepsut's image so that the world would forget the disturbance to Maat, and Egypt could be balanced again. Hatshepsut, moreover, was not the legitimate heir to the throne, but a regent, a kind of stand-in co-monarch. The Egyptian kingship traditionally passed from father to son. It passed from Thutmose I to his son Thutmose II, Hatshepsut's husband. It should have passed from Thutmose II directly to his son Thutmose III, but Thutmose III was a little boy when his father died. Hatshepsut, the dead pharaoh's chief wife and widow, stepped in to help as her stepson's regent but ended up ruling beside him as a fully fledged pharaoh. Perhaps Thutmose III was angry about this. Perhaps he was the one who erased her images. It's also possible that someone wanted to dishonor Hatshepsut because she was a bad pharaoh. But the evidence suggests she was actually pretty good. She competently fulfilled the traditional roles of the office. She was a great builder. Her mortuary temple, Djeser-Djeseru, was an architectural phenomenon at the time and is still admired today. She enhanced the economy of Egypt, conducting a very successful trade mission to the distant land of Punt. She had strong religious connections. She even claimed to be the daughter of the state god, Amun. And she had a successful military career, with a Nubian campaign, and claims she fought alongside her soldiers in battle. Of course, we have to be careful when we assess the success of Hatshepsut's career, since most of the evidence was written by Hatshepsut herself. She tells her own story in pictures and writing on the walls of her mortuary temple and the red chapel she built for Amun. So who committed the crimes against Hatshepsut's memory? The most popular suspect is her stepson, nephew, and co-ruler, Thutmose III. Did he do it out of anger because she stole his throne? This is unlikely since the damage wasn't done until 20 years after Hatshepsut died. That's a long time to hang onto anger and then act in a rage. Maybe Thutmose III did it to make his own reign look stronger. But it is most likely that he or someone else erased the images so that people would forget that a woman ever sat on Egypt's throne. This gender anomaly was simply too much of a threat to Maat and had to be obliterated from history.
Happily, the ancient censors were not quite thorough enough. Enough evidence survived for us to piece together what happened, so the story of this unique, powerful woman can now be told.
TedEd_History
징기스칸_대_역사_알렉스_젠들러.txt
He was one of the most fearsome warlords who ever lived, waging an unstoppable conquest across the Eurasian continent. But was Genghis Khan a vicious barbarian or a unifier who paved the way for the modern world? We'll see in "History vs. Genghis Khan." "Order, order. Now who's the defendant today? Khan!" "I see Your Honor is familiar with Genghis Khan, the 13th century warlord whose military campaigns killed millions and left nothing but destruction in their wake." "Objection. First of all, it's pronounced Chinggis Khan." "Really?" "In Mongolia, yes. Regardless, he was one of the greatest leaders in human history. Born Temüjin, he was left fatherless and destitute as a child but went on to overcome constant strife to unite warring Mongol clans and forge the greatest empire the world had seen, eventually stretching from the Pacific to Europe's heartland." "And what was so great about invasion and slaughter? Northern China lost 2/3 of its population." "The Jin Dynasty had long harassed the northern tribes, paying them off to fight each other and periodically attacking them. Genghis Khan wasn't about to suffer the same fate as the last Khan who tried to unite the Mongols, and the demographic change may reflect poor census keeping, not to mention that many peasants were brought into the Khan's army." "You can pick apart numbers all you want, but they wiped out entire cities, along with their inhabitants." "The Khan preferred enemies to surrender and pay tribute, but he firmly believed in loyalty and diplomatic law. The cities that were massacred were ones that rebelled after surrendering, or killed his ambassadors. His was a strict understanding of justice." "Multiple accounts show his army's brutality going beyond justice: ripping unborn children from mothers' wombs, using prisoners as human shields, or moat fillers to support siege engines, taking all women from conquered towns--" "Enough! How barbaric!" "Is that really so much worse than other medieval armies?" "That doesn't excuse Genghis Khan's atrocities." "But it does make Genghis Khan unexceptional for his time rather than some bloodthirsty savage. In fact, after his unification of the tribes abolished bride kidnapping, women in the Mongol ranks had it better than most. They controlled domestic affairs, could divorce their husbands, and were trusted advisors. Temüjin remained with his first bride all his life, even raising her possibly illegitimate son as his own." "Regardless, Genghis Khan's legacy was a disaster: up to 40 million killed across Eurasia during his descendants' conquests. 10% of the world population. That's not even counting casualties from the Black Plague brought to Europe by the Golden Horde's Siege of Kaffa." "Surely that wasn't intentional." "Actually, when they saw their own troops dying of the Plague, they catapulted infected bodies over the city walls." "Blech." "The accounts you're referencing were written over a hundred years after the fact. How reliable do you think they are? Plus, the survivors reaped the benefits of the empire Genghis Khan founded." "Benefits?" "The Mongol Empire practiced religious tolerance among all subjects, they treated their soldiers well, promoted based on merit rather than birth, established a vast postal system, and enforced universal rule of law, not to mention their contribution to culture." "You mean like Hulagu Khan's annihilation of Baghdad, the era's cultural capital? Libraries, hospitals and palaces burned, irrigation canals buried?"
"Baghdad was unfortunate, but its Kalif refused to surrender, and Hulagu was later punished by Berke Khan for the wanton destruction. It wasn't Mongol policy to destroy culture. Usually they saved doctors, scholars and artisans from conquered places, and transferred them throughout their realm, spreading knowledge across the world." "What about the devastation of Kievan Rus, leaving its people in the Dark Ages even as the Renaissance spread across Western Europe?" "Western Europe was hardly peaceful at the time. The stability of Mongol rule made the Silk Road flourish once more, allowing trade and cultural exchange between East and West, and its legacy forged Russia and China from warring princedoms into unified states. In fact, long after the Empire, Genghis Khan's descendants could be found among the ruling nobility all over Eurasia." "Not surprising that a tyrant would inspire further tyrants." "Careful what you call him. You may be related." "What?" "16 million men today are descended from Genghis Khan. That's one in ever 200." For every great conqueror, there are millions of conquered. Whose stories will survive? And can a leader's historical or cultural signifigance outweigh the deaths they caused along the way? These are the questions that arise when we put history on trial.
TedEd_History
How_misused_modifiers_can_hurt_your_writing_Emma_Bryce.txt
This just in: "Thief robs town with world's largest chocolate bunny." Wait, so are we talking about this, or this? That's a classic case of a misplaced modifier, a common grammatical mistake that can dramatically change the meaning of a sentence. And lest you think this is a bit far-fetched, confusing headlines like this appear all the time. Modifiers are words, phrases, and clauses that add information about other parts of a sentence, which is usually helpful. But when modifiers aren't linked clearly enough to the words they're actually referring to, they can create unintentional ambiguity. That happens because the modifying words, in this case, "with world's largest chocolate bunny," modify the wrong thing, the robber's actions instead of the town. To correct this particular sentence, we simply rephrase to make it clearer what the modifying phrase is talking about. "Town with world's largest chocolate bunny robbed by thief." Now, at least it's clear that the thief wasn't armed with a giant chocolate animal. Sometimes, modifying words, phrases, or clauses don't appear to be modifying anything at all. That's called a dangling modifier. "Having robbed the bank in record time, it was possible to make off with the town's chocolate rabbit as well." The modifying phrase in this sentence seems unrelated to anything else, and so we're clueless about who the chocolate-loving criminal could possibly be. Giving the modifier something to modify will solve the problem. Then there's another group called the squinting modifiers because they're stuck between two things and could feasibly refer to either. Often, these modifiers are adverbs, like the one in this sentence: "Robbers who steal chocolate bunnies rapidly attract the outrage of onlookers." "Rapidly" is the modifier, here, but what's not clear is whether it's referring to the speed of the chocolate thievery, or how quickly it alerts the furious onlookers. To clarify, we can either put the modifier closer to its intended phrase, which works in some cases, or we can entirely reword the sentence so that the modifier no longer squints, but clearly applies to only one part. "Chocolate bunny-thieving robbers rapidly attract the outrage of onlookers." Justice will eventually come to the chocolate thief, but in the meantime, our task is to avoid verbal ambiguity by making it clear which parts of the sentences modifiers belong to. That way, we can at least maintain grammatical law and order.
TedEd_History
흑사병의_과거_현재_그리고_미래.txt
Imagine if half the people in your neighborhood, your city, or even your whole country were wiped out. It might sound like something out of an apocalyptic horror film, but it actually happened in the 14th century during a disease outbreak known as the Black Death. Spreading from China through Asia, the Middle East, Africa, and Europe, the devastating epidemic destroyed as much as 1/5 of the world's population, killing nearly 50% of Europeans in just four years. One of the most fascinating and puzzling things about the Black Death is that the illness itself was not a new phenomenon but one that has affected humans for centuries. DNA analysis of bone and tooth samples from this period, as well as an earlier epidemic known as the Plague of Justinian in 541 CE, has revealed that both were caused by Yersinia pestis, the same bacterium that causes bubonic plague today. What this means is that the same disease caused by the same pathogen can behave and spread very differently throughout history. Even before the use of antibiotics, the deadliest outbreaks in modern times, such as the ones that occurred in early 20th century India, killed no more than 3% of the population. Modern instances of plague also tend to remain localized, or travel slowly, as they are spread by rodent fleas. But the medieval Black Death, which spread like wildfire, was most likely communicated directly from one person to another. And because genetic comparisons of ancient to modern strains of Yersinia pestis have not revealed any significant functional genetic differences, the key to why the earlier outbreak was so much deadlier must lie not in the parasite but in the host. For about 300 years during the High Middle Ages, a warmer climate and agricultural improvements had led to explosive population growth throughout Europe. But with so many new mouths to feed, the end of this warm period spelled disaster. High fertility rates, combined with reduced harvests, meant the land could no longer support its population, while the abundant supply of labor kept wages low. As a result, most Europeans in the early 14th century experienced a steady decline in living standards, marked by famine, poverty and poor health, leaving them vulnerable to infection. And indeed, the skeletal remains of Black Death victims found in London show telltale signs of malnutrition and prior illness. The destruction caused by the Black Death changed humanity in two important ways. On a societal level, the rapid loss of population led to important changes in Europe's economic conditions. With more food to go around, as well as more land and better pay for the surviving farmers and workers, people began to eat better and live longer, as studies of London cemeteries have shown. Higher living standards also brought an increase in social mobility, weakening feudalism and eventually leading to political reforms. But the plague also had an important biological impact. The sudden death of so many of the most frail and vulnerable people left behind a population with a significantly different gene pool, including genes that may have helped survivors resist the disease. And because such mutations often confer immunities to multiple pathogens that work in similar ways, research to discover the genetic consequences of the Black Death has the potential to be hugely beneficial. Today, the threat of an epidemic on the scale of the Black Death has been largely eliminated thanks to antibiotics.
But the bubonic plague continues to kill a few thousand people worldwide every year, and the recent emergence of a drug-resistant strain threatens the return of darker times. Learning more about the causes and effects of the Black Death is important, not just for understanding how our world has been shaped by the past. It may also help save us from a similar nightmare in the future.
TedEd_History
가부키Kabuki_서민들의_연극_예술.txt
Many elements of traditional Japanese culture, such as cuisine and martial arts, are well-known throughout the world. Kabuki, a form of classical theater performance, may not be as well understood in the West but has evolved over 400 years to still maintain influence and popularity to this day. The word Kabuki is derived from the Japanese verb kabuku, meaning out of the ordinary or bizarre. Its history began in early 17th century Kyoto, where a shrine maiden named Izumo no Okuni would use the city's dry Kamo Riverbed as a stage to perform unusual dances for passersby, who found her daring parodies of Buddhist prayers both entertaining and mesmerizing. Soon other troupes began performing in the same style, and Kabuki made history as Japan's first dramatic performance form catering to the common people. By relying on makeup, or keshou, and facial expressions instead of masks and focusing on historical events and everyday life rather than folk tales, Kabuki set itself apart from the upper-class dance theater form known as Noh and provided a unique commentary on society during the Edo period. At first, the dance was practiced only by females and commonly referred to as Onna-Kabuki. It soon evolved into an ensemble performance and became a regular attraction at tea houses, drawing audiences from all social classes. At this point, Onna-Kabuki was often risqué, as geishas performed not only to show off their singing and dancing abilities but also to advertise their bodies to potential clients. A ban by the conservative Tokugawa shogunate in 1629 led to the emergence of Wakashu-Kabuki with young boys as actors. But when this was also banned for similar reasons, there was a transition to Yaro-Kabuki, performed by men, necessitating elaborate costumes and makeup for those playing female roles, or onnagata. Attempts by the government to control Kabuki didn't end with bans on the gender or age of performers. The Tokugawa military group, or Bakufu, was fueled by Confucian ideals and often enacted sanctions on costume fabrics, stage weaponry, and the subject matter of the plot. At the same time, Kabuki became closely associated with and influenced by Bunraku, an elaborate form of puppet theater. Due to these influences, the once spontaneous, one-act dance evolved into a structured, five-act play often based on the tenets of Confucian philosophy. Before 1868, when the Tokugawa shogunate fell and Emperor Meiji was restored to power, Japan had practiced isolation from other countries, or Sakoku. And thus, the development of Kabuki had mostly been shaped by domestic influences. But even before this period, European artists, such as Claude Monet, had become interested in and inspired by Japanese art, such as woodblock prints, as well as live performance. After 1868, others such as Vincent van Gogh and composer Claude Debussy began to incorporate Kabuki influences in their work, while Kabuki itself underwent much change and experimentation to adapt to the new modern era. Like other traditional art forms, Kabuki suffered in popularity in the wake of World War II. But innovation by artists such as director Tetsuji Takechi led to a resurgence shortly after. Indeed, Kabuki was even considered a popular form of entertainment amongst American troops stationed in Japan despite initial U.S. censorship of Japanese traditions. Today, Kabuki still lives on as an integral part of Japan's rich cultural heritage, extending its influence beyond the stage to television, film, and anime.
The art form pioneered by Okuni continues to delight audiences with the actors' elaborate makeup, extravagant and delicately embroidered costumes, and the unmistakable melodrama of the stories told on stage.
TedEd_History
How_to_use_rhetoric_to_get_what_you_want_Camille_A_Langston.txt
How do you get what you want using just your words? Aristotle set out to answer exactly that question over 2,000 years ago with the Treatise on Rhetoric. Rhetoric, according to Aristotle, is the art of seeing the available means of persuasion. And today we apply it to any form of communication. Aristotle focused on oration, though, and he described three types of persuasive speech. Forensic, or judicial, rhetoric establishes facts and judgments about the past, similar to detectives at a crime scene. Epideictic, or demonstrative, rhetoric makes a proclamation about the present situation, as in wedding speeches. But the way to accomplish change is through deliberative rhetoric, or symbouleutikon. Rather than the past or the present, deliberative rhetoric focuses on the future. It's the rhetoric of politicians debating a new law by imagining what effect it might have, like when Ronald Reagan warned that the introduction of Medicare would lead to a socialist future spent telling our children and our children's children what it once was like in America when men were free. But it's also the rhetoric of activists urging change, such as Martin Luther King Jr.'s dream that his children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character. In both cases, the speakers present their audience with a possible future and try to enlist their help in avoiding or achieving it. But what makes for good deliberative rhetoric, besides the future tense? According to Aristotle, there are three persuasive appeals: ethos, logos, and pathos. Ethos is how you convince an audience of your credibility. Winston Churchill began his 1941 address to the U.S. Congress by declaring, "I have been in full harmony all my life with the tides which have flowed on both sides of the Atlantic against privilege and monopoly," thus highlighting his virtue as someone committed to democracy. Much earlier, in his defense of the poet Archias, Roman consul Cicero appealed to his own practical wisdom and expertise as a politician: "Drawn from my study of the liberal sciences and from that careful training to which I admit that at no part of my life I have ever been disinclined." And finally, you can demonstrate disinterest, or that you're not motivated by personal gain. Logos is the use of logic and reason. This method can employ rhetorical devices such as analogies, examples, and citations of research or statistics. But it's not just facts and figures. It's also the structure and content of the speech itself. The point is to use factual knowledge to convince the audience, as in Sojourner Truth's argument for women's rights: "I have as much muscle as any man and can do as much work as any man. I have plowed and reaped and husked and chopped and mowed and can any man do more than that?" Unfortunately, speakers can also manipulate people with false information that the audience thinks is true, such as the debunked but still widely believed claim that vaccines cause autism. And finally, pathos appeals to emotion, and in our age of mass media, it's often the most effective mode. Pathos is neither inherently good nor bad, but it may be irrational and unpredictable. It can just as easily rally people for peace as incite them to war. Most advertising, from beauty products that promise to relieve our physical insecurities to cars that make us feel powerful, relies on pathos.
Aristotle's rhetorical appeals still remain powerful tools today, but deciding which of them to use is a matter of knowing your audience and purpose, as well as the right place and time. And perhaps just as important is being able to notice when these same methods of persuasion are being used on you.
TedEd_History
History_through_the_eyes_of_the_potato_Leo_BearMcGuinness.txt
Baked or fried, boiled or roasted, as chips or fries. At some point in your life, you've probably eaten a potato. Delicious, for sure, but the fact is potatoes have played a much more significant role in our history than just that of the dietary staple we have come to know and love today. Without the potato, our modern civilization might not exist at all. 8,000 years ago in South America, high atop the Andes, ancient Peruvians were the first to cultivate the potato. Containing high levels of proteins and carbohydrates, as well as essential fats, vitamins and minerals, potatoes were the perfect food source to fuel a large Incan working class as they built and farmed their terraced fields, mined the Andes, and created the sophisticated civilization of the great Incan Empire. But considering how vital they were to the Incan people, when Spanish sailors returning from the Andes first brought potatoes to Europe, the spuds were duds. Europeans simply didn't want to eat what they considered dull and tasteless oddities from a strange new land, too closely related to the deadly nightshade plant belladonna for comfort. So instead of consuming them, they used potatoes as decorative garden plants. More than 200 years would pass before the potato caught on as a major food source throughout Europe, though even then, it was predominantly eaten by the lower classes. However, beginning around 1750, and thanks at least in part to the wide availability of inexpensive and nutritious potatoes, European peasants with greater food security no longer found themselves at the mercy of the regularly occurring grain famines of the time, and so their populations steadily grew. As a result, the British, Dutch and German Empires rose on the backs of the growing groups of farmers, laborers, and soldiers, thus lifting the West to its place of world dominion. However, not all European countries sprouted empires. After the Irish adopted the potato, their population dramatically increased, as did their dependence on the tuber as a major food staple. But then disaster struck. From 1845 to 1852, potato blight disease ravaged the majority of Ireland's potato crop, leading to the Irish Potato Famine, one of the deadliest famines in world history. Over a million Irish citizens starved to death, and 2 million more left their homes behind. But of course, this wasn't the end for the potato. The crop eventually recovered, and Europe's population, especially the working classes, continued to increase. Aided by the influx of Irish migrants, Europe now had a large, sustainable, and well-fed population who were capable of manning the emerging factories that would bring about our modern world via the Industrial Revolution. So it's almost impossible to imagine a world without the potato. Would the Industrial Revolution ever have happened? Would World War II have been lost by the Allies without this easy-to-grow crop that fed the Allied troops? Would it even have started? When you think about it like this, many major milestones in world history can all be at least partially attributed to the simple spud from the Peruvian hilltops.
TedEd_History
A_brief_history_of_goths_Dan_Adams.txt
What do fans of atmospheric post-punk music have in common with ancient barbarians? Not much. So why are both known as goths? Is it a weird coincidence or a deeper connection stretching across the centuries? The story begins in Ancient Rome. As the Roman Empire expanded, it faced raids and invasions from the semi-nomadic populations along its borders. Among the most powerful were a Germanic people known as Goths who were composed of two tribal groups, the Visigoths and Ostrogoths. While some of the Germanic tribes remained Rome's enemies, the Empire incorporated others into the imperial army. As the Roman Empire split in two, these tribal armies played larger roles in its defense and internal power struggles. In the 5th century, a mercenary revolt led by a soldier named Odoacer captured Rome and deposed the Western Emperor. Odoacer and his Ostrogoth successor Theoderic technically remained under the Eastern Emperor's authority and maintained Roman traditions. But the Western Empire would never be united again. Its dominions fragmented into kingdoms ruled by Goths and other Germanic tribes who assimilated into local cultures, though many of their names still mark the map. This was the end of the Classical Period and the beginning of what many call the Dark Ages. Although Roman culture was never fully lost, its influence declined and new art styles arose focused on religious symbolism and allegory rather than proportion and realism. This shift extended to architecture with the construction of the Abbey of Saint Denis in France in 1137. Pointed arches, flying buttresses, and large windows made the structure more skeletal and ornate. That emphasized its open, luminous interior rather than the sturdy walls and columns of Classical buildings. Over the next few centuries, this became a model for cathedrals throughout Europe. But fashions change. With the Italian Renaissance's renewed admiration for Ancient Greece and Rome, the more recent style began to seem crude and inferior in comparison. Writing in his 1550 book, "Lives of the Artists," Giorgio Vasari was the first to describe it as Gothic, a derogatory reference to the Barbarians thought to have destroyed Classical civilization. The name stuck, and soon came to describe the Medieval period overall, with its associations of darkness, superstition, and simplicity. But time marched on, as did what was considered fashionable. In the 1700s, a period called the Enlightenment came about, which valued scientific reason above all else. Reacting against that, Romantic authors like Goethe and Byron sought idealized visions of a past of natural landscapes and mysterious spiritual forces. Here, the word Gothic was repurposed again to describe a literary genre that emerged as a darker strain of Romanticism. The term was first applied by Horace Walpole to his own 1764 novel, "The Castle of Otranto," as a reference to the plot and general atmosphere. Many of the novel's elements became genre staples inspiring classics and the countless movies they spawned. The gothic label belonged to literature and film until the 1970s when a new musical scene emerged. Taking cues from artists like The Doors and The Velvet Underground, British post-punk groups, like Joy Division, Bauhaus, and The Cure, combined gloomy lyrics and punk dissonance with imagery inspired by the Victorian era, classic horror, and androgynous glam fashion.
By the early 1980s, similar bands were consistently described as Gothic rock by the music press, and the style's popularity brought it out of dimly lit clubs to major labels and MTV. And today, despite occasional negative media attention and stereotypes, Gothic music and fashion continue as a strong underground phenomenon. They've also branched into sub-genres, such as cybergoth, gothabilly, gothic metal, and even steampunk. The history of the word gothic is embedded in thousands of years' worth of countercultural movements, from invading outsiders becoming kings to towering spires replacing solid columns to artists finding beauty in darkness. Each step has seen a revolution of sorts and a tendency for civilization to reach into its past to reshape its present.
TedEd_History
Why_wasnt_the_Bill_of_Rights_originally_in_the_US_Constitution_James_Coll.txt
Take a moment to think about the US Constitution. What's the first thing that comes to mind? Freedom of speech? Protection from illegal searches? The right to keep and bear arms? These passages are cited so often that we can hardly imagine the document without them, but that's exactly what the writers of the Constitution did. The list of individual freedoms known as the Bill of Rights was not in the original text and wasn't added for another three years. So does this mean the founders didn't consider them? The answer goes back to the very origins of the Constitution itself. Even prior to the first shots of the American Revolution, the Thirteen Colonies worked together through a provisional government called the Continental Congress. During the war in 1781, the Articles of Confederation were ratified as the first truly national government. But establishing a new nation would prove easier than running it. Congress had no power to make the states comply with its laws. When the national government proved unable to raise funds, enforce foreign treaties, or suppress rebellions, it was clear reform was needed. So in May 1787, all the states but Rhode Island sent delegates to Philadelphia for a constitutional convention. A majority of these delegates favored introducing a new national constitution to create a stronger federal government. Thanks to compromises on issues like state representation, taxation power, and how to elect the president, their proposal gradually gained support. But the final text drafted in September still had to be approved by conventions held in the states. So over the next few months, ratification would be debated across the young nation. Among those who championed the new document were leading statesmen Alexander Hamilton, James Madison, and John Jay. Together, they laid out eloquent philosophical arguments for their positions in a series of 85 essays now known as the Federalist Papers. But others felt the Constitution was overreaching and that more centralized authority would return the states to the sort of tyranny they had just escaped. These Anti-Federalists were especially worried by the text's apparent lack of protections for individual liberties. As the conventions proceeded, many of these critics shifted from opposing the Constitution entirely to insisting on adding an explicit declaration of rights. So what was the Federalists' problem with this idea? While their opponents accused them of despotism, wanting to maintain absolute power in the central government, their real motives were mostly practical. Changing the Constitution when it had already been ratified by some states could complicate the entire process. More importantly, Madison felt that people's rights were already guaranteed through the democratic process, while adding extra provisions risked misinterpretation. And some feared that creating an explicit list of things the government can't do would imply that it can do everything else. After the first five states ratified the Constitution quickly, the debate grew more intense. Massachusetts and several other states would only ratify if they could propose their own amendments for consideration. Leading Federalists recognized the need to compromise and promised to give them due regard. Once ratification by nine states finally brought the Constitution into legal force, they made good on their promise.
During a meeting of the first United States Congress, representative James Madison stood on the House floor to propose the very amendments he had previously believed to be unnecessary. After much debate and revision, first in the Congress, and then in the states, ten amendments were ratified on December 15, 1791, over three years after the US Constitution had become law. Today, every sentence, word, and punctuation mark in the Bill of Rights is still considered fundamental to the freedoms Americans enjoy, even though the original framers left them out.
TedEd_History
아테네에서_민주주의의_진실한_의미는_무엇인가_Melissa_Schwartzberg.txt
Hey, congratulations! You've just won the lottery, only the prize isn't cash or a luxury cruise. It's a position in your country's national legislature. And you aren't the only lucky winner. All of your fellow lawmakers were chosen in the same way. This might strike you as a strange way to run a government, let alone a democracy. Elections are the epitome of democracy, right? Well, the ancient Athenians who coined the word had another view. In fact, elections only played a small role in Athenian democracy, with most offices filled by random lottery from a pool of citizen volunteers. Unlike the representative democracies common today, where voters elect leaders to make laws and decisions on their behalf, 5th Century BC Athens was a direct democracy that encouraged wide participation through the principle of ho boulomenos, or anyone who wishes. This meant that any of its approximately 30,000 eligible citizens could attend the ecclesia, a general assembly meeting several times a month. In principle, any of the 6,000 or so who showed up at each session had the right to address their fellow citizens, propose a law, or bring a public lawsuit. Of course, a crowd of 6,000 people trying to speak at the same time would not have made for effective government. So the Athenian system also relied on a 500-member governing council called the Boule to set the agenda and evaluate proposals, in addition to hundreds of jurors and magistrates to handle legal matters. Rather than being elected or appointed, the people in these positions were chosen by lot. This process of randomized selection is known as sortition. The only positions filled by elections were those recognized as requiring expertise, such as generals. But these were considered aristocratic, meaning rule by the best, as opposed to democratic, rule by the many. How did this system come to be? Well, democracy arose in Athens after long periods of social and political tension marked by conflict among nobles. Powers once restricted to elites, such as speaking in the assembly and having their votes counted, were expanded to ordinary citizens. And the ability of ordinary citizens to perform these tasks adequately became a central feature of the democratic ideology of Athens. Rather than a privilege, civic participation was the duty of all citizens, with sortition and strict term limits preventing governing classes or political parties from forming. By 21st century standards, Athenian rule by the many excluded an awful lot of people. Women, slaves and foreigners were denied full citizenship, and when we filter out those too young to serve, the pool of eligible Athenians drops to only 10-20% of the overall population. Some ancient philosophers, including Plato, disparaged this form of democracy as being anarchic and run by fools. But today the word has such positive associations that vastly different regimes claim to embody it. At the same time, some share Plato's skepticism about the wisdom of crowds. Many modern democracies reconcile this conflict by having citizens elect those they consider qualified to legislate on their behalf. But this poses its own problems, including the influence of wealth, and the emergence of professional politicians with different interests than their constituents. Could reviving election by lottery lead to more effective government through a more diverse and representative group of legislators? Or does modern political office, like Athenian military command, require specialized knowledge and skills?
You probably shouldn't hold your breath to win a spot in your country's government. But depending on where you live, you may still be selected to participate in a jury, a citizens' assembly, or a deliberative poll, all examples of how the democratic principle behind sortition still survives today.
TedEd_History
어떻게_세미콜론을_사용하는가_엠마_브라이스Emma_Bryce.txt
It may seem like the semicolon is struggling with an identity crisis. It looks like a comma crossed with a period. Maybe that's why we toss these punctuation marks around like grammatical confetti. We're confused about how to use them properly. In fact, it's the semicolon's half-and-half status that makes it useful. It's stronger than a comma, and less final than a period. It fills the spaces in between, and for that reason, it has some specific and important tasks. For one, it can clarify ideas in a sentence that's already festooned with commas. "Semicolons: At first, they may seem frightening, then, they become enlightening, finally, you'll find yourself falling for these delightful punctuation marks." Even though the commas separate different parts of the sentence, it's easy to lose track of what belongs where. But then the semicolon edges in to the rescue. In list-like sentences, it can exert more force than commas do, cutting sentences into compartments and grouping items that belong together. The semicolon breaks things up, but it also builds connections. Another of its tasks is to link together independent clauses. These are sentences that can stand on their own, but when connected by semicolons, look and sound better because they're related in some way. "Semicolons were once a great mystery to me. I had no idea where to put them." Technically, there's nothing wrong with that. These two sentences can stand alone. But imagine they appeared in a long list of other sentences, all of the same length, each separated by periods. Things would get monotonous very fast. In that situation, semicolons bring fluidity and variation to writing by connecting related clauses. But as beneficial as they are, semicolons don't belong just anywhere. There are two main rules that govern their use. Firstly, unless they're being used in lists, semicolons should only connect clauses that are related in some way. You wouldn't use one here, for instance: "Semicolons were once a great mystery to me; I'd really like a sandwich." Periods work best here because these are two totally different ideas. A semicolon's job is to reunite two independent clauses that will benefit from one another's company because they refer to the same thing. Secondly, you'll almost never find a semicolon willingly stationed before coordinating conjunctions: the words "and," "but," "for," "nor," "or," "so," and "yet." That's a comma's place, in fact. But a semicolon can replace a conjunction to shorten a sentence or to give it some variety. Ultimately, this underappreciated punctuation mark can give writing clarity, force, and style, all encompassed in one tiny dot and squiggle that's just waiting to be put in the right place.
TedEd_History
When_to_use_me_myself_and_I_Emma_Bryce.txt
Me, myself, and I. You may be tempted to use these words interchangeably because they all refer to the same thing, but in fact, each one has a specific role in a sentence. "I" is a subject pronoun, "me" is an object pronoun, and "myself" is a reflexive or intensive pronoun. So what does that reveal about where each word belongs? Let's start with the difference between subject and object. Imagine the subject as the actor in a sentence and the object as the word that is acted upon. "I invited her, but she invited me." The object can also be the object of a preposition. "She danced around me, while he shimmied up to me." In some languages, like Latin and Russian, most nouns have different forms that distinguish subjects from objects. However, in English, that's only true of pronouns. But so long as you know how to distinguish subjects from objects, you can figure out what belongs where. And when you encounter a more complicated sentence, say one that involves multiple subjects or objects, and you're not sure whether to use "I" or "me," just temporarily eliminate the other person, and once again distinguish subject from object. Here's another example. You wouldn't say, "Me heard gossip," but sub in "I" and you're good to go. Then what about "myself"? This grand character is often substituted for "me" and "I" because it seems more impressive. "Please tell Jack or myself" may sound elegant, but in fact, "me" is the right pronoun here. So where should you use "myself"? In its function as a reflexive pronoun, "myself" only works if it's the object of a sentence whose subject is "I." "I consider myself the most important pronoun at this year's party." "Myself" can also add emphasis as an intensive pronoun. "I, myself, have heard others agree." The sentence works without it, but that extra pronoun gives it oomph. To check if "myself" belongs in a sentence, simply ensure that there's also an "I" that it's reflecting or intensifying. So that's "me," "myself," and "I," ever ready to represent you, yourself, and you.
TedEd_History
부패_부와_사치_베니스_곤돌라의_역사_라우라_모렐리.txt
If I say, "Venice," do you imagine yourself gliding down the Grand Canal, serenaded by a gondolier? There's no doubt that the gondola is a symbol of Venice, Italy, but how did this curious banana-shaped black boat get its distinctive look? The origins of the Venetian gondola are lost to history, but by the 1500s, some 10,000 gondolas transported dignitaries, merchants and goods through the city's canals. In fact, Venice teemed with many types of handmade boats, from utilitarian rafts to the Doge's own ostentatious gilded barge. Like a modern day taxi system, gondolas were leased to boatmen who made the rounds of the city's ferry stations. Passengers paid a fare to be carried from one side of the Grand Canal to the other, as well as to other points around the city. But gondoliers soon developed a bad rap. Historical documents describe numerous infractions involving boatmen, including cursing, gambling, extorting passengers -- even occasional acts of violence. To minimize the unpredictability of canal travel, Venetian citizens who could afford it purchased their own gondolas, just as a celebirty might use a private car and driver today. These wealthy Venetians hired two private gondoliers to ferry them around the city and maintain their boats. The gondolas soon became a status symbol, much like an expensive car, with custom fittings, carved and gilded ornamentation, and seasonal fabrics, like silk and velvet. However, the majority of gondolas seen today are black because in 1562, Venetian authorities decreed that all but ceremonial gondolas be painted black in order to avoid sinfully extravagant displays. Apparently, Venetian authorities did not believe in "pimping their rides." Still, some wealthy Venetians chose to pay the fines in order to maintain their ornamental gondolas, a small price to keep up appearances. The distinctive look of the gondola developed over many centuries. Each gondola was constructed in a family boatyard called a squero. From their fathers and grandfathers, sons learned how to select and season pieces of beech, cherry, elm, fir, larch, lime, mahogany, oak and walnut. The gondola makers began with a wooden template that may have been hammered into the workshop floor generations earlier. From this basic form, they attached fore and aft sterns, then formed the longitudinal planks and ribs that made up the frame of a boat designed to glide through shallow, narrow canals. A gondola has no straight lines or edges. Its familiar profile was achieved through an impressive fire and water process that involved warping the boards with torches made of marsh reeds set ablaze. However, the majority of the 500 hours that went into building a gondola involved the final stages: preparing surfaces and applying successive coats of waterproof varnish. The varnish was a family recipe, as closely guarded as one for risotto or a homemade sauce. Yet even with the woodwork finished, the gondola was still not complete. Specialized artisans supplied their gondola-making colleagues with elaborate covered passenger compartments, upholstery and ornaments of steel and brass. Oar makers became integral partners to the gondola makers. The Venetian oarlock, or fórcola, began as a simple wooden fork, but evolved into a high-precision tool that allowed a gondolier to guide the oar into many positions. By the late 1800s, gondola makers began to make the left side of the gondola wider than the right as a counter balance to the force created by a single gondolier. 
This modification allowed rowers to steer from the right side only, without lifting the oar from the water. While these modifications improved gondola travel, they were not enough to keep pace with motorized boats. Today, only about 400 gondolas glide through the waterways of Venice, and each year, fewer authentic gondolas are turned out by hand. But along the alleys, street signs contain words in Venetian dialect for the locations of old boatyards, oar makers and ferry stations, imprinting the memory of the boat-building trades that once kept life in the most serene republic gliding along at a steady clip.
TedEd_History
미라를_만드는_법ㅣ렌_블로치_Len_Bloch.txt
Death and taxes are famously inevitable, but what about decomposition? As anyone who's seen a mummy knows, ancient Egyptians went to a lot of trouble to evade decomposition. So, how successful were they? Living cells constantly renew themselves. Specialized enzymes decompose old structures, and the raw materials are used to build new ones. But what happens when someone dies? Their dead cells are no longer able to renew themselves, but the enzymes keep breaking everything down. So anyone looking to preserve a body needed to get ahead of those enzymes before the tissues began to rot. Neurons die quickly, so brains were a lost cause to Ancient Egyptian mummifiers, which is why, according to Greek historian Herodotus, they started the process by hammering a spike into the skull, mashing up the brain, flushing it out the nose and pouring tree resins into the skull to prevent further decomposition. Brains may decay first, but decaying guts are much worse. The liver, stomach and intestines contain digestive enzymes and bacteria, which, upon death, start eating the corpse from the inside. So the priests removed the lungs and abdominal organs first. It was difficult to remove the lungs without damaging the heart, but because the heart was believed to be the seat of the soul, they treated it with special care. They placed the visceral organs in jars filled with a naturally occurring salt called natron. Like any salt, natron can prevent decay by killing bacteria and preventing the body's natural digestive enzymes from working. But natron isn't just any salt. It's mainly a mixture of two alkaline salts, soda ash and baking soda. Alkaline salts are especially deadly to bacteria. And they can turn fatty membranes into a hard, soapy substance, thereby maintaining the corpse's structure. After dealing with the internal organs, the priests stuffed the body cavity with sacks of more natron and washed it clean to disinfect the skin. Then, the corpse was set in a bed of still more natron for about 35 days to preserve its outer flesh. By the time of its removal, the alkaline salts had sucked the fluid from the body and formed hard brown clumps. The corpse wasn't putrid, but it didn't exactly smell good, either. So, priests poured tree resin over the body to seal it, massaged it with a waxy mixture that included cedar oil, and then wrapped it in linen. Finally, they placed the mummy in a series of nested coffins and sometimes even a stone sarcophagus. So how successful were the ancient Egyptians at evading decay? On one hand, mummies are definitely not intact human bodies. Their brains have been mashed up and flushed out, their organs have been removed and salted like salami, and about half of their remaining body mass has been drained away. Still, what remains is amazingly well-preserved. Even after thousands of years, scientists can perform autopsies on mummies to determine their causes of death, and possibly even isolate DNA samples. This has given us new information. For example, it seems that air pollution was a serious problem in ancient Egypt, probably because of indoor fires used to bake bread. Cardiovascular disease was also common, as was tuberculosis. So ancient Egyptians were somewhat successful at evading decay. Still, like death, taxes are inevitable. When some mummies were transported, they were taxed as salted fish.
TedEd_History
The_scientific_origins_of_the_Minotaur_Matt_Kaplan.txt
Far beneath the palace of the treacherous King Minos, in the damp darkness of an inescapable labyrinth, a horrific beast stalks the endless corridors of its prison, enraged with a bloodlust so intense that its deafening roar shakes the Earth. It is easy to see why the Minotaur myth has a long history of being disregarded as pure fiction. However, there's a good chance that the Minotaur and other monsters and gods were created by our early ancestors to rationalize the terrifying things that they saw in the natural world but did not understand. And while we can't explain every aspect of their stories, there may be some actual science that reveals itself when we dissect them for clues. So, as far as we know, there have never been human-bull hybrids. But the earliest material written about the Minotaur doesn't even mention its physical form. So that's probably not the key part of the story. What the different tellings do agree upon, however, is that the beast lives underground, and when it bellows, it causes tremendous problems. The various myths are also specific in stating that the genius inventor Daedalus carved out the labyrinth beneath the island of Crete. Archeological attempts to find the fabled maze have come up empty-handed. But Crete itself has yielded the most valuable clue of all in the form of seismic activity. Crete sits on a piece of continental crust called the Aegean Block, and has a bit of oceanic crust known as the Nubian Block sliding right beneath it. This sort of geologic feature, called a subduction zone, is common all over the world and results in lots of earthquakes. However, in Crete the situation is particularly volatile as the Nubian Block is attached to the massive buoyant continental crust that is Africa. When the Nubian Block moves, it does not go down nearly as easily or as steeply as oceanic crust does in most other subduction zones. Instead, it violently and abruptly forces sections of the Mediterranean upwards in an event called uplift, and Crete is in uplift central. In 2014, Crete had more than 1300 earthquakes of magnitude 2.0 or higher. By comparison, in the same period of time, Southern California, a much larger area, experienced a mere 255 earthquakes. Of course, we don't have detailed seismic records from the days of King Minos, but we do know from fossil records and geologic evidence that Crete has experienced serious uplift events that sometimes exceeded 30 feet in a single moment. Contrast this for a moment with the island of Hawaii, where earthquakes and volcanic activity were tightly woven into legends surrounding Pele, a goddess both fiery and fair. Like the Minotaur, her myths included tales of destruction, but they also contained elements of dance and creation. So why did Hawaii end up with Pele and Crete end up with the Minotaur? The difference likely comes down to the lava that followed many of Hawaii's worst earthquakes. The lava on Hawaii is made of basalt, which, once cooled, is highly fertile. Within a couple of decades of terrible eruptions, Islanders would have seen vibrant green life thriving on new peninsulas made of lava. So it makes sense that the mythology captured this by portraying Pele as a creator as well as a destroyer. As for the people of Crete, their earthquakes brought only destruction and barren lands, so perhaps for them the unnatural and deadly Minotaur was born.
The connections between mythical stories and the geology of the regions where they originated teach us that mythology and science are actually two sides of the same coin. Both are rooted in explaining and understanding the world. The key difference is that where mythology uses gods, monsters and magic, science uses measurements, records and experiments.
TedEd_History
무엇이_프랑스_혁명을_일어나게_했을까_톰_뮬레이니Tom_Mullaney.txt
What rights do people have, and where do they come from? Who gets to make decisions for others and on what authority? And how can we organize society to meet people's needs? These questions challenged an entire nation during the upheaval of the French Revolution. By the end of the 18th century, Europe had undergone a profound intellectual and cultural shift known as the Enlightenment. Philosophers and artists promoted reason and human freedom over tradition and religion. The rise of a middle class and printed materials encouraged political awareness, and the American Revolution had turned a former English colony into an independent republic. Yet France, one of the largest and richest countries in Europe was still governed by an ancient regime of three rigid social classes called Estates. The monarch King Louis XVI based his authority on divine right and granted special privileges to the First and Second Estates, the Catholic clergy, and the nobles. The Third Estate, middle class merchants and craftsmen, as well as over 20 million peasants, had far less power and they were the only ones who paid taxes, not just to the king, but to the other Estates as well. In bad harvest years, taxation could leave peasants with almost nothing while the king and nobles lived lavishly on their extracted wealth. But as France sank into debt due to its support of the American Revolution and its long-running war with England, change was needed. King Louis appointed finance minister Jacques Necker, who pushed for tax reforms and won public support by openly publishing the government's finances. But the king's advisors strongly opposed these initiatives. Desperate for a solution, the king called a meeting of the Estates-General, an assembly of representatives from the Three Estates, for the first time in 175 years. Although the Third Estate represented 98% of the French population, its vote was equal to each of the other Estates. And unsurprisingly, both of the upper classes favored keeping their privileges. Realizing they couldn't get fair representation, the Third Estate broke off, declared themselves the National Assembly, and pledged to draft a new constitution with or without the other Estates. King Louis ordered the First and Second Estates to meet with the National Assembly, but he also dismissed Necker, his popular finance minister. In response, thousands of outraged Parisians joined with sympathetic soldiers to storm the Bastille prison, a symbol of royal power and a large storehouse of weapons. The Revolution had begun. As rebellion spread throughout the country, the feudal system was abolished. The Assembly's Declaration of the Rights of Man and Citizen proclaimed a radical idea for the time -- that individual rights and freedoms were fundamental to human nature and government existed only to protect them. Their privileges gone, many nobles fled abroad, begging foreign rulers to invade France and restore order. And while Louis remained as the figurehead of the constitutional monarchy, he feared for his future. In 1791, he tried to flee the country but was caught. The attempted escape shattered people's faith in the king. The royal family was arrested and the king charged with treason. After a trial, the once-revered king was publicly beheaded, signaling the end of one thousand years of monarchy and finalizing the September 21st declaration of the first French republic, governed by the motto "liberté, égalité, fraternité." 
Nine months later, Queen Marie Antoinette, a foreigner long mocked as "Madame Déficit" for her extravagant reputation, was executed as well. But the Revolution would not end there. Some leaders, not content with just changing the government, sought to completely transform French society -- its religion, its street names, even its calendar. As multiple factions formed, the extremist Jacobins, led by Maximilien Robespierre, launched a Reign of Terror to suppress the slightest dissent, executing over 20,000 people before the Jacobins' own downfall. Meanwhile, France found itself at war with neighboring monarchs seeking to strangle the Revolution before it spread. Amidst the chaos, a general named Napoleon Bonaparte took charge, becoming Emperor as he claimed to defend the Revolution's democratic values. All in all, the Revolution saw three constitutions and five governments within ten years, followed by decades alternating between monarchy and revolt before the next Republic formed in 1871. And while we celebrate the French Revolution's ideals, we still struggle with many of the same basic questions raised over two centuries ago.
TedEd_History
The_history_of_chocolate_Deanna_Pucciarelli.txt
If you can't imagine life without chocolate, you're lucky you weren't born before the 16th century. Until then, chocolate only existed in Mesoamerica in a form quite different from what we know. As far back as 1900 BCE, the people of that region had learned to prepare the beans of the native cacao tree. The earliest records tell us the beans were ground and mixed with cornmeal and chili peppers to create a drink - not a relaxing cup of hot cocoa, but a bitter, invigorating concoction frothing with foam. And if you thought we make a big deal about chocolate today, the Mesoamericans had us beat. They believed that cacao was a heavenly food gifted to humans by a feathered serpent god, known to the Maya as Kukulkan and to the Aztecs as Quetzalcoatl. Aztecs used cacao beans as currency and drank chocolate at royal feasts, gave it to soldiers as a reward for success in battle, and used it in rituals. The first transatlantic chocolate encounter occurred in 1519 when Hernán Cortés visited the court of Moctezuma at Tenochtitlan. As recorded by Cortés's lieutenant, the king had 50 jugs of the drink brought out and poured into golden cups. When the colonists returned with shipments of the strange new bean, missionaries' salacious accounts of native customs gave it a reputation as an aphrodisiac. At first, its bitter taste made it suitable as a medicine for ailments, like upset stomachs, but sweetening it with honey, sugar, or vanilla quickly made chocolate a popular delicacy in the Spanish court. And soon, no aristocratic home was complete without dedicated chocolate ware. The fashionable drink was difficult and time-consuming to produce on a large scale. That involved using plantations and imported slave labor in the Caribbean and on islands off the coast of Africa. The world of chocolate would change forever in 1828 with the introduction of the cocoa press by Coenraad van Houten of Amsterdam. Van Houten's invention could separate the cocoa's natural fat, or cocoa butter. This left a powder that could be mixed into a drinkable solution or recombined with the cocoa butter to create the solid chocolate we know today. Not long after, a Swiss chocolatier named Daniel Peter added powdered milk to the mix, thus inventing milk chocolate. By the 20th century, chocolate was no longer an elite luxury but had become a treat for the public. Meeting the massive demand required more cultivation of cocoa, which can only grow near the equator. Now, instead of African slaves being shipped to South American cocoa plantations, cocoa production itself would shift to West Africa, with Côte d'Ivoire providing two-fifths of the world's cocoa as of 2015. Yet along with the growth of the industry, there have been horrific abuses of human rights. Many of the plantations throughout West Africa, which supply Western companies, use slave and child labor, with more than 2 million children estimated to be affected. This is a complex problem that persists despite efforts from major chocolate companies to partner with African nations to reduce child and indentured labor practices. Today, chocolate has established itself in the rituals of our modern culture. Due to its colonial association with native cultures, combined with the power of advertising, chocolate retains an aura of something sensual, decadent, and forbidden. Yet knowing more about its fascinating and often cruel history, as well as its production today, tells us where these associations originate and what they hide.
So as you unwrap your next bar of chocolate, take a moment to consider that not everything about chocolate is sweet.
TedEd_History
시위로_강력한_변화를_이끌어내는_방법에릭_리우_Eric_Liu.txt
We live in an age of protest. On campuses and public squares, on streets and social media, protesters around the world are challenging the status quo. Protest can thrust issues onto the national or global agenda, it can force out tyrants, it can activate people who have long been on the sidelines of civic life. While protest is often necessary, is it sufficient? Consider the Arab Spring. All across the Middle East, citizen protesters were able to topple dictators. Afterwards, though, the vacuum was too often filled by the most militant and violent. Protest can generate lasting positive change when it's followed by an equally passionate effort to mobilize voters, to cast ballots, to understand government, and to make it more inclusive. So here are three core strategies for peacefully turning awareness into action and protest into durable political power. First, expand the frame of the possible, second, choose a defining fight, and third, find an early win. Let's start with expanding the frame of the possible. How often have you heard in response to a policy idea, "That's just never going to happen"? When you hear someone say that, they're trying to define the boundaries of your civic imagination. The powerful citizen works to push those boundaries outward, to ask what if - what if it were possible? What if enough forms of power - people power, ideas, money, social norms - were aligned to make it happen? Simply asking that question, and not taking as given all the givens of conventional politics, is the first step in converting protest to power. But this requires concreteness about what it would look like to have, say, a radically smaller national government, or, by contrast, a big single-payer healthcare system, a way to hold corporations accountable for their misdeeds, or, instead, a way to free them from onerous regulations. This brings us to the second strategy, choosing a defining fight. All politics is about contrasts. Few of us think about civic life in the abstract. We think about things in relief compared to something else. Powerful citizens set the terms of that contrast. This doesn't mean being uncivil. It simply means thinking about a debate you want to have on your terms over an issue that captures the essence of the change you want. This is what the activists pushing for a $15 minimum wage in the U.S. have done. They don't pretend that $15 by itself can fix inequality, but with this ambitious and contentious goal, which they achieved first in Seattle and then beyond, they have forced a bigger debate about economic justice and prosperity. They've expanded the frame of the possible, strategy one, and created a sharp emblematic contrast, strategy two. The third key strategy, then, is to seek and achieve an early win. An early win, even if it's not as ambitious as the ultimate goal, creates momentum, which changes what people think is possible. The Solidarity movement, which organized workers in Cold War Poland, emerged just this way, first, with local shipyard strikes in 1980 that forced concessions, then, over the next decade, a nationwide effort that ultimately helped topple Poland's communist government. Getting early wins sets in motion a positive feedback loop, a contagion, a belief, a motivation. It requires pressuring policymakers, using the media to change the narrative, making arguments in public, persuading skeptical neighbors one by one by one. None of this is as sexy as a protest, but this is the history of the U.S.
Civil Rights Movement, of Indian Independence, of Czech self-determination. Not the single sudden triumph, but the long, slow slog. You don't have to be anyone special to be part of this grind, to expand the frame of the possible, to pick a defining fight, or to secure an early win. You just have to be a participant and to live like a citizen. The spirit of protest is powerful. So is showing up after the protest. You can be the co-creator of what comes next.
TedEd_History
The_Egyptian_Book_of_the_Dead_A_guidebook_for_the_underworld_Tejal_Gala.txt
Ani stands before a large golden scale where the jackal-headed god Anubis is weighing his heart against a pure ostrich feather. Ani was a real person, a scribe from the Egyptian city of Thebes who lived in the 13th century BCE. And depicted here is a scene from his Book of the Dead, a 78-foot papyrus scroll designed to help him attain immortality. Such funerary texts were originally written only for Pharaohs, but with time, the Egyptians came to believe regular people could also reach the afterlife if they succeeded in the passage. Ani's epic journey begins with his death. His body is mummified by a team of priests who remove every organ except the heart, the seat of emotion, memory, and intelligence. It's then stuffed with a salt called natron and wrapped in resin-soaked linen. In addition, the wrappings are woven with charms for protection and topped with a heart scarab amulet that will prove important later on. The goal of the two-month process is to preserve Ani's body as an ideal form with which his spirit can eventually reunite. But first, that spirit must pass through the duat, or underworld. This is a realm of vast caverns, lakes of fire, and magical gates, all guarded by fearsome beasts - snakes, crocodiles, and half-human monstrosities with names like "he who dances in blood." To make things worse, Apep, the serpent god of destruction, lurks in the shadows waiting to swallow Ani's soul. Fortunately, Ani is prepared with the magic contained within his Book of the Dead. Like other Egyptians who could afford it, Ani customized his scroll to include the particular spells, prayers, and codes he thought his spirit might need. Equipped with this arsenal, our hero traverses the obstacles, repels the monsters' attacks, and stealthily avoids Apep to reach the Hall of Ma'at, goddess of truth and justice. Here, Ani faces his final challenge. He is judged by 42 assessor gods who must be convinced that he has lived a righteous life. Ani approaches each one, addressing them by name, and declaring a sin he has not committed. Among these negative confessions, or declarations of innocence, he proclaims that he has not made anyone cry, is not an eavesdropper, and has not polluted the water. But did Ani really live such a perfect life? Not quite, but that's where the heart scarab amulet comes in. It's inscribed with the words, "Do not stand as a witness against me," precisely so Ani's heart doesn't betray him by recalling the time he listened to his neighbors fight or washed his feet in the Nile. Now, it's Ani's moment of truth, the weighing of the heart. If his heart is heavier than the feather, weighed down by Ani's wrongdoings, it'll be devoured by the monstrous Ammit, part crocodile, part leopard, part hippopotamus, and Ani will cease to exist forever. But Ani is in luck. His heart is judged pure. Ra, the sun god, takes him to Osiris, god of the underworld, who gives him final approval to enter the afterlife. In the endless and lush Field of Reeds, Ani meets his deceased parents. Here, there is no sadness, pain, or anger, but there is work to be done. Like everyone else, Ani must cultivate a plot of land, which he does with the help of a Shabti doll that had been placed in his tomb. Today, the Papyrus of Ani resides in the British Museum, where it has been since 1888. Only Ani, if anyone, knows what really happened after his death. But thanks to his Book of the Dead, we can imagine him happily tending his crops for all eternity.
TedEd_History
How_to_write_descriptively_Nalo_Hopkinson.txt
We read fiction for many reasons. To be entertained, to find out who done it, to travel to strange, new planets, to be scared, to laugh, to cry, to think, to feel, to be so absorbed that for a while we forget where we are. So, how about writing fiction? How do you suck your readers into your stories? With an exciting plot? Maybe. Fascinating characters? Probably. Beautiful language? Perhaps. "Billie's legs are noodles. The ends of her hair are poison needles. Her tongue is a bristly sponge, and her eyes are bags of bleach." Did that description almost make you feel as queasy as Billie? We grasp that Billie's legs aren't actually noodles. To Billie, they feel as limp as cooked noodles. It's an implied comparison, a metaphor. So, why not simply write it like this? "Billie feels nauseated and weak." Chances are the second description wasn't as vivid to you as the first. The point of fiction is to cast a spell, a momentary illusion that you are living in the world of the story. Fiction engages the senses, helps us create vivid mental simulacra of the experiences the characters are having. Stage and screen engage some of our senses directly. We see and hear the interactions of the characters and the setting. But with prose fiction, all you have is static symbols on a contrasting background. If you describe the story in matter-of-fact, non-tactile language, the spell risks being a weak one. Your reader may not get much beyond interpreting the squiggles. She will understand what Billie feels like, but she won't feel what Billie feels. She'll be reading, not immersed in the world of the story, discovering the truths of Billie's life at the same time that Billie herself does. Fiction plays with our senses: taste, smell, touch, hearing, sight, and the sense of motion. It also plays with our ability to abstract and make complex associations. Look at the following sentence. "The world was ghost-quiet, except for the crack of sails and the burbling of water against hull." The words, "quiet," "crack," and "burbling," engage the sense of hearing. Notice that Buckell doesn't use the generic word sound. Each word he chooses evokes a particular quality of sound. Then, like an artist laying on washes of color to give the sense of texture to a painting, he adds another layer, motion, "the crack of sails," and touch, "the burbling of water against hull." Finally, he gives us an abstract connection by linking the word quiet with the word ghost. Not "quiet as a ghost," which would put a distancing layer of simile between the reader and the experience. Instead, Buckell creates the metaphor "ghost-quiet" for an implied, rather than overt, comparison. Writers are always told to avoid cliches because there's very little engagement for the reader in an overused image, such as "red as a rose." But give them, "Love...began on a beach. It began that day when Jacob saw Anette in her stewed-cherry dress," and their brains engage in the absorbing task of figuring out what a stewed-cherry dress is like. Suddenly, they're on a beach about to fall in love. They're experiencing the story at both a visceral and a conceptual level, meeting the writer halfway in the imaginative play of creating a dynamic world of the senses. So when you write, use well-chosen words to engage sound, sight, taste, touch, smell, and movement. Then create unexpected connotations among your story elements, and set your readers' brushfire imaginations alight.
TedEd_History
티코_브라헤_Tycho_Brahe_스캔들많은_천문학자_댄_웬클Dan_Wenkel.txt
How do you imagine the life of a scientist? Boring and monotonous, spending endless hours in the lab with no social interaction? Maybe for some, but not Tycho Brahe. The 16th-century scholar who accurately predicted planetary motion and cataloged hundreds of stars before the telescope had been invented also had a cosmic-sized personal life. Tycho Brahe was born in 1546 to Danish nobles, but at age two was kidnapped to be raised by his uncle instead. His parents didn't seem to mind. Tycho was supposed to have a career in law, but after witnessing a solar eclipse at thirteen, he began spending more time with mathematics and science professors, who taught him the art of celestial observation. By the time Tycho's uncle sent him off to Germany a few years later, he had lost interest in his law studies, instead reading astronomy books, improving his instruments, and taking careful notes of the night skies. It wasn't long before his own measurements were more accurate than those in his books. While in Germany, Tycho got into a bit of an argument with another student at a party over a mathematical formula, resulting in a sword duel and Tycho losing a good-sized chunk of his nose. After that, he was said to have worn a realistic prosthetic of gold and silver that he would glue onto his face. Fortunately, Tycho didn't need his nose to continue his astronomical work. He kept studying the night sky and creating all sorts of instruments, including a building-sized quadrant for measuring the angles of stars. After months of careful observation, Tycho discovered a new star in the constellation Cassiopeia. The publication of this discovery granted him rock star status and offers of scientific positions all over Europe. Wanting to keep him at home, the King of Denmark offered to give Tycho his own personal island with a state-of-the-art observatory. Called Uraniborg and costing about 1% of Denmark's entire budget, this observatory was more of a castle, containing formal gardens, rooms for family, staff and visiting royalty, and an underground section just for all the giant instruments. Tycho also built a paper mill and printing press for publishing his papers, and a lab for studying alchemy. And since no castle would be complete without entertainment, Tycho employed a clairvoyant dwarf named Jep as court jester. Tycho lived on his island, studying and partying, for about 20 years. But after falling out with the new Danish King, he took up an invitation from the Holy Roman Emperor to become the official imperial astronomer in Prague. There, he met another famous astronomer, Johannes Kepler, who became his assistant. While Kepler's work interested him, Tycho was protective of his data, and the two often got into heated arguments. In 1601, Tycho attended a formal banquet where he drank quite a lot but was too polite to leave the table to relieve himself, deciding to tough it out instead. This proved to be a bad idea, as he quickly developed a bladder infection and died a few days later. But over 400 years after his death, Tycho still had a few surprises up his sleeve. When his body was exhumed and studied in 2010, the legendary gold and silver nose was nowhere to be found, with chemical traces suggesting that he wore a more casual brass nose instead. Tycho's mustache hair was also found to contain unusually high levels of toxic mercury. Was it from a medicine used to treat his bladder infection? A residue from his alchemy experiments? Or did his quarrelsome coworker Johannes Kepler poison him to acquire his data?
We may never know, but the next time you think scientists lead boring lives, dig a little deeper. A fascinating story may be just beyond the tip of your nose.
TedEd_History
비극_작가들의_대결_멜라니_시로프Melanie_Sirof.txt
Good afternoon, ladies and gentlemen. Let us welcome you to the final day of dramatic battle between great tragedians. It is a spring day here in Ancient Greece. Nearly 17,000 patrons are filing into the Theatre of Dionysus to watch top playwrights, including favorites Aeschylus and Sophocles, duke it out to see whose hero may be deemed most tragic, whose story most awful. Well Seacrestopolis, in last week's battle of the choruses, all 50 members of each playwright's chorus traveled back and forth across the stage, singing the strophe and antistrophe, telling misbegotten tales of woe. Today's first chorus is entering through the parados, taking their positions in the orchestra at the bottom of the stage. Mario Lopedokia, this is nothing we haven't seen before. All 50 members speaking from the depths of their souls. Wait, what is this? I've not seen this before, Seacrestopolis. There is one actor stepping out of choral formation, assuming an independent role in this play. Can you make out who it is? That looks like Thespis. It seems he's changing his mask, and taking on the role of another character. Incredible. Surely, Thespis will go down in history as the very first actor. He has changed the face of theater forever. And that was just the warm-up act. On to the main attraction. Aeschylus will have the stage first. Let's see what he does. We expect great things. Last competition, Sophocles beat him by a smidge, but Aeschylus is still considered the Father of Tragedy. Now, Aeschylus frequently competes at this festival, the City Dionysia. Though his plays are violent, the bloodshed is never seen by the audience, which allows the dramatic tension to take center stage. Let's see what he does today to try to win his title back. Here comes Aeschylus's chorus, but they seem to be missing quite a few people. What is going on here? Not only are they down a few people. There are two actors taking center stage. This is absolutely unheard of. He has built on Thespis's idea and added a second actor to the mix. Aeschylus is relying on the two individuals to tell the story. The dialogue possible in tragedy now has taken precedence over the chorus. No wonder he drastically shrunk its size. This applause is well deserved. The crowd has hushed. Sophocles's actors and chorus are taking the stage for the play, "Oedipus Rex." As usual, the chorus is set up in the orchestra. And what's this? Sophocles has added a third actor. Will this one-upmanship never end? Three actors, and they are changing their masks to take on several different roles as they weave the tale of Oedipus, a nice fellow who kills his father and marries his mother. Kills his father and marries his mother. That sounds pretty tragic to me. It is most tragic, Mario Lopedokia. Call me crazy, but I'm willing to bet that future generations will hold this play up as the perfect example of tragedy. Excuse me, Seacrestopolis. Oedipus has left the stage after realizing Jocasta was his wife and also his mother. Where has he gone? I can't even imagine. Wait. The messenger has stepped on stage and is telling us of the great king's actions. He says that Oedipus, upon finding his mother, wife, whatever, Jocasta, dead of her own hand in their incestuous bedroom, took the brooches from her dress and stabbed his eyes repeatedly. You can't blame the guy, can you? Bedded his mother, killed his father, is father and brother to his children. I might do the same. My friend, I do believe we've seen it all. Indeed, we have. There is nothing more tragic than Oedipus.
And sure enough, the judges who have been chosen by lot from all over Greece are ready to announce the winner. Oh, folks! This is one for the history books. Dark horse playwright, Philocles, has taken first prize. What an upset. What a tragedy. What a night, folks. We have witnessed the laying of the foundation of modern theater and some great innovations: the shrinking of the chorus, the addition of three actors, and such catharsis. Doesn't a great tragedy just make you feel renewed and cleansed? It sure does, but now we are out of time. I'm Seacrestopolis, and I'm Mario Lopedokia. Peace, love and catharsis.
TedEd_History
독립선언서에_대해_여러분이_모를_수도_있는_것_키네스_C_데이비스_Kenneth_C_Davis.txt
"All men are created equal and they are endowed with the rights to life, liberty and the pursuit of happiness." Not so fast, Mr. Jefferson! These words from the Declaration of Independence, and the facts behind them, are well known. In June of 1776, a little more than a year after the war against England began with the shots fired at Lexington and Concord, the Continental Congress was meeting in Philadelphia to discuss American independence. After long debates, a resolution of independence was approved on July 2, 1776. America was free! And men like John Adams thought we would celebrate that date forever. But it was two days later that the gentlemen in Congress voted to adopt the Declaration of Independence, largely written by Thomas Jefferson, offering all the reasons why the country should be free. More than 235 years later, we celebrate that day as America's birthday. But there are some pieces of the story you may not know. First of all, Thomas Jefferson gets the credit for writing the Declaration, but five men had been given the job to come up with a document explaining why America should be independent: Robert Livingston, Roger Sherman, Benjamin Franklin and John Adams were all named first. And it was Adams who suggested that the young, and little known, Thomas Jefferson join them because they needed a man from the influential Virginia Delegation, and Adams thought Jefferson was a much better writer than he was. Second, though Jefferson never used footnotes, or credited his sources, some of his memorable words and phrases were borrowed from other writers and slightly tweaked. Then, Franklin and Adams offered a few suggestions. But the most important change came after the Declaration was turned over to the full Congress. For two days, a very unhappy Thomas Jefferson sat and fumed while his words were picked over. In the end, the Congress made a few, minor word changes, and one big deletion. In the long list of charges that Jefferson made against the King of England, the author of the Declaration had included the idea that George the Third was responsible for the slave trade, and was preventing America from ending slavery. That was not only untrue, but Congress wanted no mention of slavery in the nation's founding document. The reference was cut out before the Declaration was approved and sent to the printer. But it leaves open the hard question: How could the men, who were about to sign a document, celebrating liberty and equality, accept a system in which some people owned others? It is a question that would eventually bring the nation to civil war and one we can still ask today.
TedEd_History
예술에서의_종교의_짧은_역사_TEDEd.txt
It's only been the last few hundred years or so that Western civilization has been putting art in museums, at least museums resembling the public institutions we know today. Before this, for most, art served other purposes. What we call fine art today was, in fact, primarily how people experienced an aesthetic dimension of religion. Paintings, sculpture, textiles and illuminations were the media of their time, supplying vivid imagery to accompany the stories of the day. In this sense, Western art shared a utilitarian purpose with other cultures around the world, some of whose languages incidentally have no word for art. So how do we define what we call art? Generally speaking, what we're talking about here is work that visually communicates meaning beyond language, either through representation or the arrangement of visual elements in space. Evidence of this power of iconography, or ability of images to convey meaning, can be found in abundance if we look at art from the histories of our major world religions. Almost all have, at one time or another in their history, gone through some sort of aniconic phase. Aniconism prohibits any visual depiction of the divine. This is done in order to avoid idolatry, or confusion between the representation of divinity and divinity itself. Keeping it real, so to speak, in the relationship between the individual and the divine. However, this can be a challenge to maintain, given that the urge to visually represent and interpret the world around us is a compulsion difficult to suppress. For example, even today, where the depiction of Allah or the Prophet Muhammad is prohibited, an abstract celebration of the divine can still be found in arabesque patterns of Islamic textile design, with masterful flourishes of brushwork and Arabic calligraphy, where the words of the prophet assume a dual role as both literature and visual art. Likewise, in art from the early periods of Christianity and Buddhism, the divine presence of the Christ and the Buddha does not appear in human form but is represented by symbols. In each case, iconographic reference is employed as a form of reverence. Anthropomorphic representation, or depiction in human form, eventually became widespread in these religions only centuries later, under the influence of the cultural traditions surrounding them. Historically speaking, the public appreciation of visual art in terms other than traditional, religious or social function is a relatively new concept. Today, we fetishize the fetish, so to speak. We go to museums to see art from the ages, but our experience of it there is drastically removed from the context in which it was originally intended to be seen. It might be said that the modern viewer lacks the richness of engagement that she has with contemporary art, which has been created relevant to her time and speaks her cultural language. It might also be said that the history of what we call art is a conversation that continues on, as our contemporary present passes into what will be some future generation's classical past. It's a conversation that reflects the ideologies, mythologies, belief systems and taboos and so much more of the world in which it was made. But this is not to say that work from another age made to serve a particular function in that time is dead or has nothing to offer the modern viewer. Even though in a museum setting works of art from different places and times are presented alongside each other, isolated from their original settings, their juxtaposition has benefits.
Exhibits are organized by curators, or people who've made a career out of their ability to recontextualize or remix cultural artifacts in a collective presentation. As viewers, we're then able to consider the art in terms of a common theme that might not be apparent in a particular work until you see it alongside another, and new meanings can be derived and reflected upon. If we're so inclined, we might even start to see every work of art as a complementary part of some undefined, unified whole of past human experience, a trail that leads right to our doorstep and continues on with us, open to anyone who wants to explore it.
TedEd_History
This_is_Sparta_Fierce_warriors_of_the_ancient_world_Craig_Zimmer.txt
In ancient Greece, violent internal conflict between bordering neighbors and war with foreign invaders was a way of life, and Greeks were considered premier warriors. Most Greek city-states surrounded themselves with massive defensive walls for added protection. Sparta in its prime was a different story, finding walls unnecessary when it had an army of the most feared warriors in the ancient world. So what was Sparta doing differently than everyone else to produce such fierce soldiers? To answer that question, we turn to the written accounts of that time. There are no surviving written accounts from Spartans themselves, as it was forbidden for Spartans to keep records, so we have to rely on those of non-Spartan ancient historians, like Herodotus, Thucydides, and Plutarch. These stories may be embellished and depict Sparta at the apex of its power, so take them with a grain of salt. For Spartans, the purpose for their existence was simple: to serve Sparta. On the day of their birth, elder Spartan leaders examined every newborn. The strong, healthy babies were considered capable of fulfilling this purpose, and the others may have been left on Mount Taygetus to die. Every Spartan, boy or girl, was expected to be physically strong, mentally sharp, and emotionally resilient. And it was their absolute duty to defend and promote Sparta at all costs. So in the first years of their lives, children were raised to understand that their loyalty belonged first to Sparta, and then to family. This mindset probably made it easier for the Spartan boys, who, upon turning seven, were sent to the agoge, a place with one main purpose: to turn a boy into a Spartan warrior through thirteen years of relentless, harsh, and often brutal training. The Spartans prized physical perfection above all else, and so the students spent a great deal of their time learning how to fight. To ensure resilience in battle, boys were encouraged to fight among themselves, and bullying, unlike today, was acceptable. In order to better prepare them for the conditions of war, the boys were poorly fed, sometimes even going days without eating. They also were given little in the way of clothing so that they could learn to deal with different temperatures. Spartan boys were encouraged to steal in order to survive, but if they were caught, they would be disciplined, not because they stole, but because they were caught in the act. During the annual contest of endurance in a religious ritual known as the diamastigosis, teenage boys were whipped in front of an altar at the Sanctuary of Artemis Orthia. It was common for boys to die on the altar of the goddess. Fortunately, not everything was as brutal as that. Young Spartans were also taught how to read, write, and dance, which taught them graceful control of their movements and helped them in combat. While the responsibilities for the girls of Sparta were different, the high standards of excellence and expectation to serve Sparta with their lives remained the same. Spartan girls lived at home with their mothers as they attended school. Their curriculum included the arts, music, dance, reading, and writing. And to stay in peak physical condition, they learned a variety of sports, such as discus, javelin, and horseback riding. In Sparta, it was believed that only strong and capable women could bear children that would one day become strong and capable warriors. To all Spartans, men and women, perhaps the most important lesson from Spartan school was allegiance to Sparta.
To die for their city-state was seen as the completion of one's duty to Sparta. Upon their death, only men who died in battle and women who died in childbirth were given tombstones. In the eyes of their countrymen, both died so that Sparta could live.
TedEd_History
고대_올림픽의_기원_아마드_디안고어_Armand_DAngour.txt
Thousands of years in the making, what began as part of a religious festival honoring the Greek god Zeus in the rural Greek town of Olympia has today become the greatest show of sporting excellence on Earth. The inception date in 776 BC became the basis for the Greeks' earliest calendar, where time was marked in four-year increments called olympiads. What could it be? Why, it's the Olympic games, of course. Competition fosters excellence, or so thought the Ancient Greeks. In addition to sporting events, contests were held for music, singing, and poetry. You can read about them all yourself in classical literary works, like Homer's "Iliad" and Virgil's "Aeneid." Even mythical heroes appreciate a good contest every now and then, wouldn't you say? For the first thirteen games, the Ancient Greek Olympics featured just one event, the two-hundred-yard dash. But over time, new exciting contests, like boxing, chariot and mule racing, and even a footrace where the competitors wore a full suit of armor, enticed many hopeful champions into the Olympic stadium. The combined running, jumping, wrestling, javelin throwing, and discus hurling events known as the pentathlon inspired world-class competition, and the pankration, a no-holds-barred fight where only biting and eye-gouging were prohibited, ensured the toughest men were victorious. And victorious they were. Nobody tops the local baker Coroebus, who in 776 BC became the very first Olympic champion. And we'll never forget Orsippus of Megara, the 720 BC Olympic victor who tore away his loincloth so he could race unimpeded, inaugurating the Ancient Greek tradition of competing in the nude. Now there's a winning streak, if ever we've seen one. But all good things must end. In 391 AD, the Christian Roman Emperor Theodosius banned pagan practices, so the world soon bid a fond farewell to the Olympic games. But just like those early pankration athletes, you can't keep a good one down, and 1500 years later in 1896, the modern Olympic games kicked off in Athens, Greece. Today, the Summer and Winter Olympics bring international world-class athletes together by the thousands, uniting fans by the billions for the world's foremost sporting competition. Citius, Altius, Fortius. Three cheers for the Olympics.