Principles_of_Economics_Macroeconomics
Human_Capital_Conditional_Convergence.txt
♪ [music] ♪ - [Alex] In our previous videos, we showed how capital accumulation can generate growth in the short run, but in the long run, we always end up at a steady-state where all of investment is used to make up for depreciation. What about human capital? -- represented here by the labor force, "L", times their education level, "e." Well, there's no doubt that higher levels of education correlate with higher levels of economic output. But just like physical capital, human capital is subject to diminishing returns. The United States has a well-educated workforce, and that's good, but it's possible for a country to invest too much in education. It helps an economy to have some PhDs -- at least I hope it does -- but how much extra growth would we get if we required everyone to have a PhD? Probably not that much. It's a good investment to teach people to read and write and do some math, but would it pay to train everyone to understand the general theory of relativity? I don't think so. So education is subject to diminishing returns. And what about depreciation? Yeah. Unfortunately, human capital -- it wears out too. Think about all of the current human capital in the world. Where is it going to be in 100 years? Unfortunately I know. First we go into retirement, and after that, it's just depreciation, depreciation, depreciation. Moreover, it takes a lot of investment in schools and universities and time and effort to build human capital. At some point, we're going to need all of that investment just to keep the population as educated as it is now. So the accumulation of capital, whether it's physical capital or human capital, it can only get us so far. Now let's turn to an important prediction of the Solow Model. Poor countries should grow faster than rich countries. Now, that's a pretty bold prediction. If it were completely true, then all poor countries -- they'd be catching up to the rich countries. And all countries would be approaching similar levels of steady-state output -- perhaps with some differences due to differences in savings rates. Now as we saw before, there are growth miracles. Some countries like China and Korea -- they're clearly catching up. But there's also growth disasters. Countries like Nigeria and Argentina, which are falling further and further behind, or at least not catching up. Indeed, broadly speaking, over the last several hundred years, what we've seen isn't convergence, but divergence -- big time. But let's step back and remember that the factors of production in the Solow Model -- they're just one piece of the puzzle. When it comes to explaining prosperity, we also need to remember the importance of institutions, the institutions that create the incentives to accumulate and to use the factors of production in socially beneficial ways. Two countries with really different institutions -- they're not going to converge. But, if we focus in on countries with similar institutions, then the Solow Model predicts that the poorer countries should grow faster, and all countries with similar institutions -- they should converge to similar levels of output. We call this "conditional convergence." Conditional on institutions and other factors being similar, we'd expect poor countries to grow faster. Is it true? Let's take a look at the 20 founding members of the OECD, basically the Western developed economies. It seems reasonable to say that they've got similar institutions, so according to the Solow Model, they should have similar steady-state levels of output. 
Here we're going to plot the growth rate of these countries over 40 years on the vertical axis, and real GDP per capita in 1960 on the horizontal axis. Remember, the Solow Model predicts that the countries which were poorer in 1960 -- they should have grown faster over the next 40 years than the countries which were wealthier in 1960. And that's exactly what we see. The countries which were relatively poor in 1960 -- they grew faster than the countries which were relatively wealthy in 1960. So among countries with similar institutions, there is convergence -- conditional convergence. The Super Simple Solow Model, however, makes another prediction: zero growth in the steady-state. But clearly that's not what we see. The growth rates for the wealthier countries, they're lower than for the poorer countries, but they're not zero. The United States -- it's been growing consistently for 200 years, and we're still growing. That doesn't sound like zero growth at all. It's useful, however, to bring back the two types of growth that we discussed earlier: catching-up growth and cutting-edge growth. When you're catching up, when you're poor relative to your steady-state, that's when the Solow Model predicts that you grow quickly as capital accumulates. But then you slow down as you approach the steady-state. However, for the wealthiest countries in the world -- those are the cutting edge -- this model of capital accumulation, it fails to explain how you keep growing, albeit at a slower pace. So how do we explain growth at the cutting edge? Well, let's not forget about our last variable: ideas. Ideas are going to be the focus of our next video, and we'll see how new ideas can keep us growing on the cutting edge. - [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
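To make the convergence calculation concrete, here is a minimal Python sketch of the arithmetic behind a plot like the one described above. The GDP-per-capita figures are hypothetical placeholders, not the actual OECD data; only the 40-year window comes from the video.

def avg_annual_growth(gdp_start, gdp_end, years):
    # Compound average growth rate between two observations.
    return (gdp_end / gdp_start) ** (1 / years) - 1

poor_1960, poor_2000 = 4_000, 20_000    # hypothetical poorer OECD member
rich_1960, rich_2000 = 12_000, 30_000   # hypothetical richer OECD member

# Conditional convergence predicts the poorer starter grows faster:
print(f"poorer in 1960: {avg_annual_growth(poor_1960, poor_2000, 40):.2%} per year")  # ~4.10%
print(f"richer in 1960: {avg_annual_growth(rich_1960, rich_2000, 40):.2%} per year")  # ~2.32%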
Principles_of_Economics_Macroeconomics
The_Idea_Equation.txt
♪ [music] ♪ [Alex] In our final video on growth and the economics of ideas, we're going to look deep into our crystal ball. Given that ideas are the key to long-term economic growth, what are the prospects for future growth? Is the future bright or dim? To help answer, let's summarize the economics of ideas in a simple formula. Ideas are equal to Population times Incentives, times Ideas per hour. Now this isn't a precise formula, but it's more of a tool to help us think about the key factors in idea creation. We're going to start with what economist Julian Simon called "The Ultimate Resource" -- people. World population is increasing. But even more importantly, the population of idea creators is rapidly increasing. As I pointed out in my TED Talk, as China, India, and other parts of the world become rich, they contribute more and more to the production of new ideas, ideas which benefit everyone. Here's some data showing the population of researchers in a sample of countries around the world. In the United States today, about 4 people in every thousand are involved directly in idea creation. Not that many people really, especially when you think that a large fraction of economic growth comes from new ideas. The number of idea creators per thousand -- it's a little bit higher in a few other countries like Finland and Japan, and about the same in other developed countries, such as Canada, Germany, and Australia. Now take a look at China. In China, only 1 in every 1,000 people is a researcher. Still, combined with China's total population, that makes China a research leader. But what's even more remarkable is how quickly the number of researchers in China is increasing. In 2000, China only had about half a researcher per 1,000. So, China has doubled the number of idea creators in just about 10 years. As China and other countries become increasingly well-educated and wealthy, the number of idea creators in these countries is increasing to the benefit of everyone. Remember, ideas are built to be shared. Next up in our equation is a measure of institutions and incentives. How well are countries incentivizing idea creation? The news here -- it's also pretty good. As I discuss in my TED Talk, we're globalizing markets, making markets larger, and that increases the profit from R&D. In addition, institutions across the world -- they're getting better. There's positive movements towards better property rights, honest government, political stability, and a dependable legal system. All of this helps to incentivize the production of good ideas. We can see the effect of better institutions in a remarkable fact. As late as 1990, just seven nations accounted for 92% of the world's spending on Research & Development. Just seven nations! But today, those same nations account for only 56% of R&D. Overall, the world is investing much more in Research & Development than ever before, because the original seven -- they've now been joined by other R&D powerhouses, like China, Korea, and Brazil. The last factor in the equation is the number of ideas per hour. Assuming that we've got lots of idea creators and the right incentives, how productive can those idea creators be in producing new ideas? Now, this variable is the most mysterious and uncertain. Is it possible that idea production could run into diminishing returns? Diminishing returns in idea production would mean that we need more researchers and better incentives just to keep up. 
And it does seem that today we rely less on lone geniuses and more on big teams to make new discoveries. There's some evidence that it's becoming more expensive to produce new ideas, at least in some fields. But it's also possible that we'll see the opposite, Increasing Returns. We now have lots of technologies that make idea production easier. Take the internet, for example. It makes it easier to find and build upon old ideas. And how about online education? That makes education much faster and better. The internet has also opened up the world to more idea creators. Today, someone in Zambia can get access to a large fraction of the world's knowledge using just their cell phone. That's incredible! And a new technology may soon revolutionize the production of ideas. Artificial intelligence could speed up the production of new ideas dramatically. So, nothing is guaranteed, but there are good reasons to be optimistic about the future of economic growth, especially if we continue to educate the world, globalize markets, and produce new technologies that make idea production even easier. [Narrator] If you want to test yourself, click “Practice Questions.” Or, if you're ready to move on, you can click “Go to the Next Video.” You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Zimbabwe_and_Hyperinflation_Who_Wants_to_Be_a_Trillionaire.txt
♪ [music] ♪ [Narrator] It's not easy being a dictator. For one, there's a lot of other people around you who would love to be you, so you're constantly worrying about staying in power. Navigating the gray area between political rivals and political allies is a total headache. Plus, there's all those pesky people who you're supposed to be in charge of. How to give them as little as possible without inciting rebellion is a never-ending balancing act. Robert Mugabe, the president of Zimbabwe, was facing these problems around 2000. He needed money to bribe his enemies and reward his allies. Unfortunately, he had taxed pretty much everything there was to tax, and his policies had scared away investors. The economy wasn't doing well and his people were unemployed and hungry. So, where did he get the money? Well, one of the perks of running a country is that you get your very own money making machine -- the printing presses. So, in a pinch, you can just print more money, which is exactly what Mugabe did. The newly-printed money didn't increase productivity in the Zimbabwean economy and there was no new investment. So the economy couldn't produce more goods. In effect, you had more money chasing the same goods. More money chasing the same goods meant that the purchasing power of the Zimbabwean dollar fell. You needed more dollars to buy the same stuff as before. In other words, as the newly-printed money began flooding the market, prices began to rise. Prices began to increase at a rate of about 50% a year. And that was only the beginning. As prices rose, the government had to print even more money to buy just as many goods as before. And so they did. And that is how things got out of control. The faster prices rose, the more money the government printed, and the faster prices rose: a feedback loop. By 2001, prices were rising at a rate of 100% per year. By 2002, 200% per year. 2003 -- 600% per year. By 2006, prices were rising at over 1,000% per year and it cost 417 Zimbabwean dollars to buy toilet paper. No, not per roll, Z$417 per sheet. Money was devaluing so quickly that the money you had in the morning would be worth quite a bit less by the evening. So people were trying to get rid of currency as soon as they got it. Zimbabweans became millionaires, but unfortunately, a million Zimbabwe dollars might buy you a chicken, if you were lucky. And still the government kept printing money, and in higher and higher denominations: Z$1,000,000 notes, 100 million, 10 billion, 100 billion dollar notes. In 2008, prices started rising by thousands of percent a month and the government started printing 100 trillion dollar notes. At the height of this feedback loop, prices were increasing at an astronomical rate of 7.6 billion percent a month, and one US dollar would get you, well, we're not trying to say it, but this many Zimbabwean dollars. By the end of 2008, the Zimbabwean dollar had effectively ceased to exist, and Mugabe had no choice but to legalize transactions in foreign currencies. The Zimbabwe hyperinflation was over. Hyperinflations have occurred in other countries, such as Yugoslavia in 1994, China in 1949, and Germany in 1923. As in Zimbabwe, these hyperinflations were caused by governments that were desperate for cash, but with few means to raise funds except the printing presses. The Zimbabwe hyperinflation also illustrates a more general principle that we will be exploring and testing in greater detail in upcoming videos. 
And that is -- inflation is caused by increases in the supply of money. You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
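As a rough sketch of why these feedback loops explode so quickly, here is how a monthly inflation rate compounds over a year. The monthly rates below are illustrative round numbers in the spirit of the episode ("thousands of percent a month"), not the exact Zimbabwean figures.

def annualize(monthly):
    # Compound a monthly inflation rate over 12 months.
    return (1 + monthly) ** 12 - 1

print(f"{annualize(0.10):.0%} per year")   # 10% a month compounds to ~214% a year
print(f"{(1 + 10.0) ** 12:,.0f}")          # at 1,000% a month, prices multiply
                                           # roughly 3.1 trillion-fold in a year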
Principles_of_Economics_Macroeconomics
Changes_in_Velocity.txt
♪ [music] ♪ [Alex] Welcome back to the aggregate demand - aggregate supply model. In the previous video, we focused on what happens when the aggregate demand curve shifts due to a change in the money growth rate. Now we're going to ask what happens when AD shifts because of a change in "V". This time, we'll assume the money supply is constant. You can think of velocity as how often money changes hands. So, an increase in V means that spending has increased because money is changing hands at a faster rate. It's useful to recall the National Income Spending Identity -- Y = C + I + G + NX. Now all this does -- it just breaks spending down into different categories. So if V changes, then the growth rate of either C, or I, or G, or NX must change. For example, suppose that the government starts spending a lot more -- say a big increase in defense spending. The increase in spending shifts out the aggregate demand curve. So, in the short run, the economy moves from point A to point B, creating a higher growth rate, a boom from all that defense spending, and also a higher inflation rate, as that spending pushes up prices. That's the short run. Now, what happens in the long run? Well, in the long run, government spending growth has to return to normal. Government can't keep growing spending at a higher rate forever, because, in the long run, government spending can't grow faster than the economy grows. So, in the long run, the aggregate demand curve shifts back, as the growth rate of government spending returns to normal. So, after the boost in spending works its way through the economy, we return to point A. So changes in M and V shift the aggregate demand curve in slightly different ways. The money supply growth rate can be increased or decreased permanently, but changes in V are always temporary. The best way to remember this is that, in the long run, we have to be on the long-run aggregate supply curve. And, in addition, in the long run, the inflation rate is determined by the money supply growth rate. So if the money supply growth rate hasn't changed, then the inflation rate can't change in the long run either. Now even though changes in V are always temporary -- an increase in V shifts the AD curve out, and then back. A decrease in V shifts the AD curve in, and then back. Even though these changes are temporary, they can still cause business fluctuations. And if the changes are big and very negative, these changes can even create a recession. Let's illustrate with another example. Suppose that "C," consumption growth, decreases. Now why might this happen? Fear. Imagine that consumers suddenly become pessimistic and fearful about the economy, as they did in 2008, when it looked like the banking system was on the edge of collapse. Workers and consumers, fearing that they might lose their jobs -- they cut back on their spending. They try to hold onto their money. Instead of buying a new car, they hold onto their old car. They decide that now is not a good time to remodel the kitchen. They start saving more money -- just in case. The "animal spirits" -- to use John Maynard Keynes' famous phrase -- they've "turned negative." The decrease in consumption growth shifts the AD curve inwards. In the short run, the decrease in spending moves the economy from point A to point B, where inflation and real growth are both reduced. In the long run, however, fear recedes, prices and wages adjust, and the economy moves back to point A.
Now, by the way, if you're thinking, "Why don't we try to offset the fall in C with an increase in G?" Good thinking! That's the idea behind fiscal policy, which we'll take up in greater depth in a later video in this course. For now, let's sum up. Let's review some of the causes of the shifts in the AD curve. Since M is money supply growth, and V can be broken down into C, I, G, or NX, the causes boil down to anything that can affect these factors. So, for example, faster money growth shifts the AD curve outwards. Slower money growth, especially slower than expected -- that shifts the AD curve in. For consumption, confidence and fear -- that could increase or decrease consumption growth. Confidence and fear -- animal spirits -- these are also big drivers of investment. Big changes in tax rates could have similar effects, with lower taxes increasing spending, and higher taxes reducing spending. As we've already discussed, increases in government spending shift the AD curve out, and decreases in government spending shift the AD curve in. Finally, since NX is equal to exports minus imports, an export boom would increase domestic aggregate demand, as would a cut in imports. For the same reasons, a decrease in exports, or a boom in imports, would reduce domestic spending, and shift the AD curve in. Now don't try and memorize this list. As always, it's better to understand the logic. Forces that increase spending -- they shift the AD curve out and cause a positive shock to the economy. Forces that decrease spending -- they shift the AD curve in and cause a temporary negative shock to the economy. Okay. That's our aggregate demand - aggregate supply model. We can now use the model to understand the most devastating economic fluctuation in U.S. history: the Great Depression. [Narrator] You're on your way to mastering economics. Make sure this video sticks, by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
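Here is a minimal Python sketch of the bookkeeping behind this video: with the money supply M held fixed, any change in spending on C + I + G + NX must show up as a change in velocity, since V = (P x Y) / M. All numbers are hypothetical.

M = 4_000                                   # money supply, held constant (hypothetical units)
spending = {"C": 14_000, "I": 3_500, "G": 3_800, "NX": -500}

nominal_gdp = sum(spending.values())        # P * Y, the sum of all spending
print(f"V = {nominal_gdp / M:.2f}")         # baseline velocity: 5.20

spending["G"] += 600                        # a temporary burst of defense spending
nominal_gdp = sum(spending.values())
print(f"V = {nominal_gdp / M:.2f}")         # velocity rises to 5.35 -- AD shifts out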
Principles_of_Economics_Macroeconomics
Measuring_Inflation.txt
♪ [music] ♪ [Alex] In today's video, we're going to take a closer look at what inflation is and how it's measured. Now, shifts in supply and demand -- they're pushing some prices up and other prices down all the time. Let's think about each of these prices like ping pong balls -- ping pong balls in an elevator. Now inflation is when the average price is going up. Inflation is when the elevator is going up. We measure the average level of prices using a price index, the average price from a large and representative basket of goods and services. There are different price indexes that are based upon different baskets. The consumer price index, or CPI -- it's based on a basket of thousands of goods and services which are bought by consumers in the United States. And it's a weighted average, so an increase in the price of a major item, like housing, counts for more than an increase in the price of a minor item, like toothbrushes. The inflation rate can then be measured as the percentage change in the index over a period of time, say a year. So let's take a look at the inflation rate in the United States, as measured by the CPI. If we google "Inflation United States FRED," we'll find a graph like this. The graph shows us the CPI. Now this index is defined so that the average price in the years 1982 to 1984 -- that's set equal to 100. In mid-2016, the index was 239. So that means that over the past 33 years, prices on average have more than doubled. Now that doesn't mean that we're necessarily worse off today than in the past, because wages have also gone up over this time period. And, in fact, wages have gone up, on average, by more than prices. By clicking on edit data series, we can change to an annual series. Now we can see that in 1973 the CPI was 44.425. And in 1974, the average price of the CPI basket -- it had risen to 49.317. We can now calculate that the rate of inflation over this year was 11.01%. The calculations can be a little bit tedious. So let's have FRED do the work. We'll change the units to percent change from one year ago. We now see the annual inflation rate in the United States from 1948 to 2016. Notice that in 1974 the inflation rate was 11.01%, just as we calculated. You can see that the inflation rate increased in the United States in the 1960s and the 1970s, peaking around 1980 at a little over 14% per year. After 1980, inflation rates fell to an average of about 2.5% for many years. Inflation even turned negative, a little bit of deflation, very briefly during the 2009 recession. Even in the 1970s, the United States had a relatively low inflation rate by world standards. As a point of comparison, let's consider Venezuela today. In Venezuela, the inflation rate in 2015 hit 180%. And it didn't stop! It's estimated that in 2016, the inflation rate in Venezuela will hit 500%, or even higher. Now even Venezuela has a long way to go before it competes with a hyperinflation leader like Zimbabwe, which, as we know from our previous video, hit rates of billions of percent per month at the peak of its hyperinflation. Okay. Now that we have a better idea about what inflation is and how it's measured, we're going to look in more detail at the causes and the consequences of inflation. That's up next. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos.
♪ [music] ♪
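The hand calculation in this video is a one-liner; here it is as a small Python sketch, using the CPI values quoted above.

def inflation_rate(cpi_old, cpi_new):
    # Percentage change in the price index over the period.
    return (cpi_new - cpi_old) / cpi_old * 100

print(f"{inflation_rate(44.425, 49.317):.2f}%")   # 1973 -> 1974: 11.01%, matching FRED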
Principles_of_Economics_Macroeconomics
Econ_Duel_Rent_or_Buy.txt
♪ [music] ♪ [Tyler] So Alex, we're economists. Often people ask me, well what is it that we know about investment and investment advice anyway? Now, we've already talked about equities, but as economists do we have anything else to tell people? [Alex] Let's talk a little bit about housing, because housing is one of these areas where there's a lot of myths around it. People in America, it's part of the American Dream. And one thing I think people don't realize is that over the long run, house prices are certainly not guaranteed to go up. I mean we saw in the financial crisis that house prices came way down. But there's still this kind of idea that people have in their heads that they're not making any more land, so in the long run house prices have to go up. And we know from our understanding of assets that you should not expect your house to be a great financial investment. But isn't there a tax reason to buy a home? So if I borrow money for a mortgage, and I'm paying income tax and itemizing my deductions, I can write off a lot of that mortgage interest and get some of the money back. And that means it could be cheaper or more advantageous to buy the home because of taxes. But don't forget, a lot of that tax advantage is going to be captured not by the buyer, but by the seller. What the tax advantage means is it pushes up the prices of homes. It's already built into a higher price, so you, as the buyer, don't always get that gain. Often it's the seller. Well, but think of this in terms of elasticity. Say I live in an area, like many parts of Texas or Florida, where I get this tax break that increases the demand for homes, yes. But then suppliers build more homes and they drive the price back down again. And it seems that in a lot of states actually, the buyer should be reaping a lot of that tax break. Yes, No? If only there were more places in the United States like that, I would tend to agree. But one of the big problems we have in the United States today is that the demand for housing goes up and you're in some place like San Francisco, or New York, or Boston where it's impossible to get permits to build more housing. And when the demand goes up, all that means is the price of the house goes up. So all of those gains, whether it's from the tax system, or whether it's from people wanting to move to San Francisco, all of those gains go to the land owners. And that's actually a big problem we have in the United States today. But I do see a stability reason to own a home. Say you're 37 years old, you have 2 kids in school, you want them to go to a good high school district. You don't want to be told, "You're going to have to move." You want to arrange your backyard the way you see fit. And you don't want to have to renegotiate a rent contract. All of those factors militate in favor of buying a home. So I think you're right for those people. But one thing you've got to keep in mind is that on average, houses are not going to be a great financial investment. So you have to be exactly one of those people who wants extra stability, more stability than the average person. That's when you're going to gain from buying a house. And, keep in mind, that when you buy a house and that heating system collapses and you've got to repair that, that's a big problem. When you're hit by a lightning strike and you've got to repair the roof, that's a big expense as well. So when people say, "I'm only worried about the rent going up." Well that's fine. 
They may be worried about that, but don't forget, you've got to be worried about replacing the roof every 20 years, as well. So it's really a marginal question, you're saying. Like at the margin, do you need the tax break more than the typical buyer? At the margin, do you value the good high school district more than the typical buyer? And at the margin, are you better at fixing the broken roof or hiring someone to do it than the average buyer? And those things may or may not apply to you, but that's the right way to think about it? Exactly right, so if you want that tax advantage, you've got to be earning more income than average. You've got to be itemizing your deductions. If you're not one of those people in the upper middle class, you're not going to get that tax advantage. If you've got one kid, maybe the school is not so important. Maybe you have to have two before you really get that advantage from the schooling. So yeah, you've got to be thinking about how you're different from the average. If you really want to buy a home, you've got to love buying a home. I think of people as needing to save more typically, that we're programmed to think about the here and now, we're a bit impatient. Perhaps we haven't evolved to think well enough about the more distant future. If you buy a home, pay off your mortgage at the end of 30, or one hopes 15 years, you own something. In the meantime, you're saving. And you get into a routine that doesn't even feel like saving. It's more savings than if you're writing a rent check every month. So maybe it's our own imperfections. We need to lock ourselves into a higher savings regime, and that's another possible reason to buy a home. Yeah, I do think the forced savings argument has got something to it. But there's a big problem, especially in the United States today, and that is there's such an encouragement to buy houses with no money down, even after the financial crisis. If you really are concerned about savings, the key point is to have a 20% down payment. So save up for that down payment. And that is really what is going to be the forced savings aspect of buying the house. You know, I think the biggest piece of advice I'd give to people is just to be on that wealthier side of the equation, so that owning a home makes sense for you. Don't forget our earlier investment rule: Diversify, diversify, diversify. And yet, when it comes to housing, people are encouraged to put a huge amount of their wealth into one asset, in one place in the country. That could be a terrible decision. If you're in a small town with only one employer, and you have a house and that employer goes bust, well your house price is going to fall, you might lose your job, your income is all going to fall, and it's all going to be happening concentrated, all in one place all at one time. [Announcer] What do you think? To see previous episodes of Econ Duel, check out our playlists. Or if you're craving more financial advice, click to find out if mutual funds are a good investment. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Defining_the_Unemployment_Rate.txt
♪ [music] ♪ [Alex] Let's begin today by taking a look at the unemployment rate in the United States. If we Google "unemployment rate United States FRED," we'll get this graph from the St. Louis Federal Reserve economic database. We can see from the graph that the unemployment rate fluctuates. Since 1950, it's averaged about 6% per year, but it dipped below 3% once, and it had highs of 10.8% in December of 1982, and almost as high, 10%, in October of 2009, the Great Recession. Not surprisingly, the unemployment rate increases during recessions, and those are shown by the shaded areas. Perhaps a little bit more surprisingly, the unemployment rate is never zero. Not even in a boom. In a growing economy, lots of things are changing. And even as some firms are expanding, others are shutting down. Workers -- they're moving about, they're entering the work force, they're looking for new jobs and so forth. We'll talk more about that in an upcoming video. Now let's look at how unemployment is defined. A person with a job is employed. But in the official definition, not everyone without a job is unemployed. Is a six-year-old unemployed? Is a prisoner unemployed? What about a retiree? In each of these cases, the official answer is no. A person is counted as unemployed only if they're an adult, non-institutionalized civilian without a job, and actively looking for work. The most important part of this definition is that to be considered unemployed, a person must be out of a job, but actively looking for work. And that means that they must have taken some action to find a job in the last four weeks. The unemployment rate is the number of people who are unemployed divided by the number of people in the civilian labor force -- the employed plus the unemployed. Let's add the civilian labor force and the number of unemployed people to our graph to see all of these relationships, and get an idea of the magnitudes. We can see, for example, that in February of 2015, there were 157 million people in the labor force. Of these, 5.5% were unemployed, which means that there were 8.6 million unemployed people. Now in case you don't want to check those calculations, let's make it easy. Let's go to January of 1978. At that time there were about 100 million people in the labor force. The unemployment rate was 6.4%, and there were 6.4 million people unemployed, just as expected. In the next video, we're going to look at a common criticism of the unemployment rate: Is unemployment undercounted? Is there a conspiracy to undercount the unemployment rate? [Narrator] If you want to test yourself, click “Practice Questions.” Or, if you're ready to move on, you can click “Go to the Next Video.” You can also visit MRUniversity.com to see our entire library of videos and resources.
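A minimal Python sketch of the definition in this video, checked against the two dates quoted above.

def unemployment_rate(unemployed, labor_force):
    # Unemployed divided by the civilian labor force, as a percent.
    return unemployed / labor_force * 100

# January 1978: about 100 million in the labor force, 6.4 million unemployed.
print(f"{unemployment_rate(6.4e6, 100e6):.1f}%")          # 6.4%

# February 2015: 157 million in the labor force, 5.5% unemployed.
print(f"{0.055 * 157e6 / 1e6:.1f} million unemployed")    # ~8.6 million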
Principles_of_Economics_Macroeconomics
Real_GDP_Per_Capita_and_the_Standard_of_Living.txt
♪ [music] ♪ - [Alex] Is Real GDP per capita a good measure of the standard of living? People tell me all the time, "You economists, you're too materialistic." Doesn't Real GDP per capita just measure the things we buy? What about our health, our happiness, education? Well, Real GDP per capita -- it's not a perfect measure. But I want to show you why it's probably the best single measure of the average standard of living in a country. And that's not because material goods are the most important goods. It's because Real GDP per capita is correlated with many of the other things that we care about. Let's start with life expectancy. Here we show Real GDP per capita along the horizontal axis and life expectancy along the vertical axis. As you can see, there's a positive correlation. Countries that have a higher GDP per capita also have a higher life expectancy. Perhaps that's not too surprising. Let's take a look at happiness. Maybe this is a more surprising fact. This chart shows GDP per capita on the horizontal axis and now a measure of happiness on the vertical axis. Again, we see a positive correlation. Countries with a higher Real GDP per capita also tend to have happier people, on average. Here's a data set from the United Nations. It's called the Human Development Index. It combines measures of life expectancy, education, and standard of living. Overall you can see, in general, as GDP per capita increases, so does human development -- at least as measured by this index. The basic story -- it's pretty simple. When we have more goods and services, we can usually afford more of the other good things in life. So the good things in life -- they tend to go together. However, GDP per capita is far from perfect. Here's one problem. GDP per capita misses the distribution of income. For example, let's compare the Real GDP per capita of Nigeria, Pakistan, and Honduras. It's actually pretty similar. So you might think that all three countries have similar living standards. And yet, in Nigeria, about 80% of the population lives on less than $2 a day. In Pakistan, it's only 60%. In Honduras, it's only 33%. How can the number of people living in abject poverty be so different, when Real GDP per capita is about the same? The reason is that income in Nigeria is much more unequally distributed than in Pakistan or Honduras. Nigeria has many poor people, but also some very rich people. So average income -- it's about the same in Nigeria, Pakistan, and Honduras, even though there are more poor people in Nigeria. Over time, however, growth in Real GDP per capita, whether in Nigeria, Pakistan, or Honduras, usually does indicate growth in everyone's incomes, including the incomes of the very poor. So this graph shows growth in per capita incomes along the horizontal axis, with growth in the incomes of the poorest 20% on the vertical axis. Once again you see, as average per capita income increases, you also see increases in income of the very poor. Overall, Real GDP and Real GDP per capita have proven to be useful measures for comparing the standard of living of two different countries, or for comparing the same country at different points in time. Okay. So now that you know that Real GDP per capita -- it's a good measure of the standard of living, we get to the really crucial question. How do we increase the standard of living? How do we grow an economy? How do we increase Real GDP per capita? That is a big question, the big question of development. We'll be tackling it in a number of future videos.
But before you go, take a moment to let us know how we're doing. What do you think of the videos? How can we improve? Drop us an email or leave us some feedback on our website. Thanks. - [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Causes_of_Inflation.txt
♪ [music] ♪ [Alex] Today, we're going to explain the primary cause of inflation. And we're going to do so using the quantity theory of money. Let's start by rewriting our equation slightly. We'll divide both sides by Y, so we get this. What this equation tells us is that if prices are changing, there are three possible causes -- changes in M, V, or Y. Now remember that P -- prices -- they can change quite a bit in a short period of time. There are many times and places, for example, when prices have doubled or tripled in a year. On the other hand, V and Y are pretty stable. Consider Y -- that's real GDP. Real GDP -- it doesn't vary that much within a year. An increase of 10% in a single year -- that would be astonishing growth. And a fall of 10% -- that would be a very unusual, Great Depression-sized collapse. So changes in real GDP -- they don't seem like a plausible candidate for explaining large and sustained changes in prices. What about V -- the velocity of money? The velocity of money is the average number of times that a dollar is used to purchase final goods and services in a year. In the U.S. economy in recent years, V -- it's been about seven. And it's determined by the same kinds of factors that might determine your personal V, factors like whether you're paid weekly or biweekly, or how long it takes to clear a check. As we'll discuss later, V can change in the short run, but it might go up to eight or down to six -- usually not much more than that. So, again, V doesn't seem like it can change enough to explain large and sustained changes in prices. So if Y and V are relatively stable, which we'll note by adding a bar over top, then it follows immediately that the only thing that can cause an increase in P is an increase in M. In other words, increases in prices are caused by increases in the money supply. It's changes in the money supply that are driving the speed and the height of our inflation elevator. We can summarize this by writing the quantity theory of money in a nutshell. Here's our equation written in the earlier form. Now what this equation says is very simple and intuitive. When more money chases the same amount of goods and services, prices must rise. Okay. How well does the theory hold up? In this figure, we plot the price level and the money supply from Peru during its hyperinflation. A product with a price of one Peruvian inti in 1980 -- it would have cost 10 million intis by 1995. Now what caused this massive increase in prices? Well, just as the quantity theory would predict, we also see at this time a massive increase in the money supply. M skyrocketed and so did P. We can also write the quantity theory in terms of growth rates, which we'll indicate with a little arrow above the variable. What the growth form of the quantity theory tells us is that if V and Y, if they're not growing too much, then the growth rate of M should be equal to the growth rate of prices. And remember, the growth rate of prices is the inflation rate. Here's the same data from Peru as before, except now we're looking at the growth rate of the money supply and the growth rate of prices. As the growth rate of the money supply increased, so did the inflation rate. Amazingly, the money supply was growing at a rate of 6,000% per year in 1990. And as the quantity theory predicts, the inflation rate -- it was about 6,000% per year in 1990. Okay -- so the theory works pretty well for Peru in 1990. What about other times and places?
Here we show inflation rates on the vertical axis and money growth rates on the horizontal axis. This is for about 110 countries between 1960 and 1990. You can see that, on average, the relationship is close to perfectly linear, with a one percentage point increase in the money supply growth rate leading to a one percentage point increase in the inflation rate. This tells us three very important principles. First, in the long run, money is neutral. A doubling of the money supply will, in the long run, lead to a doubling of prices. Second, if we're thinking about a significant and sustained inflation rate, then Milton Friedman had it exactly right when he said, "Inflation is always and everywhere a monetary phenomenon." Third, since central banks often have significant control over a nation's money supply, they also often have significant control over a nation's inflation rate. Okay. Keep those three principles in mind. We'll be referring to them in future videos. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
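A sketch of the growth-rate form described above: taking growth rates of M x V = P x Y gives, approximately, money growth + velocity growth = inflation + real growth, so inflation is roughly money growth + velocity growth - real growth. The 6,000% figure is the Peruvian number quoted in the video; the other inputs are illustrative.

def inflation(money_growth, velocity_growth, real_growth):
    # Approximate inflation implied by the quantity theory, in growth rates.
    return money_growth + velocity_growth - real_growth

print(f"{inflation(0.05, 0.00, 0.03):.0%}")   # 5% money growth, 3% real growth -> 2% inflation
print(f"{inflation(60.0, 0.00, 0.00):.0%}")   # Peru 1990: 6,000% money growth -> ~6,000% inflation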
Principles_of_Economics_Macroeconomics
Physical_Capital_and_Diminishing_Returns.txt
♪ [music] ♪ - [Alex] In our last video, we introduced the variables in our Super Simple Solow Model. We have physical capital, represented by "K," human capital, represented by "e" times "L," and ideas, represented by "A." In this video, we're going to hold human capital and ideas constant. That will let us focus in on K so we can show what happens to output when the amount of physical capital changes. Since capital is the only input, output is a function just of the quantity of capital. Let's write output with the letter "Y." Then we can say that Y is a function of K. Output is a function of the quantity of capital. What properties should our production function have? First, it makes sense that more K increases output. Recall from our earlier video, our farmer. A farmer with a tractor can produce a lot more output than a farmer with just a shovel. Similarly, a farmer with two tractors can produce more output than a farmer with just one tractor. If we graph capital on the horizontal axis and output on the vertical axis, we're going to see a positive relationship. As capital goes up, output goes up. That seems pretty straightforward. The second property our production function should have is that while more capital produces more output, it should do so at a diminishing rate. What do I mean by that? Let's go back to our farmer. The first tractor he gets is the most productive. It helps him grow a lot more wheat. The second tractor he might use if the first tractor -- it breaks down. So the second tractor is less productive than the first. The third tractor is maybe just a spare in case both break down. So the third tractor will boost his output even less than did the second. Said another way, the farmer will allocate his tractors so that the first tractor, he's going to allocate to the most important, the most productive task. Meaning that subsequent tractors -- the farmer will allocate them to less and less productive tasks. We call this the Iron Logic of Diminishing Returns. To represent both of these properties, we can use a simple production function, one which we're already familiar with: the square root function. Output equals the square root of the capital inputs. So if we input 1 unit of capital, output is 1. If we input 4 units of capital, output is 2. If we input 9 units of capital, output is… 3. The marginal product of capital describes how much additional output is produced with each additional unit of capital. Notice that the marginal product of the first unit of capital is really high. But as the capital stock grows, the marginal product of capital is less and less and less. Already, we can explain one of our puzzles. Recall that growth was fast in Germany and Japan after World War II. That makes sense, because after the war, those countries -- they didn't have a lot of capital. So that meant that the first units of capital had a very high marginal product. The first road between two cities or the first tractor on a farm, or the first new steel factory -- that gets you a lot of additional output. Capital's very productive when you don't have a lot of it. But don't forget that Germany and Japan were growing from a low base. You can grow fast when you don't have a lot, but all else being the same, you'd rather have more and grow slower. So, capital can drive growth, but because of the iron logic of diminishing returns, the same additions to the capital stock may get you less and less output. 
Unfortunately for K, in the next video we'll show that capital has another problem to deal with. - [Announcer] If you want to test yourself, click “Practice Questions.” Or, if you're ready to move on, you can click “Go to the Next Video.” ♪ [music] ♪ You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
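A small Python sketch of the square-root production function used in this video, showing the iron logic of diminishing returns: each extra unit of K adds less output than the one before.

from math import sqrt

for K in [1, 2, 3, 4, 9, 16]:
    mpk = sqrt(K) - sqrt(K - 1)    # extra output from the K-th unit of capital
    print(f"K = {K:2d}   Y = {sqrt(K):.2f}   marginal product of unit {K}: {mpk:.2f}")
# The first unit of capital adds 1.00 to output; the 16th adds only about 0.13.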
Principles_of_Economics_Macroeconomics
When_the_Fed_Does_Too_Much.txt
♪ [music] ♪ [Alex] The economy is complex and it operates according to uncertain rules. This makes monetary policy difficult, and sometimes the Fed's actions have made things worse rather than better. Let's take a look at the Great Recession. In an earlier video, we showed how increased leverage, mortgage securitization, and overconfidence contributed to the Great Recession. In this video, we're going to take a look at how the Fed's actions before the recession -- how they might have promoted the housing bubble, making the eventual recession worse. In the late 1990s, the American economy was booming, with low unemployment and low inflation. The recession in 2001 appeared to be mild, but it was troubling that the unemployment rate remained high even after the recession officially ended. In an effort perhaps to bring back the 1990s, and to reduce unemployment, the Fed continued to try to increase aggregate demand, even after the recession had ended. In particular, the Fed kept the federal funds rate very low. A low federal funds rate makes credit cheaper, and cheap credit can fuel an asset bubble. A bubble is when asset prices rise far higher and faster than can be explained by the fundamentals. Irrational exuberance, rather than analysis, begins to drive prices higher and higher and higher. By keeping interest rates low, the Fed's policy in the early to mid 2000s encouraged people to buy more homes. Housing construction increased and that did generate lots of jobs. But as housing prices increased, year after year after year, it also made buyers and lenders overconfident. The Fed kept the federal funds rate very low until mid 2005. Housing prices peaked in 2006, shortly after the rate began to increase. And housing prices started to crash in 2007. When housing prices started to fall, homeowners felt poor and they spent less. Home construction slowed down and halted. Aggregate demand fell. The Fed probably did keep interest rates too low for too long, but they also underestimated the effects that a decline in the housing sector would have on the overall economy. In fact, few people at the time understood how large the shadow banking system was, or how tied it was to the housing sector through mortgage-backed securities. We should also recognize that bubbles -- they're much easier to see in hindsight than they are in real time. Every bubble comes with a story about why this time is different. The trouble is, sometimes the times -- they really are different. Even if the Fed knew that housing prices were too high, and even if they had wanted to restrain prices, the Fed has limited tools. Monetary policy -- it's simply a crude way to pop a bubble. Monetary policy can influence aggregate demand, but by reducing aggregate demand, the Fed is slowing down the entire economy, not just the housing sector. That's not a very efficient way to manage an asset price bubble. Now the Fed does have the power to regulate banks, and it could have used some of that power to restrain some of the abuses in subprime mortgage lending. That would have been a more targeted attack on the sector that was overheating. So the Fed's actions may have contributed to the housing bubble and the Great Recession, but failing to act can also have disastrous consequences. For example, most economists agree that the Fed's inaction during the 1930s made the Great Depression much worse. During that time, the U.S. money supply fell by about a third -- the largest drop in aggregate demand in American history.
And the Fed mostly watched from the sidelines, when instead very strong actions to increase the money supply were called for. Rather than hope that wise policymakers will make the right decision at just the right time, some economists argue that the Fed should follow a rule instead of relying on discretion. Milton Friedman, for example, suggested a money supply rule, a rule that would have M1 or M2 grow at a constant rate -- say 3% a year -- to match the growth rate of real GDP. Money supply rules work best when velocity is constant. But when there are large shocks to the economy, such as the Great Depression and the Great Recession, velocity usually falls. So these rules can mislead just when you need them most. To avoid some of the pitfalls of a strict money supply rule, other economists have suggested targeting inflation, or nominal GDP. A nominal GDP rule, for example, would keep nominal GDP -- M times V -- growing at a constant rate. If the Fed had followed a nominal GDP rule, for example, then the recession of 2008 -- it might have been much milder. But it's not clear that it could have followed the rule. Beginning in late 2008, the Fed doubled the monetary base in just four months -- the largest increase in history. But keeping nominal GDP growing would have required injecting even more money into the economy. It's not clear that the Fed could have done that, as that kind of action is unprecedented in monetary history. And we don't really know how the economy, or the political system, how they would have responded to such unprecedented policies. The bottom line is this -- the Federal Reserve has some powerful tools at its command, so designing monetary institutions and rules is important if the Fed's power is to help more than to harm. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
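A minimal sketch of the arithmetic behind a nominal GDP rule, using the growth-rate form of M x V: to keep nominal GDP growing at a target rate, money growth must offset velocity growth. The numbers are illustrative, not actual Fed targets.

def required_money_growth(ngdp_target, velocity_growth):
    # Growth-rate form of M * V = nominal GDP: money growth = target - velocity growth.
    return ngdp_target - velocity_growth

print(f"{required_money_growth(0.05, 0.00):.0%}")    # normal times, V flat: 5% money growth
print(f"{required_money_growth(0.05, -0.10):.0%}")   # crisis, V falls 10%: 15% money growth needed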
Principles_of_Economics_Macroeconomics
Quantity_Theory_of_Money.txt
♪ [music] ♪ [Narrator] Today we're going to introduce an important tool for thinking about issues in macroeconomics -- the quantity theory of money. Let's imagine the journey a dollar bill might take in a year. Imagine that the dollar bill starts with Tyler, who buys a pupusa from Don, the street vendor. Don gives it to his daughter, who spends it on a pony ride at the fair. It ends up in the hands of Alex, who, after losing it and then finding it in his couch cushions, buys a cup of coffee while on a road trip to see his favorite polka band. So in a year, this dollar has been spent three times -- on the pupusa, the pony ride, and on a cup of coffee. Okay, we've got the building blocks to understand the quantity theory of money already. Our dollar bill -- well, that's money, which we represent with the letter "M" -- how many times that dollar gets used in a year is called the "velocity of money," which we'll label with a "V." In this case, V is 3, as our bill was spent three times in a year. The pupusa, pony ride, and coffee are real goods and services, which we'll call "Y." And the price of those goods and services, we'll call "P." These are the variables in the quantity theory of money. Now, let's think about this for an economy as a whole. M would be the money supply -- all the money in the economy. V would be how many times a dollar is spent purchasing finished goods and services. Some people hoard cash under their mattress, so their dollars have a low velocity, while others spend or invest their money quickly, and their dollars have a high velocity. V is the number of times the average dollar is spent. P would be the price level of all finished goods and services in an economy, and Y would be all the finished goods and services sold in an economy. So Y is real GDP, and when you multiply it by a measure of the price level, you get nominal GDP. Same thing over here. You take all our money, M, and multiply it by how many times the average dollar was spent, or V, and you get nominal GDP as well. Both sides represent nominal GDP in a different way. So, these are equal by definition, which is why we call this equation an identity. One way to think about it is that how much money we have in total, times how many times the money is spent, covers the actions of buyers. The stuff we sell, times the prices we charge, covers the actions of sellers. Given that everything that is sold is, by definition, bought by someone, this equation is true by definition. There are important questions about the variables and how they are measured. How do we measure M, for example? However, the core identity, that M times V must equal P times Y, gives us a lot of insight, and a way of organizing our thoughts about important macroeconomic issues. So, we'll return to this tool often. For example, what can this identity equation tell us about the causes of inflation? That's the topic we'll turn to next. You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
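Here is the dollar-bill story above as a few lines of Python -- a sketch of the identity, not a measurement exercise.

M = 1       # one dollar bill
V = 3       # spent three times during the year
Y = 3       # three final goods: the pupusa, the pony ride, the coffee
P = 1.0     # each sold for a dollar, so the average price is $1

assert M * V == P * Y    # both sides are nominal GDP, so the identity holds by definition
print(M * V, P * Y)      # 3 3.0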
Principles_of_Economics_Macroeconomics
The_Money_Multiplier.txt
♪ [music] ♪ [Alex] Now that we know how money is defined, we'll learn how banks can affect the supply of money through fractional reserve banking. Let's imagine that you graduate from college and your grandma gives you $1,000 in cash -- cash that she's been saving under her mattress since the 1970s. And you deposit this cash in your checking account. What does the bank do with your money? Does it sit in a vault with your name on it? No. Banks lend most of your money to people who want to borrow. Banks keep in reserve only a fraction of your money -- money they keep in cash for the ATM or to meet withdrawal demands. This is why this system is known as "fractional reserve banking." So what fraction of your deposit do banks keep in reserve? Well, large banks in the United States must keep in reserve at least $1 for every $10 in deposits. Or we say, large banks are required to have a reserve ratio of at least 10%. But banks often have higher reserve ratios, depending upon how liquid they want to be. If a bank is worried that its customers might withdraw most of their money, or if bank loans are just not that profitable, banks will hold more reserves. So the reserve ratio can be greater than 10% and it can change over time. Because of fractional reserve banking, the banking system has a big effect on the supply of money. Let's see how. Suppose that your bank keeps 10% of your $1,000 deposit, or $100, as reserve. And suppose it lends out 90%, or $900, say to Tyler, who's interested in starting a business. That $900 loan is credited to Tyler's checking account. So now there's $1,900 in new deposits. And since checkable deposits are part of the money supply, the money supply has increased. And it doesn't stop there. Suppose that the bank holds 10% of Tyler's deposit in reserve and it lends out 90%, or $810, say, to Janet. Now deposits have increased by $2,710. And suppose that 10% of Janet's money is held in reserve and the rest is lent out. And so this process continues. And as the banks make more loans -- that increases the amount of deposits, which increases the amount of loans, which increases the amount of deposits. So how much money do we ultimately end up with? You can figure that out using what's called the "money multiplier." The money multiplier tells us how many dollars' worth of deposits are created with each additional dollar of reserves. And the money multiplier is simple. It's just 1 divided by the reserve ratio. So if the reserve ratio is 10%, the money multiplier is 1 divided by 0.1, or 10. And what that means is that $1 in new reserves will ultimately lead, through the multiplier process, to $10 in additional money, as measured by, say, M1 or M2. Now let's clarify our previous example and why it was key that Grandma was pulling cash from under her mattress. If Grandma had instead given you a check for $1,000, she'd simply be transferring money from her account to yours, which would not be creating new reserves -- and so we wouldn't see this multiplier effect. And, actually, the key player here isn't Grandma -- it's Uncle Sam. The Federal Reserve can, with the click of a computer button, create new money -- new money which it can use to buy financial assets, thus injecting new reserves into the banking system. But the Fed's control over the money-supply process is indirect. If banks hold the minimum amount of required reserves -- 10%, as we assumed earlier -- then the money multiplier will be close to 10.
And if this is the case, the Fed will have a lot of leverage to move M1 and M2 with a small change in reserves. But in normal circumstances, the actual money multiplier is closer to 3. How come? Well, remember, banks can't hold less than 10% in reserve. They can always hold more. And the more banks hold in reserve, the lower the money multiplier. So it's important to understand that the money multiplier isn't a fixed number. And the multiplier process isn't a mechanical relation. Here's another factor. If Tyler had stashed some of his loan under his mattress instead of depositing it into a bank, then his bank wouldn't have had his deposit to lend out, and the money multiplier would have been lower. And during a recession, both of these things can happen at the same time. Banks may be reluctant to lend, and they may put more cash in reserve. Plus people tend to hold more cash and not deposit their money in banks during a recession. Both of these factors cause the money multiplier to fall. So the Federal Reserve may have to push harder to increase the money supply during a recession than during a boom. We're going to dive further into how the Fed controls the money supply, and how that's changed since the Great Recession, in our next video. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
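Here's a minimal Python sketch of the multiplier process just described -- illustrative only, since it assumes every loan is redeposited and banks hold exactly the 10% minimum; relax either assumption, as the video notes, and the multiplier shrinks.

```python
# Deposit expansion under fractional reserve banking.
# Assumes all loans are redeposited and banks hold exactly the reserve ratio.

reserve_ratio = 0.10     # banks keep 10% of each deposit as reserves
new_reserves = 1000.0    # Grandma's $1,000 in cash: new reserves

total_deposits = 0.0
deposit = new_reserves
for _ in range(300):                 # iterate until the rounds become negligible
    total_deposits += deposit
    deposit *= (1 - reserve_ratio)   # 90% is lent out and redeposited

# Closed form of the same sum: 1000 + 900 + 810 + ... = 1000 / reserve_ratio
multiplier = 1 / reserve_ratio
print(f"Deposits from iteration:  ${total_deposits:,.2f}")   # ~$10,000
print(f"Deposits from multiplier: ${new_reserves * multiplier:,.2f}")
```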
Principles_of_Economics_Macroeconomics
Structural_Unemployment.txt
♪ [music] ♪ [Alex] Structural unemployment is persistent, long-term unemployment. Isn't it redundant to say that unemployment is persistent and long-term? Not quite. What we have in mind is when a large share of the unemployed have been unemployed for a long time, and this has been true for many years. Consider the following data from some leading European economies and the United States. In each case, the average unemployment rate in the European economies between 1980 and 2004 was higher than in the United States, sometimes markedly so. In Italy and France, unemployment rates have hovered around 10% for several decades, while Spain has had long spans of unemployment near 20%. Now look at the fraction of the unemployed who are unemployed for more than a year. In most of these economies, between 40 and 50% of the unemployed were long-term unemployed, compared to the United States at just 12.7%. In the United States, the fraction of the unemployed who were long-term -- it did shoot up in the 2008-2009 recession, but has since fallen. This persistent, long-term unemployment is structural unemployment. One of the causes of structural unemployment is large, quick-hitting, and relatively permanent shocks that change the number, location, and types of jobs. The 1970s oil shocks, the opening of trade with China in the 1990s, or the rapid rise of the internet would all be examples. It can take time to adjust to these kinds of big changes in the economy. These shocks, however, they've hit both the United States and Europe. So why is structural unemployment a worse problem in Europe? The most likely answer is that European labor regulations have increased structural unemployment, in part by making it more difficult to respond to shocks. In the United States, for example, the most basic employment law is the At-Will Doctrine, which says that an employee can quit and an employer can fire at any time and for any reason. Now there are many exceptions to the law. Employers cannot fire due to race, religion, or sex, for example. And at-will employment can be changed by contract. Nevertheless, at-will employment remains the default U.S. employment law. The situation in Europe, in contrast, is often very different. In Portugal, for example, the constitution makes at-will employment illegal. Dismissing a worker instead requires “just cause.” Now, that may seem like a good thing. Who could object to just cause? But in practice, what this means is that dismissing a worker in Portugal requires a complex and often lengthy process involving union and government approval. Because it's difficult to fire workers in these countries, employers are also reluctant to hire. Imagine how hard it would be to get a date if every date required marriage. In the same way, it's more difficult to get a job when every job requires a long-term commitment from the employer. Let's look at some data on how these labor market regulations affect unemployment. On the horizontal axis, we'll put the Rigidity of Employment Index, which the World Bank calculates to summarize hiring and firing costs and how easy it is for a firm to adjust hours of work. On the vertical axis, we have the percentage of the unemployed that are unemployed for long periods of time. You can see a clear trend. The share of long-term unemployment increases with greater labor market rigidity. European labor law also differs from American law in another respect. European unemployment benefits are typically much more generous than American benefits.
In the United States, for example, unemployment insurance might pay, say, 30% of what a worker was earning in their job. It's not that much. In Sweden, Portugal, or Spain, a typical worker would get at least twice that much. Now, a worker who's being paid only 30% of his or her previous wage -- they're going to be much more eager to find a new job than a worker who is being paid 60% of their previous wage. Now that's not necessarily a bad thing. Maybe it gives workers time to find better jobs, but it does mean that unemployment rates tend to be higher and are longer-lasting in Europe than in the United States. The European economies, because of these issues -- they've been trying to move towards more flexible labor markets since the 1990s, but the process is slow and difficult. We can illustrate why with "A Tale of Two Riots." In November of 2005, angry, predominantly immigrant and minority youth rioted in Paris. The riots were triggered by accusations of police brutality, but they also reflected underlying problems in the labor market, most especially that 30% of the immigrant youth were unemployed. French firms were reluctant to hire young minority workers -- perhaps in some cases because of outright discrimination, but also because young workers are especially risky. Labor law made it difficult to fire, and so firms were reluctant to hire. Once again, who wants to go out on a blind date if a date means forever? Now the French government was aware of these problems, and they proposed to change labor law so that employment would be at-will, but just for workers under the age of 26. Such a law would have been good for immigrant youth, but the idea that they might be fired at will -- that offended more elite French students. So these elite students, well, they started riots of their own. They took over university offices, they shut down universities all across France, and they were joined by hundreds of thousands of people who protested the proposed law. There were numerous clashes between the police and the protestors. The government backed down. "A Tale of Two Riots" illustrates how restrictive labor law can create two very different groups: the Insiders -- who for the most part enjoy the protection of long-term stable employment -- and the Outsiders -- the people who are frozen out of the regular labor market. They end up having high unemployment rates, and only intermittent and temporary employment. Dealing with structural unemployment and creating a labor market that's open to all workers -- this is one of the most serious issues facing some of the economies in the European Union. Okay. In our next video, we're going to turn to another type of unemployment: cyclical unemployment. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to The Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources.
Principles_of_Economics_Macroeconomics
Fiscal_Policy_and_Crowding_Out.txt
♪ [music] ♪ - [Alex] In order to work well, fiscal policy must be timely, targeted, and temporary, as we discussed in our previous video. But fiscal policy can also be fully or partially offset depending on how central banks, businesses, and consumers respond to it. First: central banks. When a government increases spending, the aggregate demand curve shifts out, which increases inflation. Now, central banks often try to stabilize prices. So if fiscal spending shifts the aggregate demand curve out, increasing inflation, the central bank might choose to contract the money supply. Contracting the money supply means shifting the aggregate demand curve inwards. In other words, monetary policy could offset or reverse the fiscal policy expansion. When a central bank responds to expansionary fiscal policy with contractionary monetary policy, we call this a monetary offset. But it's not just the central banks that respond to changes in fiscal policy -- businesses could also act in ways that partially offset a fiscal stimulus. For example, if the government increases spending by borrowing, this will tend to increase the interest rate in the loanable funds market. And if the interest rate increases, businesses may scale back on their investment. So remember that real GDP is consumption plus investment plus government spending plus net exports. So when "G" increases, we may see a decrease in "I," investment, offsetting the fiscal stimulus and weakening the effects of the multiplier (see the toy calculation below). Consumers could also respond to fiscal policy in ways that make fiscal policy less effective. If the government cuts taxes to stimulate the economy, people might then choose to save the tax cut. Now, saving money from a tax cut actually makes a lot of sense if people expect that tax cuts today will be matched by tax increases tomorrow. However, if people save their tax cuts instead of spending them, then the aggregate demand curve never shifts out. The multiplier will be zero, and there will be no systematic macroeconomic effects. Now this scenario is sometimes called Ricardian equivalence, after the 19th-century British economist, David Ricardo. Most economists think that it's somewhat unrealistic to model everyone as fully rational, incorporating their future tax burdens when making saving and spending decisions. Tyler claims that he never behaves in this way, though I'm not so sure that's true. Some people, however -- they are very future-oriented. And most people -- they think a little bit about the future when making spending decisions. So Ricardian equivalence probably describes some people, maybe not most people. In any case, to the extent that Ricardian equivalence reflects how people plan, tax cuts will be less effective as fiscal stimulus than they otherwise would be. Okay, summing up. Fiscal policy is complicated, because it's not just a matter of increasing government spending -- we also have to take into account how central banks, investors, and consumers respond to fiscal policy. Moreover, how people respond to fiscal policy isn't mechanical. It depends upon their evaluation of the economic situation and their expectations about the future. So the same fiscal policy can have different effects in different historical situations. Good economic policy therefore requires not only an understanding of the models but also an understanding and an appreciation of the actual situation. Thus, economic policy is both science and art. - [Narrator] You're on your way to mastering economics.
Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
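To put rough numbers on partial crowding out (all figures hypothetical), here's a minimal sketch built on the identity Y = C + I + G + NX: a rise in G is partly undone by a fall in I.

```python
# Partial crowding out via the GDP identity Y = C + I + G + NX.
# Hypothetical figures, in billions of dollars.

C, I, G, NX = 700.0, 200.0, 150.0, -50.0
Y_before = C + I + G + NX

G += 50.0   # government spending rises by $50B, financed by borrowing
I -= 30.0   # higher interest rates crowd out $30B of private investment

Y_after = C + I + G + NX
print(f"GDP before: {Y_before:.0f}B, after: {Y_after:.0f}B, "
      f"net boost: {Y_after - Y_before:+.0f}B (not +50B)")
```

If consumption and investment fell by more than G rose -- over 100% crowding out -- the net boost would turn negative, which is the Argentina scenario discussed in a later video.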
Principles_of_Economics_Macroeconomics
Intro_to_Business_Fluctuations.txt
♪ [music] ♪ [Alex] A country's prosperity depends upon good institutions and the fundamental factors of production: human capital, physical capital, and ideas. Economic growth, however, it's not a smooth process. An economy advances and recedes; it rises and falls; it booms and busts. Real GDP in the United States, for example, it's grown at an average rate of about 3.2% per year over the past 60 years. But the economy didn't grow at this rate every day, or every month, or even every year. We call the fluctuations in real GDP around its long-term trend, or normal growth rate, business fluctuations. Recessions are significant, widespread declines in real income and employment. Declines in employment and increases in unemployment are among the most significant economic and personal costs of a recession. More generally, during a recession not only is labor unemployed, a lot of land and capital also become unemployed or underused. And when we see a lot of unemployed resources, that suggests that resources are being wasted; it suggests that the economy is somehow operating below its potential. We'd like to limit that waste of resources. We want everyone who wants a job to be able to get a job. We want labor and capital fully employed to produce a prosperous, growing economy. In the next set of videos, we are going to develop a model of business fluctuations called the Aggregate Demand, Aggregate Supply model. First, we'll learn the basics of the model. Then, we'll use the model to help us understand how shocks can disturb an economy and how policy might help us to reduce the size or cost of business fluctuations. Finally, we'll apply the model to explain some of the largest economic catastrophes in U.S. history, including the Great Depression. You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
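To make "fluctuations around trend" concrete, here's a minimal sketch with invented GDP figures (only the 3.2% trend rate comes from the video): it measures how far actual real GDP sits above or below a smooth trend path.

```python
# Business fluctuations: percent deviations of real GDP from its long-run trend.
# GDP figures are invented for illustration; 3.2% is the trend growth rate.

trend_growth = 0.032
actual_gdp = [100.0, 103.5, 107.5, 105.0, 106.0, 112.5]   # hypothetical series

for t, actual in enumerate(actual_gdp):
    trend = 100.0 * (1 + trend_growth) ** t
    gap = 100 * (actual - trend) / trend   # positive = boom, negative = slump
    print(f"year {t}: actual {actual:6.1f} vs trend {trend:6.1f} -> gap {gap:+5.1f}%")
```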
Principles_of_Economics_Macroeconomics
What_is_Gross_Domestic_Product_GDP.txt
♪ [music] ♪ - [Narrator] What is Gross Domestic Product, otherwise known as GDP? Gross Domestic Product is the market value of all finished goods and services produced within a country in a year. Think about the economy like a giant supermarket filled with millions of goods, like dresses and washing machines, and services, like dog walking and massages. Every time a finished good or service is sold, we ring up the price. At the end of the year, we ring up the total -- that's the GDP. Let's look more closely at some of the details. Notice that we said GDP is the market value of all finished goods and services. A finished good or service is one that will not be sold again as part of some other good. When a bakery buys flour, eggs, and butter, we don't count these sales in GDP because these goods aren't finished. They are intermediate goods that, when combined, will become a finished good -- a cake, for example. But, if a consumer buys an egg to make an omelet, the egg is a finished good because it won't be sold again as part of some other good. In other words, our GDP supermarket is like a real supermarket. At the GDP register, we ring up the eggs sold to consumers, and the cakes, but we don't ring up the eggs the baker used to make the cake. There are also goods that are used to make other goods, but are still considered finished goods. These are called capital goods. If Caterpillar produces a tractor and sells it to a farm, the tractor is considered a finished good. The tractor is finished and its value is added to the GDP. Although the tractor is used to make other goods, it won't be sold again as part of another good, so the tractor is still a finished good. ♪ [music] ♪ The GDP is the market value of all finished goods and services produced within a country in a year. GDP only counts production. If an old house is sold this year, that doesn't add to GDP since the house wasn't produced this year. Only the sale of new houses adds to GDP. ♪ [music] ♪ GDP also only counts goods and services produced within a country. If you buy a bottle of wine imported from France, that adds to France's GDP, not to U.S. GDP. On the other hand, a computer produced in the United States and exported to France adds to the U.S. GDP. ♪ [music] ♪ Let's go back to the definition one more time, to see some of the limits of GDP as a measure of economic production. GDP is the market value of all finished goods and services produced within a country in a year. If a good isn't bought and sold in a market, then it's not typically counted in GDP. Why not? Counting the market value of, say, all the breakfast cereal produced in the U.S. is easy, at least in principle. Just add up the price every time a box of cereal is sold. Since market prices are observable, every statistician who counts carefully will come up with pretty much the same number. But, without market prices, there's no easy or agreed-upon way to calculate how much a good is worth. Polar bears, for example, aren't counted in GDP. The statisticians and economists who calculate GDP have nothing against polar bears. The problem is that there's no easy way to calculate how valuable polar bears are. Just because GDP doesn't include polar bears doesn't mean that we can't love polar bears. And if polar bears were included in GDP, that wouldn't require us to love polar bears either. Ultimately, GDP is just a number. But it's a useful number. In the next few videos, we'll show how the GDP number can be used as a measure of the standard of living.
But for that, we'll have to make a distinction between Nominal GDP -- which is what we've discussed so far -- and Real GDP. So stay tuned. - [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
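Here's a minimal sketch of the "GDP register" idea from this video. The transactions and prices are invented for illustration; the counting rules (finished goods only, capital goods count, used goods and imports don't) are the ones described above.

```python
# The GDP register: ring up finished goods and services only, at market prices.
# Transactions and prices are invented; the rules follow the video.

transactions = [
    # (description, price, counts_in_gdp)
    ("flour sold to a bakery",         2.00, False),   # intermediate good
    ("eggs sold to a bakery",          3.00, False),   # intermediate good
    ("cake sold to a consumer",       20.00, True),    # finished good
    ("eggs sold to a consumer",        3.00, True),    # finished (omelet at home)
    ("tractor sold to a farm",     50_000.00, True),   # capital good: still finished
    ("old house resold",          200_000.00, False),  # not produced this year
    ("French wine imported",          15.00, False),   # produced abroad: France's GDP
]

gdp = sum(price for _, price, counted in transactions if counted)
print(f"GDP rung up from these transactions: ${gdp:,.2f}")   # $50,023.00
```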
Principles_of_Economics_Macroeconomics
The_Dangers_of_Fiscal_Policy.txt
♪ [music] ♪ - [Alex] Most economists agree that fiscal policy is useful when many resources are underemployed due to an aggregate demand shock, and the economy needs a short-run boost. There's less agreement when it comes to using fiscal policy to combat shifts in aggregate supply, and less agreement over the potential dangers of debt-financed fiscal policy. Let's consider. Instead of a shift in aggregate demand, suppose the economy suffers a real shock, a shift in the aggregate supply curve. The economy moves from point A to point B and falls into a recession. Fiscal policy in this situation -- it's relatively powerless. A big increase in aggregate demand could increase real growth somewhat, but mostly at the cost of much higher inflation. When real growth slows due to an aggregate demand shock, the economy is operating below its potential, so there's more room for fiscal policy to bring the economy back to potential. But when real growth slows due to an aggregate supply shock, it's the potential growth rate that has fallen. There's less inefficiency in the economy, and thus fiscal policy has less power. Keep in mind as well that all the earlier problems of fiscal policy -- timeliness, targeting, and crowding out -- they also apply to fiscal policy when combatting an aggregate supply shock. It's just that this time, more spending has the additional challenge that it can't really solve the underlying problem. The economy has fundamentally changed, and attempting to fix it leads mostly to higher inflation rates. Fiscal policy can also be a dangerous tool when used too much. In theory, fiscal policy is like national consumption smoothing: increase aggregate demand in bad times, and pay off the bill in good times. But in practice, politicians usually only follow half of this advice. They spend in bad times, because they have to -- and they spend in good times because they can. Like many of us, politicians find it easier to add to the credit card bill than to pay down the debt. We're just not that great at national saving. And if a government's debt continues to grow, it'll end up spending a larger and larger portion of its budget on interest payments alone, making it more difficult to act in a future recession. In other words, if we repeatedly use debt-financed fiscal policy to stimulate the economy again and again, and never pay down that debt, then we'll eventually back ourselves into a corner with no ammunition, just when we need it most. Is it possible to have too much debt? Certainly. And here's where it gets really dangerous. Too much debt, especially when a country borrows money from another country in the other country's currency -- this can create uncertainty and risk, and even lead to economic collapse. Take, for example, Argentina's financial crisis of 1999 to 2002. In the years leading up to the crisis, Argentina's government was spending and borrowing more and more and more, making investors and citizens a little nervous about its ability to pay off its debts. So when the economy suffered a financial crisis, and the government tried to spend even more to stimulate the economy and get out of the crisis, citizens and investors took that as a bad sign rather than as a good one. In fact, citizens and investors drastically reduced their spending and investing -- so much so that the country experienced declines in real GDP rather than growth. In other words, consumption and investment fell by more than government spending increased -- over 100% crowding out!
By 2002, Argentina's debt was 150% of GDP, and the government defaulted on its payments. This was the largest government default in the history of the world. But other countries -- Thailand, Indonesia, and Mexico, and even Greece -- they've experienced similar scenarios. So how much debt is too much? Well, there's plenty of room for debate on this, but it's clear that if a government's credibility is low and its debt is high, then fiscal policy can have an immediate negative effect, at least in some economic situations. Fiscal policy is a useful tool, but to be used well, it must be used wisely. - [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
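A minimal sketch, with invented figures, of the debt dynamic described above: persistent deficits plus compounding interest let interest payments claim a growing share of the budget.

```python
# Hypothetical debt dynamics: every number here is invented for illustration.

debt = 1_000.0         # outstanding government debt, billions
budget = 400.0         # annual government budget, billions
interest_rate = 0.05   # rate paid on the debt
deficit = 80.0         # new borrowing added each year

for year in range(1, 11):
    interest = debt * interest_rate
    debt += deficit + interest        # unpaid interest rolls into the debt
    print(f"year {year:2d}: debt {debt:7.0f}B, interest {interest:5.0f}B "
          f"= {100 * interest / budget:4.1f}% of the budget")
# Interest takes an ever-larger slice, leaving less ammunition for a recession.
```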
Principles_of_Economics_Macroeconomics
The_Importance_of_Institutions.txt
♪ [music] ♪ [Tyler] When it comes to understanding economic growth, institutions are often critically important. First, let's define that concept of institutions. When economists talk about institutions, they mean laws and regulations, including property rights, reliable courts, and political stability. They also mean cultural institutions, including norms around honesty, trust, and cooperation. To illustrate the importance of all these institutions, let's look at a story that is both tragic and extreme. Here is the Earth at night. Even from space, you can see where people live by the clusters of lights. Look more closely at this area. In our last video, we discussed the big divergence. The divergence in this example is so dramatic, it can even be seen from outer space. This is the Korean Peninsula. Here's South Korea, a developed, modern economy, a very pleasant place to live, or visit, or work. And here's North Korea -- mostly darkness, with the exception of the capital, Pyongyang, where the ruling elite lives. So what's behind this divergence? The splitting of Korea into two distinct countries provides an almost perfect natural experiment to demonstrate the power of institutions. Originally, the two Koreas had basically the same people, the same culture, the same language, the same history, and pretty similar economies. If anything, the northern part was wealthier. But, starting after the Second World War, the two Koreas ended up on very different institutional tracks. Communism was imposed on North Korea, but South Korea, broadly speaking, ended up with capitalism and a relatively free market economy. So what happened? It all comes down to incentives. Different institutions create different incentives. In South Korea, the prevailing incentive was for commercial cooperation. Entrepreneurs would produce goods and services that consumers wanted. And if they succeeded, they were allowed to earn profit and keep that profit. Alternatively, businesses which were not successful were allowed to go bankrupt, so capital was reallocated to more valuable uses. And a society evolved based on cooperation and trust and honest commercial dealing. So over the next few decades, South Koreans became major car producers and exporters -- Hyundai, for example -- and producers of music on a global scale. I enjoy listening to K-pop. Movies from South Korea have taken Asia by storm, and it's a major location where women go to buy high-quality cosmetics. It's a pretty well-functioning market economy responding to consumer demands, and the standard of living in South Korea is fully that of a developed country. In contrast, in the North, there's been a totalitarian state where the economy is centrally planned and directed. Those rules meant that most people didn't have the freedom to start businesses. They weren't allowed to keep their own profit. Prices were controlled. Capital was allocated by the Communist Party, so human energies went into trying to influence politics -- trying to have people you didn't like branded as political enemies or enemies of the state. The final result was tragic. And over the last several decades, there have been periodic episodes of starvation, because prices and property rights did not give farmers the right incentives to grow enough food to keep people well-nourished. In essence, North Korea has been a militarized state where people live in fear. I've seen this difference with my own eyes. I've been to the border.
When you look back to the South, it's a very pleasant, beautiful, and prosperous place. When you look ahead to the North, well, at the time, I wasn't even allowed to enter the country, because they were so afraid of outside influence. And as an economy, it's so backward that as you can see, it is literally nearly black at night. Let that sink in for a second. The economy in North Korea is so underdeveloped that at night, it is literally quite dark. Growth miracle and growth disaster, light and dark. This is an extreme example, but it's one that helps make clear the importance of institutions. Now, let's turn back to the map. We're going to consider something else you can see in this picture of the Earth at night. People aren't spread randomly across the Earth's surface. Instead, they tend to cluster in certain areas. Can you see where? [Narrator] If you want to test yourself, click "Practice Questions." Or if you're ready to move on, you can click, "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Youve_finished_Macroeconomics.txt
Congratulations! You've made it to the end of Principles of Macroeconomics. We've covered quite a bit -- from zombie banks to hyperinflating dictators, from the Great Recession to the Great Depression, to growth miracles and the economics of ideas. And pupusas, of course. If you want to put your econ knowledge to the test, visit our website to take our macro exam. Passing the exam earns you a certificate, but more importantly, testing yourself can improve your long-term retention of the material. Missed some videos along the way? Click to view our entire macroeconomics playlist. If you'd like to hear when we release new content, subscribe to our channel or sign up for our email list. Ready for more econ? Check out some of our other popular videos.
Principles_of_Economics_Macroeconomics
Game_of_Theories_The_Monetarists.txt
♪ [music] ♪ - [Tyler] Monetarism is another framework for thinking about business cycles. Nobel laureate Milton Friedman of the University of Chicago -- he was the most famous proponent of monetarism. And, as the name suggests, monetarism emphasizes the importance of the money supply, and it emphasizes the decisions central banks make about what to do with the money supply. Now monetarism is based on something called the quantity theory of money. That means, in the long run, the absolute amount of money in an economy doesn't matter -- it doesn't influence real output or real employment. But in the short run, changes in the rate of inflation can matter. So there are two potential dangers in monetarism: too much inflation, and too little inflation. Let's think first about too much inflation, because this is a big part of how monetarism became more popular. In the 1970s, in America, rates of inflation were considered to be too high, and monetarism had a way to explain this. It said the Federal Reserve was creating too much new money for the economy, which meant rising prices -- and inflation tends to distort the allocation of economic resources. Individuals cannot tell which prices are going up because of the inflation, and which prices are going up because something is more or less valuable. The monetarist prescription was to lower the rate of inflation and bring about more economic stability. So at the time, a lot of Keynesian economists were accepting this higher rate of inflation. But monetarism was saying that, yes, at first more inflation is going to get you higher economic output, but pretty quickly people figure out that there's inflation going on, and then inflation ceases to be effective in stimulating the economy. On the other side of the ledger, there's the danger that monetary growth will be too low, which means the rate of price inflation will be too low, or there may be deflation altogether. And in that setting, according to monetarism, aggregate demand will be too low. In this case, monetarist and Keynesian doctrine -- they're actually pretty similar. Monetarists, like Keynesians, believe that a lot of nominal wages are sticky -- that is, they can't be readjusted or renegotiated all the time. This may be a matter of contract, or a matter of law, minimum wages, or maybe just a matter of workplace morale. But when you have sticky wages and the flow of nominal purchasing power -- the flow of money through an economy -- declines, well, wages cannot just fall in tandem, and employers will lay off some workers, and you will get a business cycle downturn. So for monetarists, there's a kind of Goldilocks rule: a constant rate of money supply growth. Sometimes that's been given as about 2 to 3% -- not too high, not too low. In general, monetarists believe in constraining the central bank through rules. They don't trust the central bank to have a lot of discretion, and to turn on a dime and make a lot of very complicated, precise decisions. Monetarists emphasize that lags are long and variable. The information available to policymakers can be unreliable, so they simply want the stable rule, which rules out the two cases of inflation too high and inflation too low. So, so far, so good. Monetarism has had a huge impact, and because of Milton Friedman and other monetarists, economists now look much, much more at money supplies and central bank policies. But, that said, monetarism still has some important problems.
First, monetarism is quite an incomplete account of business cycles. A lot of business cycles can be caused by, say, the bursting of bubbles, or problems in credit markets, or negative real shocks, or other factors. And monetarism just doesn't have a lot to say about these cases. Second, monetarism assumes that there's this notion of "the money supply" as a single, well-defined thing. But, in fact, empirically there are many different money supply measures. There are narrow measures, such as currency plus bank reserves held at the Fed. Or you could add in demand deposits, savings deposits, different kinds of credit relationships. Which of those is the true money supply? Which of those should we stabilize? It turns out those different measures of the money supply -- they don't always move together so closely. And if we stabilize one of them, well, other measures of the money supply may not be that stable at all. Finally, there's another problem with monetarism. If the central bank really does fix a rate of growth for the money supply, this can make it harder to respond to other kinds of shocks. What if there's a negative real shock, such as an oil price hike? Some economists think the central bank should then be more expansionary. What if interest rates turn volatile? Maybe then, again, the central bank should expand credit a bit more. There may be a shock to velocity -- the rate at which money turns over in the economy may change. And, at least under simple forms of monetarism, again, the central bank cannot easily adjust for that. In fact, there's now an offshoot doctrine of monetarism, sometimes called market monetarism or nominal GDP targeting, that says: yes, we start with monetarism, but we actually want to allow the central bank the ability to respond to those changes in velocity. Now monetarists, who generally do not trust in discretion, are willing to put up with these shocks, but in the real world there's a big debate -- many people believe the central bank actually should go beyond the confines of this very limited rule and try to offset some of these other kinds of shocks hitting an economy. So, in sum, monetarism is really important, but still it is considered a somewhat incomplete doctrine of business cycles. - [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
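One way to see the monetarist logic in numbers is the quantity theory written in growth rates. A minimal sketch, with hypothetical rates: holding velocity roughly stable, inflation is approximately money growth minus real growth, which is why a steady money-growth rule is meant to pin down low, stable inflation.

```python
# Quantity theory in growth rates: %dM + %dV ~= %dP + %dY,
# so inflation ~= money growth + velocity growth - real growth.
# All rates below are hypothetical.

real_growth = 0.03       # real GDP grows 3% per year
velocity_growth = 0.00   # monetarist benchmark: stable velocity

for money_growth in (0.03, 0.10, -0.02):
    inflation = money_growth + velocity_growth - real_growth
    print(f"money growth {money_growth:+.0%} -> inflation about {inflation:+.0%}")

# +3% money growth gives roughly stable prices (the Goldilocks rule);
# +10% gives the 1970s problem; -2% gives deflation and weak demand.
```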
Principles_of_Economics_Macroeconomics
Saving_and_Borrowing.txt
♪ [music] ♪ [Alex] On September 15, 2008, the world's financial system was shaken to its core when the investment bank Lehman Brothers filed for bankruptcy. The impact was great, not simply because Lehman was big, but also because it was an important financial intermediary, an institution that helps bridge the gap between savers and borrowers. The failure of Lehman marked the beginning of a series of events that signaled the worst economic downturn since the Great Depression. And while there are several significant angles to the Great Recession, one of them was the decreased efficacy of financial intermediation. Now, some later videos are going to go through this in more detail. But for now, we want to start with some basic observations as to why financial intermediation is so important. We'll start with the supply of savings and the demand to borrow, and the market which brings them together -- the Market for Loanable Funds. And then we'll work our way up to an examination of the Great Recession. So why do people borrow and save at all? Well, let's imagine a world without borrowing and saving. Most people's incomes don't stay flat their entire lives. They change in predictable ways. Here's a typical pattern, showing a person's income over their life, with their income on the vertical axis and time on the horizontal axis. When you're young and still in school, you might make a little bit of money, waiting tables or occasionally mowing lawns. Your first job out of school -- it's going to pay more, and after a few years of experience and hopefully a few raises along the way, you make more than you ever have. Then, as you age, you look forward to retirement, when your income falls. But you're no longer working, and you could really enjoy your golden years. [Estelle from “Seinfeld” TV series] “We're moving to Florida!” [George] “What? You're moving to Florida? Ah-hah! That's wonderful! I'm so happy! For you! I'm so happy for you!” [Alex] Now, let's imagine if your consumption followed the same path as your income and you never saved or borrowed. You'd struggle when young, and you'd be unable to invest in an education. Then, you'd spend every cent you make during your prime working years. Well, that sounds like a lot of fun. But without savings, your income will drop suddenly when you retire, and so will your consumption. Your golden years wouldn't be so golden. [Doug from “King of Queens” TV series] If you're so smart, why don't you tell them that you live in my basement? [Arthur] Why don't you tell them you're enormous? [Doug] Why don't you tell them that your total salary last year was $12? [Arthur] That was after taxes. [Alex] So instead, people tend to follow a life-cycle theory of savings. A person can start out consuming more than she makes, borrowing to fill that gap -- and to pay for things like an education. Then, during her prime working years, she makes more than she consumes, paying down her debt and saving the extra income for retirement. And when retirement comes, she can spend those savings and enjoy the golden years even without working. Now of course, many people deviate from this exact path, depending on details. Most people, for example -- they consume less in college than they do as professionals. Ramen noodles are no longer a staple of my diet. But generally speaking, many people follow a pattern of borrowing, saving, and dissaving to smooth their consumption path over their lifetime. (There's a small numeric sketch of this pattern below.)
Of course, just like some people can't wait until after dinner to reach for that cookie jar, not everyone saves and spends in the same way. How much you save and borrow depends upon your time preference. Some people -- they're more impatient than others. We all know someone who spends everything they've got and doesn't save enough. On the other hand, if you're keeping to a budget and not spending too much so that you can go to college, well, that's an example of being patient and waiting for higher consumption later. We've also learned from behavioral economics that saving is not just a matter of weighing costs and benefits. Nudges can matter. If your employer automatically enrolls you in a retirement plan, or sets a high default contribution rate, you'll probably end up saving more than if you have to choose yourself, even if choosing yourself would only take a few hours of work once in your lifetime. Another important reason to borrow is to make big investments. Just as students borrow to invest in education, businesses borrow to invest in big projects. Entrepreneurs with great ideas but not much money -- they may have to borrow or sell a stake in their idea just to get their venture off the ground. For example, Howard Schultz built Starbucks into a global brand by borrowing and raising capital through several different types of financial intermediaries. We'll talk more about that in upcoming videos. As with any other good, we're going to use supply and demand to analyze the market for saving and borrowing, known as the Market for Loanable Funds. As we've seen, there are lots of good reasons to save and to borrow. But we have failed to mention one big factor -- price. What's the price of saving and borrowing? It's the interest rate. So here's the supply curve showing the supply of savings. As the interest rate goes up, the quantity of savings supplied increases. And here's the demand curve showing the demand to borrow. Lower interest rates incentivize borrowing, so as the interest rate falls, the quantity of borrowing demanded increases. As with any other supply and demand graph, different factors will shift the curves. If a lot of people decide that it'd be a good idea to increase their savings, for example, then the supply of savings will shift outward. As you can see, this would cause interest rates to fall. This is what we saw in countries like South Korea and China as their populations saved more. On the demand side, if investors, say, became less optimistic for some reason, the demand to borrow would shift inward, causing the interest rate to fall. But if, say, an investment tax credit from the government increases the demand to invest, then the demand curve will shift in the opposite direction, up and to the right, pushing interest rates up. Thinking about the Market for Loanable Funds helps us to see the big picture and understand the raw factors that determine interest rates and the quantity of borrowing and lending. But there isn't actually one market called the Market for Loanable Funds. It's not like the New York Stock Exchange. Instead, there are many, many, many markets for different kinds of borrowers and different kinds of lenders. And there are different types of institutions, like banks, bond markets, and stock markets that connect the two sides of the market. We're going to delve more deeply into the different kinds of financial intermediaries, and why they're so important, next. [Narrator] If you want to test yourself, click "Practice Questions."
Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
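Here's the minimal numeric sketch of life-cycle consumption smoothing referenced above. The income path is stylized and, to keep the arithmetic transparent, the interest rate is assumed to be zero, so smoothed consumption is just average lifetime income.

```python
# Life-cycle smoothing: borrow when young, save in prime years, dissave in
# retirement. Stylized incomes; zero interest rate assumed for simplicity.

income = [15, 60, 80, 80, 10]        # school, early career, peak, peak, retirement
smooth = sum(income) / len(income)   # flat consumption: 49 per period

assets = 0.0
for period, y in enumerate(income):
    saving = y - smooth   # negative = borrowing (young) or dissaving (retired)
    assets += saving
    print(f"period {period}: income {y:3d}, consume {smooth:.0f}, "
          f"saving {saving:+5.0f}, assets {assets:+6.0f}")

# Assets return to zero at the end: lifetime consumption = lifetime income.
```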
Principles_of_Economics_Macroeconomics
Puzzle_of_Growth_Rich_Countries_and_Poor_Countries.txt
[Alex] We now return to the core question of this part of the course. Why are some countries rich and other countries poor? I'm going to lay out various pieces of the puzzle, keeping in mind that it's a complex question with many factors at play, which are still being debated. Let's start with a simple example. How does a farmer go from this to this? The most immediate reason that some countries are rich is that their workers are very productive. So how do workers become productive? Well, they work with more and better factors of production. That's the first piece of the puzzle. Rich countries -- they have a lot of physical capital and a lot of human capital, and that capital is organized using the best technological knowledge. By physical capital, economists mean tools in the broadest sense: shovels, tractors, cell phones, roads, buildings... More and better tools make workers more productive. Human capital is tools in the mind, or the stuff in people's heads that makes them productive. Human capital -- it's not something we're born with. It's produced by an investment in education and training and experience. Technological knowledge is knowledge about how the world works, such as an understanding of genetics, soil composition, chemistry. This is the research that informs the books that our farmer reads. The final factor -- a factor which is often taken for granted -- is organization. Human capital, physical capital, and technological knowledge -- they've got to be brought together, they've got to be organized in a way that produces valuable goods and services. In a capitalist society, it's the entrepreneurs who bring ideas, people, and capital together in order to produce valuable products. So rich countries -- they have a lot of factors of production. But that's a bit too easy. Why do the rich countries have more factors of production? We've got to go back to the basics. Incentives matter. That's the next piece of the puzzle. Let's give an example. In China during the Great Leap Forward of the late 1950s and early 1960s, private farms were confiscated and consolidated into collectives. Collective property meant that the incentive to invest and to work hard was low. Imagine that if you invest and work really hard, you can produce an extra bag of potatoes in, say, a day. If you're part of a 100-person collective, you don't take home an extra bag of potatoes, but only one one-hundredth (1/100) of a bag. What would be the incentives to work hard, to invest? (There's a tiny numeric sketch of this below.) When effort is divorced from payment, there's very little incentive to work productively. In fact, there's an incentive not to work and to free ride on the work of others. As a result of this and many other errors on the part of the Chinese leadership, some 20 to 40 million Chinese farmers and workers starved to death during this terrible time. China did not begin to take off as an economic powerhouse until farmers were allowed to keep the product of their efforts. As one Chinese farmer observed, "You can't be lazy when you work for your family and yourself." If you're curious to learn more about China, do check out our website. So, incentives are important. But now we've got to ask, "Why?" Why do some countries have good incentives? And the answer is that they have good institutions. So which institutions create incentives that spur prosperity? Well, the good news here is that there is considerable agreement about what the key institutions for economic growth are.
For example, if you buy a piece of land and you build a farm, do you have an official deed of ownership? ...one that will stand up in a court if someone tries to build, say, a corporate headquarters on top of your farm? Property rights allow you to protect your investment. Our farmer also has to think about the government. She might have to bribe government officials to get permits or worry about the outright seizure of her farm. So honest government is another key institution that allows our farmer to invest. In some places the legal system is of such poor quality that it can be difficult to resolve disputes, such as collecting on a debt, or even determining the ownership of a piece of property. A dependable legal system lets our farmer enforce contracts and borrow and lend money. But our farmer still needs more. Sometimes the problem isn't too much government but too little. Political instability and the threat of anarchy are recurring problems in many countries. Who wants to invest in the future when civil war threatens to wash away all of your plans? Political stability is needed to give investors confidence to invest. We're almost there now, but our farmer still has to worry about inefficient and unnecessary regulations -- regulations which can create monopolies and impede voluntary cooperation. Competitive and open markets let market signals do their work, and they let the farmer innovate and grow her business. So we've covered the key institutions that allow our farmer to prosper: property rights, honest government, political stability, a dependable legal system, and competitive and open markets. But now we've got to ask, "Well, why?" Why do some countries have good institutions? This is perhaps the most actively debated question in all of development economics. And here we must answer with a mysterious combination of history, ideas, culture, geography, even a little bit of luck. Take for instance the United States. The US Constitution was fortunately written at a time when the ideas of John Locke and Adam Smith were popular. And it inherited a tendency towards a market economy and democratic institutions from its colonial parent, Great Britain. It also had an open frontier and plenty of freedom to try new ideas and new ways of living -- to leave the old ways behind and to go to the frontier. This idea of the frontier perhaps influences America's entrepreneurial culture even today. And we were also very lucky that George Washington had the virtue to stop at two presidential terms rather than trying to become the next king. So what makes some countries rich and some countries poor? Well, it's complicated, and the answer differs depending upon whether we want to look at the immediate causes or the ultimate causes. And these processes are also interacting in a dynamic and changing environment. We do know some of the things that matter, however. And the example of growth miracles, like China, Korea, and Japan -- that's encouraging. It is possible for very poor countries to grow very quickly and to reach their true potential once better incentives and institutions are put into place. In the next section, we're going to dive deeper into the factors of production in order to create a simple but useful model of economic growth. Thanks! [Announcer] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click, "Go to the next video." You can also visit MRUniversity.com to see our entire library of videos and resources.
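To put numbers on the collective-farm incentive problem above, here's the tiny sketch referenced earlier (the one-bag and 100-person figures are the video's; the rest is just arithmetic).

```python
# Marginal reward for one extra day of hard work, private vs. collective farm.

extra_output = 1.0       # extra bags of potatoes produced by the extra effort
collective_size = 100    # workers who share the collective's output equally

private_take = extra_output                       # you keep the whole bag
collective_take = extra_output / collective_size  # you get 1/100 of a bag

print(f"Private farm:          extra day earns {private_take:.2f} bags")
print(f"100-person collective: extra day earns {collective_take:.2f} bags")
# With effort divorced from payment, the incentive is to free ride instead.
```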
Principles_of_Economics_Macroeconomics
The_Great_Recession.txt
♪ [music] ♪ [Tyler] A lot of ink already has been spilt discussing the Great Recession of 2008. And a full examination of that would require a lot more than just one video. So today, I'm going to limit our discussion to just one central theme of the crisis, namely financial intermediation. Let's say you're buying a home that costs $100,000. A typical down payment might have been, say, 20%, and that would mean your mortgage was for 80% of the home value, or $80,000. Now in the lead-up to the crisis, many homes were being purchased with much less than 20% down -- 10% down or 5% down. Or in a lot of cases, nothing was put down at all -- zero down. When you put money down on a house, that creates a kind of protective cushion. Now, the difference between the value of the house and the unpaid amount of the mortgage -- that's called "owner's equity." So now, when you first buy a house, your down payment is your owner's equity. Over time, as you pay down your mortgage and if your home value goes up, well, in those cases, your owner's equity rises and that makes the protective cushion bigger. The ratio of debt to equity, which represents how much of a protective cushion is in a home or in a company -- that's called the "leverage ratio." So, a 5% down payment on a $100,000 house would mean you'd have $5,000 in owner's equity, which when compared to the mortgage of $95,000, would give you a leverage ratio of 19. So what's the effect of high leverage? It means there's very little room for the price on your home to drop before the value of your house is less than the unpaid mortgage amount -- a situation known as being "under water." That is, if you needed to sell the home to pay off your mortgage, the proceeds from the house sale would not be enough to pay off the bank. Being under water is clearly not good for the individual homeowner. But very importantly, it's also not good for the bank. In the case of foreclosure -- say the homeowner cannot keep on paying the mortgage -- well, the bank is getting a home, but the home isn't worth enough. The bank loses money because the value of the home is less than what the bank was expecting to receive from the homeowner in the form of mortgage payments. So again, back to the broader picture. It wasn't just homeowners who were using more leverage. Banks were using more leverage. They were buying assets using more debt and less of their own cash. So what we're doing here is stacking problems: the problem of the homeowner's leverage, the problem of the bank leverage. And the more problems like these you stack, the more financial fragility you're bringing into the economy. Now in 2004, the investment bank Lehman Brothers -- it had a leverage ratio of about 20. But it continued to borrow more money. And by 2007, that leverage ratio went as high as 44. Now in that setting, if Lehman Brothers sees its assets fall in value very quickly, Lehman Brothers too will in essence be under water. That is, the assets of the company will be worth less than the debt the company owes. In other words, in that case, the company would be insolvent. This sounds like such a terrible state of affairs. So you have to wonder, "Why would the experienced managers of a large firm like Lehman Brothers have been so risky?" There are a few reasons, but the first and most important reason was just sheer excess confidence. Those managers bought mortgage securities and they made other risky investments. But the managers, like indeed most other people, they just didn't think that American home prices could fall so much.
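A minimal sketch of the owner's-equity and leverage arithmetic just described, using the video's illustrative figures (a $100,000 house with 20% versus 5% down).

```python
# Owner's equity and leverage for a home purchase.

home_value = 100_000.0

for down in (0.20, 0.05):
    equity = home_value * down          # owner's equity at purchase
    mortgage = home_value - equity      # the bank's loan
    leverage = mortgage / equity        # debt-to-equity: the leverage ratio
    cushion = 100 * equity / home_value # % price drop before going under water
    print(f"{down:.0%} down: equity ${equity:,.0f}, mortgage ${mortgage:,.0f}, "
          f"leverage {leverage:.0f}, cushion {cushion:.0f}%")

# 20% down -> leverage 4 and a 20% cushion; 5% down -> leverage 19 and only
# a 5% cushion before the mortgage exceeds the home's value.
```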
And they also didn't understand that a fall in home prices could potentially create so much turmoil in American capital markets. Another key factor behind the failure was incentives. The managers at Lehman -- they got big bonuses based on the profits of the company. And in some cases, this can lead managers to take on too much risk. How does that work? Well think about it. Bigger profits typically meant bigger bonuses. So if you go from a leverage ratio of 20 to 44, as Lehman Brothers did, that means you can buy more than double the amount of assets with the same amount of initial capital, because you're using more debt. That means more than double the profit if asset prices rise as indeed they had been doing. But what if the assets fall in value? What if the initial risk does turn out badly? And you have to ask when the asset prices did fall and Lehman Brothers went bankrupt, did the managers also personally go bankrupt? No, they did not. They still, for the most part, had a lot of money in their bank accounts. So in this setting, Lehman managers had a lot to gain if things would go well, but they faced only limited downside in the scenario where things would go sour. Let's add another factor to this mix that ended up pushing the economy even a bit closer toward the edge of the cliff, and that additional factor was securitization. So how does securitization work? Briefly, individual mortgages are bundled together and sold to outside parties as liquid financial assets. So rather than lending a company money directly, as you would do with a bond, you buy a mortgage security, and indirectly you provide money to people who use it to buy homes. So it turned out there were all these securities out there which were very hard to value, many of them were riskier than advertised, and many of them were just bad outright, filled with too many high-risk loans. How is it that this happened? Well there were a few factors. Sometimes the problem was outright fraud in terms of how the security was sold and how it was explained. Or sometimes it was a failure of the rating agencies, which were supposed to assess risk more or less accurately, but they performed poorly. But probably the biggest single problem was again a kind of complacency. Most people incorrectly assumed American housing was really quite a safe investment, and that prices would either continue to rise, or at the very least hold fairly stable. One final factor set the stage and brought all of this together, and that's what is called the shadow banking system. So what does that mean? Well here I need to give some terminology. What you and I commonly would just call a bank is actually more technically a commercial bank. And that means a bank that takes deposits from individuals and businesses and it's insured by the government through the FDIC. Because of the government guarantee, depositors don't feel the need to run to the bank at the first sign of trouble and pull out their money. Now investment banks -- they're different. Investment banks, like Lehman Brothers, were a different kind of bank without a comparable governmental guarantee for deposits or liabilities. The money they used -- it came from investors, not from depositors. So the investors were always asking, "If I lend to an investment bank, are my funds safe? Will I get my money back?" And these investors were more watchful and sometimes even prone to panic if something seemed to be wrong with the investment bank. 
Now the shadow banking system as a whole is made up of investment banks along with other complex financial intermediaries, such as hedge funds, issuers of asset-backed securities like the mortgage bonds discussed earlier, money market funds, and even some parts of traditional commercial banks, which are not covered by the deposit insurance guarantee. So, in that setting, by the year 2008, the shadow banking system actually was lending considerably more than were traditional commercial banks. So we've got highly leveraged houses and banks, banks and other investors holding risky mortgage securities, and a massive shadow banking system highly dependent on short-term loans, which in turn were dependent on investor confidence. This was the proverbial case of being very close to the cliff and needing only an extra nudge to fall off. And that nudge came in 2007 when housing prices started to fall, causing many home owners to be under water. This meant that the assets owned by banks, such as mortgage-backed securities, were dropping in value. Remember, banks were highly leveraged too. So this fall in asset values pushed many banks closer to insolvency. Worse yet, the complexity of investments in mortgage-backed securities obscured how much exposure particular banks faced. The market started to think of virtually all banks as really quite risky, and this exacerbated the financial crisis. The investors who provided the short-term loans to fund the shadow banking system -- well, they fled to safety. They pulled their capital away from these short-term loans to investment banks such as Lehman Brothers, and this run on the shadow banking system was similar to the runs on traditional commercial banks by depositors, as seen in America's Great Depression. And that was a time when even bank deposits were not insured by the government. Without these short-term loans, investment banks and other financial institutions -- they were starved of the money they needed to function. They couldn't keep on making loans of their own and so they started selling their own assets to get operating funds just to stay up and running. But that leads to yet another problem. When a lot of financial institutions are all selling assets at the same time, you end up with what's called a fire sale. As they all sell, that selling pushes asset prices lower -- even lower. And those lower asset values -- that pushes even more financial institutions closer toward bankruptcy. So, financial intermediaries came crashing down and this led to a credit crunch that damaged the entire economy. In this setting, many businesses that depended on credit -- they failed or they stopped growing. Maybe they laid off workers to conserve cash and unemployment spiked. So, looking back we have to ask, "What could have been done? What should have been done?" It's now considered a general problem that short-term loans for the shadow banking system can flee rapidly in times of crisis and cause widespread financial and economic turmoil. So what to do? In response to this, some suggest a similar solution to what we did for runs on traditional commercial banks, namely a government guarantee of some, or all, of those liabilities. However, that's a pretty radical step. It would put an even larger potential burden on taxpayers, maybe trillions. And it also doesn't fix the incentive problems I mentioned earlier, namely that when there’s leverage, and especially guaranteed liabilities, the managers have an incentive to take too much risk. 
It would make that problem worse. Since the financial crisis, other regulations have been enacted to cover the shadow banking system, and also traditional banks. Those regulations require more equity and less leverage. And that makes sense in terms of my earlier discussion of needing a larger financial protective cushion. Still, it remains to be seen just how effective these regulations will prove. So far there's been no market turmoil comparable to the crisis of 2008. So we just don't know exactly how well the new institutions will work. There's a lot more to cover on the Great Recession. And if you're interested in learning more, please just let us know. Thanks. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go To The Next Video." ♪ [music] ♪ You can also visit MRUniversity.com to see our entire library of videos and resources.
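A note on the leverage arithmetic in this transcript: a leverage ratio is assets divided by equity, so going from 20 to 44 more than doubles the assets bought with the same capital -- and more than doubles the swing in equity when asset prices move. Here's a minimal Python sketch of that mechanism; the $10 billion equity figure and the 3% price moves are made-up numbers for illustration, not Lehman's actual books.

def leveraged_return(equity, leverage, asset_return):
    # Buy `equity * leverage` of assets, funding the rest with debt.
    assets = equity * leverage
    debt = assets - equity
    new_assets = assets * (1 + asset_return)
    new_equity = new_assets - debt   # debt must be repaid in full
    return (new_equity - equity) / equity

equity = 10  # hypothetical $10 billion of capital
for lev in (20, 44):
    up = leveraged_return(equity, lev, 0.03)
    down = leveraged_return(equity, lev, -0.03)
    print(f"{lev}x leverage: +3% on assets -> {up:+.0%} on equity, "
          f"-3% on assets -> {down:+.0%} on equity")

# 20x leverage: +3% on assets -> +60% on equity, -3% -> -60%
# 44x leverage: +3% on assets -> +132% on equity, -3% -> -132%
# At 44x, a 3% fall in asset values more than wipes out the equity,
# which is the sense in which the downside was catastrophic for the
# firm even while the managers' past bonuses stayed in the bank.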
Principles_of_Economics_Macroeconomics
Is_Unemployment_Undercounted.txt
♪ [music] ♪ [Alex] As we saw in our last video, to be defined as unemployed, a person has to be without a job and they must have actively looked for a job in the last four weeks. Now what this means is that if a person without a job gives up looking for work, then they are no longer counted as unemployed. Every now and then someone discovers this definition, and they call the unemployment rate a fraud, a big lie, even a conspiracy. These melodramatic claims are often made for political reasons, when someone wants to argue that the real unemployment rate is higher than the official unemployment rate. Do these claims hold up? Well, there is nothing sinister about the official definition of unemployment. If someone says they want a job, but they aren't actively looking for work, it's hard to count them as unemployed. For example, recently the boxer, Floyd Mayweather, he retired. Is he now unemployed? It seems he doesn't want a job. But Floyd also says that if he was paid enough he'd fight again. [Floyd] If I came back. Of course, it would have to be a nine-figure payday... [Alex] But lots of retired people -- they'd take a job if they were offered enough money. So, are all retired people unemployed? Maybe, but that wouldn't be a very useful definition of unemployment. So, it's quite reasonable to define someone as unemployed only if they don't have a job and they're actively seeking a job. At the same time, there is nothing sacrosanct about the official definition. It's quite legitimate to look at other measures of the state of the workforce, such as wage growth or the labor force participation rate. We'll discuss those in future videos. It's even perfectly legitimate to look at other ways of defining unemployment. In fact, the Bureau of Labor Statistics defines and measures six unemployment rates, called U1 through U6. The official unemployment rate, the one we have defined, is U3. U1 and U2 are more stringent definitions of unemployment. U1, for example, counts someone as unemployed only if they have been out of work for 15 weeks or longer. U4, U5 and U6 are less stringent definitions. For example, the BLS defines "discouraged workers" as people who say they want a job, but although they haven't looked for work in the past four weeks, they have looked in the past year. If we add these discouraged workers to the unemployed workers, we can define a new unemployment rate: U4. Here it is. Including discouraged workers increases the unemployment rate slightly, but the two rates move together very closely. Indeed, as a general rule, most of the alternative definitions of unemployment track each other closely. So, if things are getting worse by one measure, they are usually getting worse by all measures. The same is true when things are getting better. The U4, U5, and U6 definitions of unemployment -- they do give a higher number for the unemployment rate than does the official rate. But they always give a higher number. So, if things are worse today by the alternative measure, then they were also worse in the past, in whatever golden age you want to compare with. So using any definition consistently -- that's okay. But it's not okay to use the official unemployment rate when your favorite president is in power and then use an alternative, higher rate when your least favorite president is in power. 
The bottom line is that even if you think that the official definition of unemployment is too strict, and you think that the real unemployment rate is higher than the official rate -- even so, the official unemployment rate is still a good indicator of the state of the labor market, and whether things are getting better or getting worse. In the next video, we’re going to take a look at three different types, or causes, of unemployment: frictional, structural and cyclical unemployment. [Narrator] If you want to test yourself, click “Practice Questions.” Or, if you’re ready to move on, you can click “Go to the Next Video.” You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
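To make the U3 and U4 arithmetic from this transcript concrete, here's a small Python sketch. The head counts are invented for illustration; the definitions follow the video: U3 divides the unemployed by the labor force, and U4 adds discouraged workers to both the numerator and the denominator.

# Illustrative sketch of the official (U3) and U4 unemployment rates.
# All head counts below are made-up numbers, not BLS data.

employed = 150_000      # people with jobs
unemployed = 8_000      # no job, actively searched in the last 4 weeks
discouraged = 1_000     # want a job, searched in the last year but not
                        # in the last 4 weeks

labor_force = employed + unemployed

u3 = unemployed / labor_force
u4 = (unemployed + discouraged) / (labor_force + discouraged)

print(f"U3 (official): {u3:.1%}")   # 5.1%
print(f"U4:            {u4:.1%}")   # 5.7%

# As the video says: U4 is always a bit higher than U3, but the two
# move together, so either is a consistent gauge of better or worse.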
Principles_of_Economics_Macroeconomics
Growth_Miracles_and_Growth_Disasters.txt
♪ [music] ♪ [Alex] Are poor countries catching up to rich countries, or are they falling further behind? Said another way, is there divergence or convergence between standards of living in different countries over time? Let's start with what economic historian Deirdre McCloskey calls "The Great Fact" about the modern world. If you graph global economic output over the past 2000 years, you'll see this -- what we've referred to earlier as the hockey stick of human prosperity. Now let's look at that same data, not at a global level, but by region. Since the Industrial Revolution, the growth paths taken by different regions have diverged dramatically. The U.S. and Western Europe experienced the hockey stick path of growth, while other regions have stagnated. This was described as "Divergence, Big Time" in a famous economics paper. But that's not the whole story either. Let's dive into this data even further, down to the country level. Here's Argentina in 1950. It's a relatively successful economy with a standard of living similar to many Western European economies. Now here's Japan. At the time, they're quite poor, with a standard of living similar to Mexico. But let's move forward in time. Japan begins growing at an astonishing pace, doubling their living standards about every eight years. Argentina, on the other hand, experienced periods of negative growth. They managed to double their living standard just once in 65 years. By 2015, Japan -- it's one of the most prosperous countries on Earth. Argentina, on the other hand -- it stagnated. It went from having double Japan's standard of living in 1950 to Japan being twice as prosperous as Argentina today. Japan is a growth miracle, with a standard of living over 10 times higher now than in 1950. Other growth miracles have occurred in South Korea and China. And India today looks like it may have started down the hockey stick path of prosperity. So the good news is that with the right factors in place, a poor country can not only grow, but it can grow quickly and catch up to developed countries. What took the United States 200 years of steady growth can be achieved by other countries through rapid growth, in about 40 years. Catch-up can happen in a generation or two. The bad news is that it's not guaranteed. Some countries, like Argentina -- they grow well for a time and then they stall. Even worse are countries such as Niger or Chad which have never experienced significant growth. They're the worst kind of growth disasters -- extreme poverty with very little growth at all. And it's important to remember that these are more than just numbers. A growth miracle means not just more goods and services, but better health and greater happiness for millions of people. See our earlier video showing how GDP per capita is a good summary measure of a country's standard of living. On the other hand, a growth disaster -- it means the opposite. People are less prosperous, and they live shorter and less happy lives. So growth miracles and growth disasters -- they're possible. But what are the causes? What are the factors that lead to growth, prosperity, health, and better lives? That's the topic we're going to turn to next. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
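The "doubling every eight years" claim can be checked with the Rule of 70, which links a doubling time to an annual growth rate. A minimal sketch, comparing Japan's miracle pace with a more typical rich-country pace of doubling every 35 years:

# Rule of 70: doubling time is roughly 70 / (growth rate in percent).
# The exact rate implied by a doubling time is 2^(1/years) - 1.

def growth_rate_from_doubling(years):
    return 2 ** (1 / years) - 1

for years in (8, 35):   # miracle pace vs. typical rich-country pace
    g = growth_rate_from_doubling(years)
    print(f"double every {years} years -> {g:.1%} per year "
          f"(Rule of 70 says ~{70 / years:.1f}%)")

# double every 8 years -> 9.1% per year (Rule of 70 says ~8.8%)
# double every 35 years -> 2.0% per year (Rule of 70 says ~2.0%)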
Principles_of_Economics_Macroeconomics
Labor_Force_Participation.txt
♪ [music] ♪ [Alex] In earlier videos, we looked at unemployment: when people want a job, but can't find one. But it's also important to look at the factors that determine whether people want a job. Why are some people in the labor force while others are not? The labor force participation rate is defined as the labor force divided by the adult population -- in both cases, excluding people in prison and in the military. If we google "Labor force participation rate United States FRED," we'll find this graph from the St. Louis Federal Reserve. The labor force participation rate was about 59% in the 1950s. In other words, 59% of the adult population was in the labor force, either working or looking for a job. The participation rate then increased to 67% by 2000 before falling to 63% in 2015. Why does the participation rate vary over time? One reason is changing demographics. Let's add to the data the male and female labor force participation rates. We can now see the changes in the total rate have been influenced by two quite different trends: a dramatic increase in the female rate -- the red line at the bottom -- and a smaller but steady decrease in the male participation rate -- in green at the top. In the 1950s, for example, most women were not in the labor force. Less than 40% of women worked. By the year 2000, most women were in the labor force. Female participation rates had reached 60%. Over this same period, male labor force participation rates have decreased from 86% to only 69%. So what's behind these changes? One force is big, structural changes in our economy over the past half-century. In particular, manufacturing has declined as a share of the economy, and services have increased. The decline in manufacturing has tended to reduce male participation rates. And the increase in services has tended to increase female participation rates. Let's take a closer look. Manufacturers used to hire a lot of relatively low-skilled, low-education workers, most of whom were men. Technology, however, has made manufacturing much more productive. We actually manufacture more goods in the United States than ever before, but we do so using fewer workers. And the workers who are hired in manufacturing -- they're more likely to be highly educated software engineers than relatively low-skilled line workers. The decline in manufacturing jobs has hit low-skill, low-education, male workers pretty hard. And unlike the shift from agriculture to manufacturing, these workers haven't been able to find high-paying jobs in other sectors of the economy. As a result, some of these types of workers have dropped out of the labor force. Women, on the other hand, have benefited from the shift to a service economy. Sectors that traditionally employed a lot of women, such as education and healthcare -- those sectors have grown. In addition, women more than men have increased their education levels. As these changes have worked themselves out, male and female labor force participation rates have become much more similar, although males are still about 12 percentage points more likely to be in the labor force than are females. Another important demographic factor that can influence the labor force participation rate is the age distribution of the population. Both young and older adults are less likely to work than people of middle age. Young adults, for example, are often not working because they're in college, while older people have retired. 
If the fraction of the population that is young or old changes, then we can expect changes in the labor force participation rate. We saw earlier, for example, that the participation rate has declined since 2000. Some people have suggested that this is because the economy is weaker than the unemployment rate would suggest. And as a result, many workers are simply dropping out of the labor force. There's probably some truth to this claim. But another reason is that Baby Boomers have been retiring in greater numbers -- and that alone would account for part of the decline in participation rates. Let's return to data from the St. Louis Federal Reserve, and now graph the labor force participation rate since 1980 alongside the percentage of the adult population in their prime working years, ages 25 to 54. As you can see, these two measures move closely together. As one increases or decreases, so does the other. And that's not surprising. It means that as the percentage of the people in the population who are most likely to work -- as that percentage increases, so does the labor force participation rate. And similarly for decreases. Since the share of the population which is most likely to work has been falling since around 1998, some of the decline in the labor force participation rate -- it was baked in. It was going to happen regardless of the state of the economy. In fact, careful estimates suggest that at least half of the decline in recent labor force participation rates was predictable from demographics alone. Demographics alone, however, aren't the only determinant of labor force participation rates. You won't be surprised to learn that incentives are also important. We're going to turn to that next. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
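Here's a minimal sketch of the participation-rate arithmetic from this transcript. The population counts are made up, chosen so the rate lands near the 2015 figure mentioned above.

# Labor force participation rate = labor force / adult population,
# excluding the institutionalized and active-duty military.
# All counts below are invented for illustration.

employed = 150_000
unemployed = 8_000            # jobless and actively searching
not_in_labor_force = 95_000   # retirees, students, discouraged, etc.

labor_force = employed + unemployed
adult_population = labor_force + not_in_labor_force

lfpr = labor_force / adult_population
print(f"Participation rate: {lfpr:.1%}")   # 62.5%

# Note the lever: if some of the 95,000 retire-or-study group start
# job hunting, the rate rises even if none of them find work yet.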
Principles_of_Economics_Macroeconomics
Investing_Why_You_Should_Diversify.txt
♪ [music] ♪ [Alex] In previous videos, we've hopefully convinced you of what you should not do. Don't try to beat the market by picking stocks. And definitely, don't pay someone big money to help you pick stocks. Now let's turn to a rule that tells you what you should do. Investment Rule #3: Diversify! Diversify! Diversify! And choose funds with low fees. Diversification allows you to reduce your risk by spreading your investment across many different assets. The great thing about diversification -- it's a free lunch. It reduces your risk without reducing your return. Don't put all your eggs in one basket. I'm sure you've heard that saying before. Yet when Enron -- one of the most successful and seemingly safe energy giants -- when it collapsed, its employees had about 60% of their retirement savings in Enron's stock. Many employees who once had been millionaires -- they retired with next to nothing. So let's be clear: it may sound loyal, but investing in the stock of your employer -- it's never a good idea. Two reasons: First, you should never have a substantial fraction of your wealth in a single asset, whether it's your employer's stock or not. Second, investing in your own employer really does put all your eggs in one basket. If the company goes down, you lose your job and your retirement savings at the same time. Terrible idea! Moreover, modern financial markets make it easy to diversify. Our favorite investment is index funds -- low-fee funds that simply mimic a large market basket, like the S&P 500 or the Wilshire 5000. But don't limit your diversification to stocks from your home country. That's called home market bias. It's quite easy to diversify internationally by buying an international index fund or by buying more big multinational companies. Once again, diversification is a free lunch. By diversifying, you can lower your risk without reducing your return. Since stock picking doesn't work, you shouldn't pay someone to pick stocks for you. We've said that already. And we've said that our favorite investment device is a low-cost index fund. But, even among index funds, some have higher fees than others. So look for funds with low fees. Vanguard often has lots of good choices of low-fee index funds. But in any case, make sure you check. Fees might not seem like a big deal, but they're one of those things that add up over time. If you invest $10,000 today, for example, and you hold it in a mutual fund that charges 1% fees annually, then in 25 years, you'll have about $57,000, assuming a market return of 8%. Fifty-seven thousand -- that's not bad. But if you invest the same $10,000 in an index fund that charges 0.2% in fees, a very reasonable number, then in 25 years, you'll retire with just over $70,000. Do you really want to give up nearly $13,000 -- for nothing? The bottom line is that when it comes to investing, simple is the way to go. If your employer offers a 401(k) plan, sign up. Invest a constant fraction of your paycheck regularly and put the money in a low-cost index fund. I know there are a few sophisticated folks out there who -- maybe they're not yet convinced. Maybe your friend advised you to buy stocks in December to catch the January effect. Or they told you that stock prices fall on Mondays. Maybe you've heard that people aren't perfectly rational and that the market is filled with anomalies that efficient markets theory has trouble explaining. 
Next up, we're going to dive into the findings of behavioral finance to see whether we can profit from irrational behavior and market anomalies. [Narrator] Check out our practice questions to test your money skills. Next up, Tyler covers how markets sometimes misbehave. ♪ [music] ♪
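The fee arithmetic in this transcript can be sketched in a few lines of Python. One caveat: the exact ending values depend on how and when the fee is assessed. The simple "fee as a drag on the annual return" model below produces figures a bit lower than the ones quoted in the video, but the gap between the two fee levels comes out in the same ballpark, which is the point.

# Sketch of how annual fund fees compound over 25 years, assuming
# the fee simply reduces the annual return. Figures will differ
# slightly from the video's, which use different rounding/assumptions.

def future_value(principal, gross_return, fee, years):
    return principal * (1 + gross_return - fee) ** years

p, r, years = 10_000, 0.08, 25
high_fee = future_value(p, r, 0.010, years)   # about $54,274
low_fee = future_value(p, r, 0.002, years)    # about $65,384

print(f"1.0% fee: ${high_fee:,.0f}")
print(f"0.2% fee: ${low_fee:,.0f}")
print(f"Cost of the extra fee: ${low_fee - high_fee:,.0f}")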
Principles_of_Economics_Macroeconomics
Monetary_Policy_The_Negative_Real_Shock_Dilemma.txt
♪ [music] ♪ [Alex] So far, we've reviewed the challenges that the Fed faces when dealing with a straightforward aggregate demand shock. Now, we're going to graduate to a more difficult scenario, more difficult for the Fed -- a negative real shock to the economy. Recall from earlier videos that a real shock, such as a rapid rise in the price of oil -- that will shift the long-run aggregate supply curve to the left, causing growth to decrease and inflation to increase. Unfortunately, combating these two issues -- sluggish growth and high inflation -- that requires opposite actions. To decrease inflation, the Fed would have to decrease the money supply and reduce aggregate demand. That will reduce the growth rate even further. Alternatively, the Fed can try to increase real growth by increasing the money supply and increasing aggregate demand. But that comes at the cost of even higher inflation. And remember, economic data isn't always easy to understand in real time. It sometimes happens, for example, that the higher inflation rate is seen in the data before the growth rate starts to decline. So the Fed -- it might start to cut back on the money supply before realizing that the economy -- it's heading towards a recession. So the Fed may start to move the economy in the wrong direction before learning what the actual state of the economy is. And this isn't the end of the dilemma. It's common for negative real shocks and negative aggregate demand shocks to come together. In the real world, everything is intertwined. And bad news, like an oil shock, can cause people to become pessimistic and to cut back on their spending, causing aggregate demand to fall. Now if you're confused right now, don't worry, you're not alone. Fed economists get confused as well. It's just not obvious how to correctly identify the combination of shocks that's hitting an economy. And so there's always lots of heated debate among Fed economists and policymakers about what the right course of action is. Although the Fed has considerable power to influence aggregate demand, the complexity of the economy and the challenges of data quality, timing, and control -- that leaves lots of room for error. In fact, the Federal Reserve has probably made some booms and recessions worse rather than better. Some of the errors of the Fed -- we're going to take those up in the next video. And this will help us understand the practical challenges faced by economists acting in real time to conduct monetary policy at the Federal Reserve. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Growth_Rates_Are_Crucial.txt
♪ [music] ♪ [Alex] In our last video, we covered the surprisingly large differences in living standards between countries. But how did we get to where we are? How did these differences come about? Now we're going to dive into growth rates, and we're going to see how they affect prosperity. So this graph shows real GDP per capita in the United States since 1800. But let's just give a word of interpretation. A 1% increase from a base of 100 -- that's 1. But a 1% increase from a base of 1,000 -- that's 10. So a graph like this can make it seem as if the economy is growing at a faster and faster rate. Actually, all that's really going on here is a change in the base, in the size of the economy. So to handle this issue, we're going to change the graph to a ratio scale. This will help us to see growth rates a little bit more clearly. Now, each tick is a doubling. So here we go from $1,000 to $2,000. Now, $2,000 to $4,000, and so forth. The nice thing about these graphs is that a straight line means a line of constant growth. So for example, here's GDP per capita in the United States in 1845. It was around $2,000. Thirty-five years later, in 1880, it had doubled to $4,000. So we know immediately, right, from the Rule of 70 that the growth rate over this period was about 2% per year. So the lesson from this graph is that the most basic reason that the United States is wealthy is simply that it's grown consistently for a long period of time. We can also use this graph to do something neat. We can look at other countries today and place them in U.S. history. For example, here's Bangladesh and Uganda, both of which have a real GDP per capita today, which is about the same as the United States had in 1800. Here's India. The real GDP per capita today -- about the same as the United States had in 1880. Here's China -- about the GDP per capita of the United States during the Roaring '20s. But remember, India and China -- they're growing really rapidly. So anything I say today is going to be a little bit off tomorrow -- they're catching up. Here's Italy. It has a GDP per capita today, which is about what the United States had around 1980. I remember 1980. I got an Atari. It was pretty good. Life was good. So life in Italy is pretty good. Of course, these comparisons -- they're imperfect. One reason is especially interesting. Every country in the world today has a greater life expectancy than even the richest countries had in 1800. And that's because poor countries have benefited from spillovers from growth in the rich countries -- things like the eradication of diseases, like smallpox, the creation of antibiotics, improvements in the scientific understanding of sanitation. So, even countries which haven't grown in GDP per capita -- they are a lot better off in other ways because of spillovers from the rich countries. So these comparisons, yeah, they're imperfect. But I do think they can still give us some intuition for living standards in other countries, and also for how steady growth improves living standards. So real GDP per capita in the United States -- it's doubled about every 35 to 40 years. And over several generations, it's this steady growth, which results in monumental increases in the standard of living. If things had been different, if the United States had grown more slowly, for example -- suppose it had grown by, let's say, 1% per year since 1800 -- then GDP per capita today would be much lower, about what we had in 1940. 
Now remember, in 1940, hardly anybody has a car, they're just getting out of the Great Depression, they're about to go to war, World War II, no televisions. People in 1940 were pretty poor. In fact, the average person in 1940 had an income that today would put them below the poverty level. On the other hand, if the growth rate had been higher, suppose it had been 3% per year, then we would've hit our current living standards in 1917. And if we'd continued at that rate, then today we'd have a real GDP per capita level of $893,000. That would've been pretty nice! At current rates of growth, we're going to have to wait until 2159 before we hit that level. I'm probably not going to make it, unfortunately... unless of course, we can find some way of increasing our growth rate. So the lesson here is clear. It's that even small changes in growth rates -- they have really big effects when they're sustained over time. You might wonder, "Why did it take so long for growth in real GDP per capita to really get going? Why didn't it happen before the 1800s? You know, why didn't the Industrial Revolution, why didn't it happen in 1200 or 1200 B.C. for that matter?" That's a really important question. And in our next video from Everyday Economics with Don Boudreaux, he's going to take a look at some of the potential answers and some of the mysteries behind that deep and important question. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click, "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
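To see how sensitive these counterfactuals are, here's a small sketch of compound growth from a $2,000 starting level, an assumption roughly in line with U.S. real GDP per capita around 1800. The exact dollar figures depend on the starting level and the span chosen, so they won't match the video's numbers precisely; what matters is how violently small rate differences compound.

# Compound growth under alternative sustained growth rates.
# The $2,000 starting level and 215-year span are assumptions.

start, years = 2_000, 215   # roughly 1800 to 2015

levels = {g: start * (1 + g) ** years for g in (0.01, 0.02, 0.03)}
for g, level in levels.items():
    print(f"{g:.0%} growth for {years} years -> ${level:,.0f}")

# 1% -> about $17,000; 2% -> about $141,000; 3% -> about $1,150,000

ratio = levels[0.03] / levels[0.01]
print(f"3% growth ends up about {ratio:,.0f}x higher than 1% growth")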
Principles_of_Economics_Macroeconomics
Fiscal_Policy_The_Best_Case_Scenario.txt
♪ [music] ♪ [Tyler] The best-case scenario for expansionary fiscal policy is when there are lots of underemployed resources in the economy, and the government is good at identifying and targeting these resources. In this video, we'll show how to analyze the effects of expansionary fiscal policy using the aggregate supply and demand model. We'll start with an economy that's growing at 3%, or at point "A" on the graph. Let's say, now, consumers become fearful about the future, and so they cut back on their consumption. Assuming no other changes in the economy, a drop in consumption shifts the aggregate demand curve to the left and down. The economy is now operating below its optimal level. It's no longer growing at 3%. In fact, as shown, real growth is negative, and a recession is underway. Even though the economy's growth potential is still 3% a year, when consumers suddenly stop spending, the economy takes time to adjust -- in part because wages and prices are sticky. So, the reduction in consumption creates a temporary reduction in the real growth rate as well. Of course, in the longer run, the fear will subside, and consumption will return to previous levels. That will mean the economy will readjust back to its potential. That's great, but even though everything may be fine in the long run -- still, we don't only live in the long run. As economist John Maynard Keynes once famously said, "In the long run, we're all dead." So, rather than waiting around for the long run, the government might try to step in to try to mitigate this slow, and sometimes painful, adjustment process. By increasing spending, the federal government can try to counteract falling aggregate demand. Or, the government could decrease taxes, hoping to increase private consumption. For now, let's assume the government decides to increase its spending to combat this economic slump. An increase in government spending, if done with sufficient promptness, will increase the velocity of money, causing the aggregate demand, or "AD" curve, to shift to the right. And in one scenario, government spending doesn't have to be as large as the fall in "C," or consumption, to counteract the recession, and that's because of the multiplier effect. Now we've demonstrated a situation where expansionary fiscal policy perfectly offsets the initial fall in consumption. But, as always, shifting lines on a graph is much easier than shifting around real resources in a multi-trillion dollar economy. Fiscal policy has many implementation challenges, and we'll turn to these next. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
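The claim that government spending "doesn't have to be as large as the fall in C" can be made concrete with the textbook spending multiplier, 1/(1 - MPC). The video doesn't give numbers, so the marginal propensity to consume of 0.5 and the $100 billion drop in consumption below are assumptions for illustration only.

# Textbook spending multiplier: each dollar of new government
# spending raises total spending by 1 / (1 - MPC), where MPC is
# the marginal propensity to consume (an assumed value here).

mpc = 0.5
multiplier = 1 / (1 - mpc)          # = 2.0

fall_in_consumption = 100           # hypothetical $100 billion drop in C
needed_g = fall_in_consumption / multiplier

print(f"Multiplier: {multiplier}")
print(f"To offset a ${fall_in_consumption}B fall in C, spend "
      f"${needed_g:.0f}B")          # $50B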
Principles_of_Economics_Macroeconomics
What_Do_Banks_Do.txt
♪ [music] ♪ [Alex] Some people want to save and invest; others want to borrow. Sometimes these people -- they interact directly -- say, you borrow money from your parents. But typically, savers and borrowers -- they don't even know one another. So a variety of institutions act as bridges to link savers to borrowers. In this video, we'll cover banks. Banks attract savings from many different depositors by paying interest on deposits. And the banks make loans for which they charge interest. Banks earn a profit by charging a higher interest rate on the money that they lend than they pay on the deposits that they receive. They earn this money by being a valuable middleman. Not only do the banks link savers with borrowers, they evaluate the quality of the borrowers, so that the loans are productive. Imagine that Howard Schultz came looking for a loan of a million dollars to buy a coffee company called Starbucks and transform it into something new. Now maybe you're rich and you can afford to lend him all the money. But if his venture failed and he couldn't repay the loan, it's going to be a pretty big hit on your wallet. So instead, perhaps you and 99 of your friends decide to share the risk, and you each lend him $10,000. Well, it would be extremely time-consuming and costly for all 100 of you to investigate the Starbucks business plan and decide whether to lend your $10,000. It would make more sense to appoint a single person to do the due diligence to evaluate the business on behalf of everyone. And maybe you would appoint someone who already was an expert in, say, the market for coffee. That's exactly what a bank does. It coordinates the lending of everyone's deposits, and a bank has specialized people and systems to evaluate loan applications. The bank scans the landscape, looking for the most qualified businesses and individuals to receive loans. By pooling the savings of many different individuals, the bank can make large loans, and also spread the risk across a whole portfolio of loans. That means that even if a few loans go bad, it won't bankrupt the bank. So instead of one person lending Schultz a million dollars, it's more like 100,000 people lending Schultz $10 each, and also lending a similar amount to thousands of other entrepreneurs. Notice that since deposits are being lent out, your savings don't just sit in the vault waiting for the day that you want to make a withdrawal. Bank managers take care to reserve enough cash on hand to fund those depositors that do come calling, while lending out the rest of the deposits to make productive loans. The cash that banks keep on hand -- that's called reserves, and as we'll see in a later video, things can fall apart pretty quickly if banks don't have enough reserves to pay back depositors when they do come calling. So let's sum up. Banks provide valuable middleman services to make our lives simpler. We deposit money in the bank and we earn interest without having to worry very much about risk, and without having to give much thought to how our savings are flowing into productive loans and helping to boost economic growth throughout the economy. Next up, we'll turn to another financial intermediary: stock markets. And we'll explain how stock markets turn savings into investment. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. 
♪ [music] ♪
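The risk-spreading argument in this transcript -- one lender bearing a whole million-dollar loan versus the same stake split across many loans -- can be illustrated with a tiny simulation. The 5% default probability and the independence of borrowers are simplifying assumptions, not features of any real loan book.

import random

# Risk pooling: a $1M stake as one loan vs. split over 100 loans.
# Each borrower independently defaults with (assumed) probability 5%.

random.seed(0)
P_DEFAULT, TRIALS = 0.05, 10_000

def portfolio_loss(n_loans, stake=1_000_000):
    # Fraction of the stake lost when it's split evenly over n_loans.
    per_loan = stake / n_loans
    lost = sum(per_loan for _ in range(n_loans)
               if random.random() < P_DEFAULT)
    return lost / stake

for n in (1, 100):
    losses = [portfolio_loss(n) for _ in range(TRIALS)]
    avg = sum(losses) / TRIALS
    print(f"{n:>3} loan(s): average loss {avg:.1%}, "
          f"worst trial {max(losses):.0%}")

# In a typical run: with 1 loan the average loss is ~5% but the worst
# case is a total wipeout; with 100 loans, losses cluster near 5% and
# a wipeout essentially never happens. Same expected loss, far less risk.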
Principles_of_Economics_Macroeconomics
Introduction_to_Fiscal_Policy.txt
♪ [music] ♪ - [Tyler] When the recession of 2009 hit, the federal government tried to stimulate the American economy. It cut taxes and increased spending. In other words, it conducted expansionary fiscal policy. Fiscal policy -- the government's policies on taxes, spending and borrowing -- that's used to try to mitigate fluctuations in the business cycle, to even out the booms and the busts. But how is it that expansionary fiscal policy can actually work? Imagine an economy that's operating at full employment. Workers have jobs, and factories are operating near capacity. If, in that case, the federal government tries to increase spending to, say, build a new road, then it necessarily has to take away some people and some capital from other sectors of the economy. GDP wouldn't increase, because there's already full employment. So government spending would simply be crowding out private spending and investment. Building the new road? It may or may not be a good idea, depending on how valuable that road would be. But still, the increased government spending would not, in the short run, stimulate the economy. But now, in contrast, imagine an economy during a recession. The fundamental factors of production are underused. Labor and capital are unemployed or underemployed. Machines and buildings are idle. In this case, government spending on a new road probably would increase GDP. In fact, an extra dollar spent during a recession might even increase GDP by more than a dollar. Say, for instance, the government hires unemployed construction workers. These construction workers then use their new income to, say, eat out at restaurants. This causes restaurant owners to hire more workers, and these newly employed waiters and waitresses -- they then spend their money throughout the economy. There's a kind of ripple effect, and the people who receive that money in turn spend more money themselves. The subsequent increases in spending caused by the initial increase in government spending -- that's known as the "fiscal multiplier." Now, expansionary fiscal policy is not the only kind of fiscal policy. The government also conducts contractionary fiscal policy by saving during an economic boom -- either by increasing taxes or by decreasing spending. At least that's how fiscal policy is supposed to work. Later, we'll discuss some of the political economy issues of continual deficit spending and why government surpluses sometimes are so hard to come by. - [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. ♪ [music] ♪ Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
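The ripple effect described in this transcript is a geometric series: the initial dollar of government spending is partly re-spent, that amount is partly re-spent again, and so on. A minimal sketch, assuming each round re-spends half of what it receives (the 0.5 figure is an assumption, not from the video), complementing the offset calculation sketched earlier:

# The fiscal multiplier as a geometric series. Each round of
# recipients re-spends an assumed fraction mpc of what it receives.

mpc = 0.5           # assumed fraction of new income re-spent
spending = 1.0      # initial $1 of government spending
total = 0.0

for round_number in range(1, 11):
    total += spending
    print(f"round {round_number:>2}: spent {spending:.3f}, "
          f"cumulative {total:.3f}")
    spending *= mpc

# The cumulative total converges to 1 / (1 - mpc) = 2.0:
print(f"limit of the series: {1 / (1 - mpc):.1f}")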
Principles_of_Economics_Macroeconomics
Splitting_GDP.txt
♪ [music] ♪ [Alex] GDP. It's a big number, and it encompasses a lot of activity, from consumers buying bread, to a business buying computers, to the government buying a tank. So when analyzing an economy, it's often useful to split GDP into different subcategories. I'm going to cover the two most common ways of splitting GDP: the National Spending Approach and the Factor Income Approach. First up, the National Spending Approach. This takes all the goods and services that go into GDP and splits them into consumption -- goods and services like cakes and massages bought by consumers; investment -- goods and services like computers or tractors usually bought by businesses; and government purchases, which includes consumption goods like paper and pens, and also investment goods like tanks and computers, which are bought by governments. We then need to make one correction. People in other countries might also have bought some of our goods, so we add exports. On the other hand, some of what we've counted already, some of what was bought by U.S. consumers, businesses or governments, was purchased from abroad -- imports. Imports don't add to our GDP, so we want to subtract imports. Exports minus imports is sometimes also called net exports. Now, how big is each of these categories? Consumption is the biggest at around 63% of GDP. Investment and government purchases make up the rest, with government purchases usually a bit bigger than investment. In the United States, net exports is usually quite small. It's important to remember that government purchases are different from government spending. When the government spends some tax revenue by sending out, let's say, a social security check, that's just a transfer. It doesn't add to GDP. Why not? Well, when the social security recipient gets the check and spends it on goods and services, that does add to GDP. So we don't want to double count. So remember, government purchases are just the money spent directly by government on goods and services. So why are we doing all this? Economists find it useful to split GDP in this way because the forces that determine consumption, investment, and government purchases -- they're very different. And if GDP falls, we may be interested in knowing, was that caused by a fall in consumption, or was it a fall in investment or government purchases? If we want to combat a recession, we also have different tools for increasing consumption, investment, or government purchases. We'll be looking more closely at those tools in future videos. The second way of measuring GDP is called the Factor Income Approach. And it measures GDP by adding up employee compensation, rent, interest, and profit. Now, this may seem a little bit odd. Didn't we define GDP as the market value of goods and services? How can we measure it by looking at incomes? The reason is that when a consumer spends money on final goods and services, that money ultimately is received by someone, namely, by workers, landlords, lenders, and entrepreneurs. So we can measure GDP by looking at the spending, or the other side of the ledger, by looking at the receiving. Now, in practice, there are some tricky accounting issues, such as what to do about sales taxes, but we're going to leave that to the accountants. The basic idea here is that we can compute GDP by looking at the spending or the receiving. And in fact, we do both. When we calculate GDP by adding up employee compensation, rent, interest, and profit, we call it Gross Domestic Income, or GDI. Why the different name? 
In theory, GDP and GDI are exactly equal. But since they're calculated in very different ways, they usually give slightly different results, hence the different names. Let's take a look at the FRED database. Here, we graph GDP and GDI. Hard to see a difference, right? Zoom in a little bit, however, and we can see that they're not perfectly identical. And in a recession, economists often look at both figures since one of them might sometimes give us an earlier or more accurate picture of the economic situation. Keep in mind, however, the key idea: we can split or measure GDP in many different ways, depending on the questions that we're interested in asking, but GDP is always the market value of all finished goods and services produced within a country in a year. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click, "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
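Here's a minimal sketch of the two approaches with made-up numbers (in billions of dollars), constructed so both sides of the ledger match exactly; in real data, as the transcript notes, GDP and GDI differ slightly.

# GDP measured two ways, with invented figures in $ billions.
# National Spending Approach: Y = C + I + G + (X - M).

consumption = 12_600   # C
investment = 3_300     # I
government = 3_500     # G (purchases only, not transfers)
exports = 2_300        # X
imports = 2_700        # M

gdp = consumption + investment + government + (exports - imports)
print(f"GDP (spending approach): ${gdp:,}B")    # $19,000B

# Factor Income Approach: the same spending is someone's income.
compensation = 10_400
rent = 700
interest = 900
profit = 7_000

gdi = compensation + rent + interest + profit
print(f"GDI (income approach):   ${gdi:,}B")    # $19,000B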
Principles_of_Economics_Macroeconomics
Game_of_Theories_The_Great_Recession.txt
♪ [music] ♪ - [Tyler] So let's consider in more detail four major theories of business cycles -- Keynesian theory, real business-cycle theory, monetarist approaches, and also the Austrian School of Economics. One way to get a better grasp on these differing theories is to see how they might attempt to explain a very specific historical event. And for that I have in mind the Great Recession of 2008. I'm reminded of the old analogy about the four blind men grasping an elephant. The elephant is a big, complex creature, and in this context, you can think of the Great Recession, or indeed the economy more broadly, as being the elephant. And we, as economists, we're like blind men. We're grasping the elephant. Some of us are touching the trunk. Some of us are touching the tail. Some of us are touching the leg. We see, feel, and process different aspects of the elephant. And it's not a question of one person's perspective being correct and the others' being wrong, but rather to understand the elephant, to grasp the Great Recession, what's important to do is to bring together these multiple and differing perspectives. So what would Keynesian economics say about the Great Recession? Well, a core Keynesian idea is the shortfall in aggregate demand. When there's a shortfall in aggregate demand, there are then declines in output and in employment, and that's when a recession ensues. So, for Keynes, the key components of aggregate demand were consumption, plus investment, plus government -- and, indeed, all of those were falling. Why was consumption falling? Well, the American economy had a housing bubble. Home prices were going up. Everyone was happy, spending and borrowing a lot of money. But when that bubble burst, people then felt poorer and more in debt, and then consumer spending fell. What about investment? Well, here there was a problem with the banks. Banks held a lot of mortgages, but when the real estate bubble burst, those mortgage securities, those home loans -- they were worth a lot less. A lot of banks were either insolvent or near insolvency, and they just wanted to hold onto their cash. That meant less credit creation, and it eventually meant less business investment, so that component of aggregate demand fell as well. Finally, what about government? Well, you have less output, you have less employment, governments are taking in less money in the form of taxes, and that had a negative impact on government spending. So all these key components of aggregate demand were falling, and as Keynesian economics predicts, well, you ended up with a pretty big recession. Okay, so what might real business-cycle theorists say about the Great Recession? Well, one possibility would be to go back a little further in time and ask why the American economy had so many structural problems to begin with. And if you look at the data on productivity, in fact, the rate of growth in American productivity -- it slowed down dramatically, so there was something wrong with the supply side of the economy. So maybe the Keynesians are right that circa 2008, 2009, there was a big problem with aggregate demand. But, where did that problem come from? It came from the fact that we were, ultimately, creating less wealth. Imagine being in America, where people were spending and borrowing as if productivity were growing at about 3% a year, but in reality productivity was only growing at about 1% a year. 
So, a more fundamental explanation, from the point of view of a real business-cycle theorist, is to focus on the problems, the structural issues in the American economy, even before that 2008 crash came along. Real business-cycle theorists also would look at the period of recovery and ask why was recovery so slow and so painful? Some of that was the continuation of a low rate of productivity growth, but also there was a lot of policy uncertainty. In some cases, taxes went up, or subsidies were applied in such a way that individuals had an incentive not to re-enter the labor force as quickly as possible. Okay, so what about the monetarists? Well, the monetarists, in some ways, take the side of the Keynesians -- that is, they see aggregate demand as the big problem in the Great Recession. But they talk much more about monetary policy. And here I'm thinking of one offshoot of monetarism in general -- it's sometimes called "market monetarism." The market monetarists look at 2008, just when trouble was starting to break. Now, according to monetarism, you want the Federal Reserve to be maintaining a pretty constant flow of nominal expenditure through the economy so that aggregate demand doesn't fall. Trouble was about to break loose in 2008, but a lot of people at the Federal Reserve didn't know this. They were still worried about rates of price inflation being too high. So the Federal Reserve did not perform the kind of expansionary monetary policy that would have been called for. So in the market monetarist view, yes, there was an aggregate demand problem; mostly, we can blame the Federal Reserve System, and the key moment there is, say, fall of 2008, when the Fed should have been much more expansionary. Okay, so what might the Austrian School of Economics have to say about the Great Recession? Well, like some of the real business-cycle theorists, the Austrians want to go back a bit further in time and ask what are the historical roots of the problems of 2008-2009? Some of the Austrians have been very critical of Fed policy. So, starting at around 2001 -- credit conditions were fairly loose, interest rates were quite low, and a lot of stimulus was given to credit, and also to housing markets. And it wasn't just the central bank -- there were actually a lot of different government programs to encourage mortgages, guarantee mortgages, and, of course, the American tax system also encourages borrowing to buy more homes. So you have this overinvestment, and, indeed, malinvestment in real estate, and according to the Austrian point of view, well, a lot of that came from government in the first place. So the Austrians admit there was a housing bubble, but they want to take a deeper perspective and ask, why did the housing bubble get so bad to begin with? Some of the Austrians also -- when they look at the length and severity of the recession -- they cite some of the factors that the real business-cycle theorists have talked about. So, putting all of those perspectives together, when you ask the question -- what are the solutions to the problems the American economy has faced? -- there is indeed some need to choose across differing schools of thought. The Keynesians and the monetarists have different recipes for addressing aggregate demand, whereas the Austrians and real business-cycle theorists -- their concerns are really quite different, and very often they focus more on simply letting markets adjust. 
But, putting aside solutions, if we're just trying to understand how did this all happen? What was the entire series of events? I would go back to this metaphor of the elephant. Economies and macroeconomies, they're very complex, and we, as economists, we are operating somewhat blind. But we can grasp different sides or different angles of economies, like the blind men can grasp the elephant. But it's not enough just to touch or talk about one part of what happened. So when we put together all of these differing and multiple perspectives -- that actually gives us a deeper and wiser understanding of what the Great Recession was all about. - [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
What_Is_Money.txt
♪ [music] ♪ [Alex] What is money? In previous videos, we've taken the ordinary meaning of money for granted. But now we want to get a better understanding of money, banking, and central banking. And to do that, we need to be more precise about what money is and how we measure the supply of money. Economists typically define money as a widely accepted means of payment. Basically, money is anything that can be easily used to buy goods and services. Now it's clear from this definition that currency -- paper bills and coins -- they're definitely money. Most of your payments, however, are probably made by writing a check or using a debit card -- both of which transfer funds from your bank account to the seller's bank account. Checking accounts, therefore, are also considered to be money. What about savings accounts? Well, now it gets a little bit tricky. Technically, you can't use the funds in a savings account to buy goods and services directly. But in practice, it's just so easy to move funds from a savings account to a checking account that we often also define savings accounts to be money. For the same reasons, we often also define funds in a money market mutual fund to be money. What about jewelry? Is jewelry money? What about a watch or a comic book? Comic books aren't a widely used means of payment, but you could sell a comic book in your local pawn shop and use the proceeds to buy goods and services. So is a comic book money? Probably not. Unlike moving funds from a savings account to a checking account, selling a comic book -- it takes quite a bit of work, and you never know exactly how much you're going to get. So we don't consider comic books to be money. The basic idea, then, is this: we count as money any asset that's a widely used means of payment or any asset that can be easily converted into a widely used means of payment with little loss in value. What is and isn't money, however -- it's not written in stone. There can be judgment calls. As a result, economists have defined several different measures of the supply of money. The most important of these are the monetary base and the cleverly named "M1" and "M2." The monetary base is defined as currency plus reserve deposits -- deposits held by banks and other institutions in their accounts at the central bank, the Federal Reserve. You may not have heard of reserve deposits, but they're basically the checking accounts that banks use to pay one another. Since so many payments require that funds move from bank to bank, reserve deposits are a very important part of the financial system. M1 is defined as currency plus checkable deposits. M2 includes M1 plus savings deposits, money market mutual funds, and small time deposits. Now, there are other definitions of the money supply, but these are the ones which are most commonly used. The monetary base is important because, as we'll see in future videos, the Fed has the most direct control over the monetary base. However, in order to have profound effects on aggregate demand and the economy, the Fed must also influence the bigger definitions -- M1 and M2. And in the next few videos, we'll learn about the tools that the Fed uses to try to control the money supply and aggregate demand. But first, we need to understand more about how banks can also influence the supply of money through fractional reserve banking and the money multiplier. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. 
Or, if you're ready for more macroeconomics, click for the next video. ♪ [music] ♪ Still here? Check out Marginal Revolution University's other popular videos.
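The three measures defined in this transcript nest neatly, which a few lines of Python make plain. All dollar amounts are invented for illustration; only the definitions come from the video.

# The money-supply measures from the video, with made-up amounts
# in $ billions.

currency = 1_500
reserve_deposits = 2_400        # banks' accounts at the Fed
checkable_deposits = 2_000
savings_deposits = 9_000
money_market_funds = 2_700
small_time_deposits = 400

monetary_base = currency + reserve_deposits
m1 = currency + checkable_deposits
m2 = m1 + savings_deposits + money_market_funds + small_time_deposits

print(f"Monetary base: ${monetary_base:,}B")   # $3,900B
print(f"M1:            ${m1:,}B")              # $3,500B
print(f"M2:            ${m2:,}B")              # $15,600B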
Principles_of_Economics_Macroeconomics
What_Caused_the_Industrial_Revolution.txt
♪ [music] ♪ - [Don] When we tell the tale of the hockey stick of human prosperity, the phenomenon of innovationism plays a leading role in the story. Think about it. The steam engine, indoor plumbing, penicillin, semiconductors, air conditioning, automobiles, TVs, airplanes, desktops, laptops, iPads, smart phones, the internet -- the list of brilliant inventions from the past few centuries is long. Yet, the number of relatively minor, unsung improvements is still longer -- much, much longer. I'd personally like to give a shout-out to whoever invented the sealed lunch bag. You rock. The great economic historian, Deirdre McCloskey, coined the term "innovationism" to describe this phenomenon. She contends that it is the defining feature of the past 200 or so years of human history. Of course, the world had inventors and innovators before the 18th century, but they were few and far between. Compared to today, the world before the 18th century was not only very poor, it was also static. People in, say, 10th-century France or 15th-century Sweden lived their entire lives without much change. Their economy, their world, was pretty much like their parents' world, which was pretty much like their parents' world, and so on, for generations on end. So what caused this orgy of innovation and the resulting bend in the hockey stick? Scholars still debate this question today. Of course, one important component, as argued by Nobel economist Douglass North, was good institutions, such as secure property rights, non-corrupt courts, and the rule of law. These institutions laid the foundation for the resulting expansion of specialization and trade, which unquestionably fueled the innovation engine. However, some scholars contend that this explanation is incomplete. For example, some point to improvements in education, others to the discovery of inexpensive access to reliable energy, like plentiful coal in England. McCloskey argues that the vital spark for all of this innovation was a change in attitudes. Specifically, the growing appreciation, among ordinary people, of entrepreneurial innovators and of the economic changes they unleash. Rather than celebrate conquerors and kings, people began to applaud merchants and inventors. Whatever the answer, getting it right is of profound importance, not just because it explains how we got to where we are today, but, much more importantly, because it is crucial to helping still-poor people reach our high level of prosperity, as many around the world are unlucky enough to live on the handle of the hockey stick. Voting continues, so please send us whatever additional Everyday Economics questions you have. Here's the current leader board. Go vote and tell us what topics you want covered next. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Sticky_Wages.txt
♪ [music] ♪ [Tyler] Unemployment is one of the biggest personal and social costs of a recession. Now, given that unemployment is so disruptive, you might wonder, "Why don't employers just cut wages and lower their labor costs rather than firing workers? Wouldn't this be better for just about everyone?" In fact, wage cuts are not as common as you might think. And this is a phenomenon called "sticky wages." Sticky wages means that wages get stuck and fail to adjust downwards, slowing the recovery process during a recession. So, why don't these wage cuts happen more often? There are a few reasons, but to explain one of them, I'll turn to the parable of the "angry professor." This is based on some people I know -- for instance, my brother Tyrone. He's a professor at Cornell, and he's a perfect example of an angry guy. Why? In a university, when a professor's nominal wage is cut, or even when he or she doesn't get a raise, you'd be surprised at how these people react. They get disgruntled. Their morale can fall. And a lot of them -- they publish fewer papers, or maybe they make trouble at faculty meetings, or they don't try as hard when they teach. But here's the odd part. There's a funny fact about how they interpret changes in their wages. If they're given a nominal wage cut, they get mad. But if the professor's nominal wages go up by, say, 3%, and inflation is maybe 4 or 5%, that's just like a wage cut, in real terms, adjusting for inflation. But, in that case, the professors don't get so upset. And that is what we economists call "money illusion." People sometimes get more upset by a cut in their nominal wage than by a cut in their real wage. Now, that matters because a firm often doesn't want to lower nominal wages because of worker morale, and that means that wages fall only slowly in a recession, even when falling wages would end the recession more quickly. This also helps explain why some price inflation can do good in a recession. Price inflation makes it easier for real wages to fall. So, for instance, if prices are going up by 4%, employers could give a 2% increase in wages, and that helps keep up worker morale, even though real wages, and indeed real labor costs, have fallen by about 2%. That fall in the labor cost -- it improves employment, or at least helps limit its deterioration. Now, we can see sticky wages in the data, but do we really know why it's happening? Well, we have a pretty good idea. Economist Truman Bewley -- he surveyed managers about their employment decisions during a recession. He found the main reason employers fire employees, rather than cut their wages, was because they're worried about employee morale. Low nominal wages can bring low morale, and that can generate low productivity, as we saw in the case of our "angry professors." Sometimes, it's easier just to fire some of the workers -- and have the low morale leave the building altogether, and keep the nominal wages constant for the rest, and then reassure them that their jobs are secure. Note that sticky wages are often more sticky for employed workers than for unemployed workers. Wages for a person often can, and do, adjust downwards during a recession, but often that's only after being fired from their initial job and then rehired by a different firm at a lower wage rate -- a slower process than if your current employer could simply cut your nominal wages and keep you on with the same level of morale. 
Now, given that the sticky wages phenomenon exists, it takes a lot longer for an economy to adjust to negative shocks. In the meantime, unemployed workers are bearing some very real costs. [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Frictional_Unemployment.txt
♪ [music] ♪ [Alex] Frictional unemployment is short-term unemployment caused by the ordinary difficulties of matching employee to employer. The moment a student graduates, for example, and starts to look for work, they're officially unemployed. After a few weeks of applications and interviews, the student might be offered a job, but perhaps the pay is too low, or the location not quite what the student wanted. The student remains unemployed. Only after a few weeks more does the student find and accept a job and officially exit unemployment and enter employment. The student's period of unemployment -- that's frictional unemployment. Now frictional unemployment -- it's ever-present, because the U.S. economy is very dynamic. To see this dynamism, let's take a closer look at some of the job statistics. We often hear on the news that, say, 200,000 new jobs were created or lost this month. Here's a graph of net employment changes. You can see the big recession in 2008 and 2009, when, in the worst months, as many as 800,000 jobs were being lost. Since the end of 2010, you can also see the recovery, with a little more than 200,000 jobs created every month. Now, these figures -- they are useful, but it's important to understand that they're net changes. When the news reports that 200,000 new jobs were created this month, what actually happened is that there were around 4.5 million new hires and 4.3 million new separations, that is, quits or layoffs. So the net number -- it hides the vast amount of job change which is actually happening behind the scenes. Every month, millions of people quit their jobs -- sometimes to get a new job, sometimes to go back to school, sometimes to retire. Other people start new jobs after graduating or finding new opportunities. This all causes frictional unemployment, and it's a normal part of a dynamic economy. Now sometimes changing jobs isn't by choice. People lose their jobs due to a firm going bankrupt, downsizing, or moving locations. But that can also be part of a healthy economy. When firms compete, some will naturally do better than others at delivering the products and the services that consumers actually want. We used to fly Pan Am, eat at Bob's Big Boy and choose our top 10 friends on Myspace. These firms, however, they've disappeared, while others such as Southwest Airlines, Shake Shack and Facebook have grown. It's easy to see these big changes. Less obvious are the smaller changes that occur every day. But all of these changes are important because they move resources across the economy from where those resources have low value to where the resources have high value. So short-term, frictional unemployment -- it's inherent in a growing and changing economy. And overall, it's a small price to pay for growth and change. More serious, however, are the two other types of unemployment: structural unemployment and cyclical unemployment. That's what we'll turn to next. [Narrator] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
How_the_Federal_Reserve_Worked_Before_the_Great_Recession.txt
♪ [music] ♪ - [Tyler] The Federal Reserve is one of the most powerful players in the economy, because it controls the supply of money. And through that control, it influences aggregate demand in the economy. Sometimes the Fed wants to increase aggregate demand, and at other times, decrease aggregate demand. But how does it do this? Well, the Fed uses the money supply and interest rates to affect the amount of loans and credit. Let's briefly recap how the Fed did this in the old days, before 2008. Now in the old days, the Fed typically conducted monetary policy by targeting the federal funds rate with open market operations. What's the federal funds rate? Well, the federal funds rate is the overnight lending rate from one major bank to another. Yes, banks do loan money to each other. Recall that banks make money by taking in deposits and using those deposits to make loans. Banks cannot lend out all of their deposits, because they need some funds on hand to settle transactions with other banks, to give to customers, and also to satisfy the Federal Reserve, which requires, by law, that banks hold a certain percentage of their deposits as reserves. Prior to 2008, the banks didn't have much incentive to keep excess reserves -- that is, reserves above and beyond what was required by law -- so the banks tried to keep reserve holdings relatively low. Sometimes banks found themselves with too few reserves to meet the requirements of their customers or of the Fed, so they borrowed reserves from other banks. Borrowing and lending of reserves in the federal funds market -- that established an interest rate, the federal funds rate. And now we get to our second concept, open market operations. The Fed affects the federal funds rate by performing open market operations, and those we define as the Fed using its reserves to buy and sell government securities, typically Treasury bills. And the Fed is making those trades with banks. So if the Fed wanted to lower interest rates, it would buy T-bills from banks, thus increasing the supply of bank reserves. We call that an expansionary open market operation. The new reserves would allow banks to make more loans, thus stimulating the economy, making it easier to start or expand new businesses or easier to get a mortgage. This increase in reserves would also lower the opportunity cost of banks loaning those reserves out to other banks, and that, in turn, would lower the federal funds rate. Thus, prior to 2008, the Federal Reserve used open market operations to change the supply of reserves until the federal funds rate was more or less at the level the Fed wanted.
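As a rough illustration of that mechanism, here is a stylized Python sketch of an expansionary open market operation. All of the balance-sheet figures and the 10% reserve requirement are assumptions chosen for illustration, not actual Fed or bank data.

```python
# Stylized bank balance sheet, in billions of dollars (illustrative only).
bank = {"reserves": 32.0, "t_bills": 50.0}
deposits = 300.0
required_ratio = 0.10  # assumed reserve requirement

def excess_reserves(bank):
    # Reserves held above the legal requirement.
    return bank["reserves"] - required_ratio * deposits

def fed_buys_tbills(bank, amount):
    # The Fed pays for T-bills by crediting the bank's reserve account.
    bank["t_bills"] -= amount
    bank["reserves"] += amount

print(f"Before: ${excess_reserves(bank):.0f}B excess reserves")  # $2B
fed_buys_tbills(bank, 5.0)
print(f"After:  ${excess_reserves(bank):.0f}B excess reserves")  # $7B
# More excess reserves mean more lending and a larger supply of funds in
# the federal funds market, putting downward pressure on the funds rate.
```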
This figure shows excess reserves prior to October, 2008. Note that banks were typically holding about 2 billion dollars in excess reserves in that earlier period. By the way, if you're wondering, that big spike in 2001? That was a response to the terrorist acts of 9/11, when the Fed made tremendous amounts of emergency cash available to the financial system. Now, at that time, demand deposits in the system often were around $300 billion, so excess reserves were really quite small relative to deposits, less than 1%. Now, going back to the longer story, keep in mind that while the Fed has considerable control over the federal funds rate, there are lots of different interest rates in an economy. In theory, these interest rates to some extent move together, and they're affected by the interest rate the Fed does influence. In practice, those connections can be looser or tighter, and that will influence how good a job, or how exact a job, the Fed does in steering the economy. How exactly does this work in practice? Well, the Fed Chair announces a change in the target federal funds rate. That signals the Fed will buy and sell T-bills until the federal funds rate adjusts to the new target. It's interesting, though -- usually the federal funds rate adjusted to the Fed's announced level very quickly, sometimes well before the Fed even conducted the open market operations at all. In fact, the Fed's communication is another important tool the Fed has to influence the economy. For instance, the Fed has very important psychological effects on the market, through its talk, its posturing, and its announcements. Of course, the Fed had other instruments to influence the economy before the Great Recession, but we're focusing on the most important tools. In summary, before the Great Recession the Fed usually changed the supply of bank reserves to affect interest rates and the money supply, and thus it could influence credit conditions and aggregate demand. That was then. Now, for our next video, we're going to consider the contemporary procedures. - [Narrator] You're on your way to mastering economics. Make sure this video sticks by taking a few practice questions. Or, if you're ready for more macroeconomics, click for the next video. ♪ [music] ♪ Still here? Check out Marginal Revolution University's other popular videos. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Patents_Prizes_and_Subsidies.txt
♪ [music] ♪ [Alex] Growth on the cutting edge is all about the creation of new ideas. So we want institutions that create incentives to produce new ideas. Ideas, however -- they've got some odd properties and it makes this kind of tricky. Two people can't use the same shovel at the same time. If one person's using the shovel, the other person can't. That's pretty obvious. Two people, however, can use the same idea at the same time. In fact, the same idea can be used by millions of people and then millions more. In the language of economics, ideas are nonrivalrous. Or we might say that ideas are made to be shared. We want to spread good ideas as far and as wide as possible because the more people who use a good idea, the greater the gains. But, there's a problem. If no one is ever excluded from using a new idea, who will pay for new ideas? And if no one pays for new ideas, then what incentive will there be to create new ideas? Let me give an example. It takes about a billion dollars and years of research and investment to come up with a new drug and to get it through the FDA approval process. But once the formula is known, pharmaceuticals are typically cheap to produce. The first pill costs a billion dollars, the second pill costs 50 cents. Imitation is much cheaper than innovation. But if anyone can copy the new drug, the price for the new drug will quickly be pushed down to its production cost: 50 cents. Now that sounds great! But, that would leave the innovator with no way to recoup the billion dollars of research and development. So we might be worried that if we let ideas spread as far and as wide as possible -- that ideas are going to be under-supplied. The US founders understood this problem, and so they created a special institution to incentivize idea creation and they wrote it right into the U.S. Constitution. In order to promote the progress of science and the useful arts, the Constitution empowers Congress to give to inventors an exclusive right to use and sell their invention for a limited period of time. In other words, a patent. Patents give innovators a temporary monopoly, and thus a way to profit from their idea before the imitators can jump in and push prices down. This gives the innovators more incentives to innovate. But it also means that good ideas -- they don't spread as far and as wide and as quickly as possible. Patents can prevent other innovators from building on top of good ideas. Check out my other video on patents that talks more about this. So we have a great dilemma. We want lots of ideas, but we also want to spread new ideas, and these two goals are in conflict. Navigating this tension is complicated. A patent is one solution, but how long should the patent last? Ten years? Twenty? Fifty? And how broad should the patent be? And how much of an innovation counts as enough to get a patent? Should we allow patents on new business methods like online auctions? Should Amazon be able to patent one-click buying? How about on genes, like the breast cancer gene? And how about patents on new methods for teaching yoga? There are no simple answers to these questions. It's all about balancing incentives for new ideas and letting ideas free in the world so that other innovators can build and improve upon them. And here's another problem. Fundamental discoveries in mathematics, physics, and molecular biology -- they're really important. But often, the more fundamental the idea, the more difficult it is to profit from . . . 
precisely because the idea's just got so many applications. If it weren't for Einstein's theory of relativity, for example, the GPS navigators in our cars would send us way off course. Yet Einstein was never paid for this application of his idea. In our Principles of Microeconomics course, we learned that goods that create spillovers or positive externalities -- that they're undersupplied. But we also learned that we might be able to increase the production of these goods with subsidies. Government funding of universities and government research grants subsidize the production and sharing of new ideas. This argument for subsidies is strongest where the spillovers are largest, namely for research in basic science. Subsidizing universities or government labs, however, doesn't give researchers much skin in the game. Researchers might work on problems that they find interesting, rather than on the problems that consumers actually want solved. So instead of paying for the input of research, another idea is to offer a large prize for the output of solving a problem. Instead of pharmaceutical patents, it might be possible, for example, to offer pharmaceutical prizes. You win a large prize by creating a drug that successfully cures a disease and then you give the solution to everyone so the idea is spread as far and as wide as possible. One advantage of prizes is that they leave it open how a goal is to be accomplished. Prizes are often won with radical new ideas that no one was expecting. Prizes and subsidies, however -- they've got problems of their own. Who decides what gets subsidized? Who decides what goals get prizes? Patents are profitable only when consumers value the new idea. Subsidies and prizes -- they don't have to pass this same market test. Now that doesn't mean that we shouldn't have subsidies and prizes. It's just another demonstration of how tricky it is to create the best institutions for producing good ideas. When it comes to ideas, just as with goods, institutions create incentives. We want incentives to create valuable new ideas, but also to allow ideas to spread as far and as wide as possible. Patents, subsidies, and prizes -- they're three types of institutions that navigate these trade-offs in different ways. And each may be best at different times or for different types of ideas. Finally, there are other factors that determine the volume of ideas. More people, for example, means more ideas. Next up is a TED talk by a handsome and brilliant economist who will explore some of these additional factors. Don't miss it. After that, we'll finish this section with the Idea Equation. And that's going to help us predict the future of ideas and the future of economic growth. [Announcer] If you want to test yourself, click "Practice Questions." Or, if you're ready to move on, you can click "Go to the Next Video." You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Intro_to_Stock_Markets.txt
♪ [music] ♪ [Alex] Let's continue our discussion of financial intermediaries by looking at stock markets. Stocks are shares of ownership in a corporation, and they're traded in organized markets called stock exchanges. Let's go back to the example we've used before, Starbucks. A member of the public could first buy shares of Starbucks in 1992, after it completed its Initial Public Offering, or IPO, otherwise known as going public. If you own Starbucks shares, you're a part owner of the Starbucks corporation, and you're entitled to a share of the firm's profits. Sometimes you receive this profit directly through a dividend payment. Profits can also be reinvested in the business to grow it, hopefully increasing the value of your shares if you ever decide to sell. It's important to note that when we think about turning savings into investment through buying stocks, it's not the typical buying and selling of existing shares of stock that we're thinking of. That just transfers ownership from one shareholder to another. It doesn't mean that Starbucks actually has additional money to invest. It's when new shares of stock are issued and sold that savings are turned into investment. That happens at the IPO, the Initial Public Offering, or when firms decide to issue new shares of stock, often as part of a plan to raise money to invest in significant new business ventures. The existence of stock markets is a key institution for encouraging entrepreneurship. Selling shares directly raises money to fund big ideas -- that's clear. But less obviously, IPOs provide a big payoff for founders and venture capitalists who invested their time and money when the firm was just a risky startup. Once a company goes public, founders and initial investors can sell some of their ownership in order to diversify their own holdings. Here's an important difference between banks and stock markets. When you purchase a stock, you're essentially making a bet on the success of that company. So when it comes to the stock market, some savers will wind up happy and others will wind up a little bit sad. So the stock market can be a riskier method of investment than investing through banks. Bank savers typically do not have to deal with risky ups and downs in the value of their deposits. Next up, we're going to look at another type of financial intermediary -- the bond market. But before you go, let us know what you think of our videos. Drop me an email, or leave a comment on our feedback site. Thanks! [Narrator] If you want to test yourself, click “Practice Questions.” Or if you're ready to move on you can click “Go to the Next Video.” You can also visit MRUniversity.com to see our entire library of videos and resources. ♪ [music] ♪
Principles_of_Economics_Macroeconomics
Tyler_Cowen_The_Economics_of_Choosing_the_Right_Career.txt
♪ [music] ♪ [Tyler] For individuals entering the labor market for the first time, there's some good news and some bad news. The good news is that the higher earners, or those with postgraduate degrees, are earning more than ever before. The bad news is this: In the year 2000, four-year college grads actually earned more with their entry jobs than they're earning today. Another way to put this problem is to think about taxi drivers. In the year 1970, only 1 out of 100 taxi drivers had a college degree. These days it's about 15 out of 100. Now having a college degree means you have a much lower chance of unemployment than if you don't finish college at all. But overall, what we see is a lot of waste of talent, of human capital -- people who finish college degrees but don't have the right skills to get the best jobs. So there's good and bad news from the labor market. How do we think about what's been driving it? Well, we should go back to the core economic concepts of supply and demand. That is, the supply and demand for labor. To think about how those supplies and demands have changed, let's start with the factor of technology. Technologies have changed, and this has altered supply and demand. So think about a lot of older jobs. They were based in manufacturing. You worked in a factory. Maybe you didn't need a college degree at all, but you didn't necessarily need that much fancy training. A lot of people could take these jobs and that helped build the American middle class. Today it's different. You're much more likely to work with computers and work with information technology. And this will affect both the supply and demand of different kinds of labor. Let's think first about skilled labor. Well, skilled labor -- working with computers -- is much more powerful. The computer enhances the productivity of the skilled laborer. And information technology makes it possible for skilled labor to sell their products around the entire world. And that makes it possible for wages at the top to be higher. At the same time, with information technology, that can be pretty hard to learn. It can be pretty hard to keep up with all of the new developments. So the supply there is more limited. Now let's think about how computers interact with less-skilled labor. Well, less-skilled labor might find it harder to work with computers, but there's another factor -- the computer actually might be competing with you. So in the old days, if you would have gotten a job as a clerk filing papers, well now we do that with software. If you might have gotten a job as a travel agent, a lot of those jobs have gone away. People just book online. So changing technology has made wages rise more at the top, but has held wages down for a lot of other jobs. And new college graduates are experiencing that when they go into the labor market. The second factor of supply and demand has to do with the growth in global markets. Over the last 35 years, we've seen at least two billion new workers in global markets, often in China and in India, because those countries are now wealthier, freer and more open. Think about how this affects skilled labor. Let's say you're a worker really good at working with computers. Maybe you work at Apple, and you help design the iPhone. Well, there's now a much bigger market -- the whole world -- you can sell to, and this means the value of your labor, and thus your wages, will be higher. At the same time, say you're a lesser-skilled worker. 
Well, there are now more people in the global economy you have to compete with. And a lot of them work pretty hard and they're being paid lower wages. So if you don't have a special skill, you might find your job prospects aren't doing so well, because on the supply side, there's more competition for you. A third set of factors has to do with slower economic growth, slower productivity growth and slower dynamism in the American economy. For instance, contrary to what you sometimes might hear or read, the number of start-ups in the American economy has been declining each decade since the 1980s. That means there are fewer new jobs. That means there is less demand for new labor, and that makes it harder for a new college graduate to get the job he or she wants at the right wage. When productivity growth is low, dynamism is lower, and there is less turnover in jobs. And that can be fine if you're an incumbent who already has a great job, but if you're just starting off, that's going to make things tougher for you. Another way in which labor markets have become more static is that more and more jobs now require what is called occupational licensing, namely legal permission to do the job at all. Right now, over a quarter of the jobs in the United States require this kind of legal permission, often coming from a state or local government. It may make sense for some jobs, but should it really be the case that you need a legal license to be a barber, or to be an interior decorator? That increases the cost of entering those sectors -- it takes a lot of time and some money to get the license. Again, it's good for the incumbents, who face less competition, but it's bad for people starting off in the labor market. You put all of those factors together, and then what we had happen in this country was the Great Recession starting in 2008. This meant that output was declining and employment was declining. People were laid off. There was a financial crisis. What did a lot of employers do? Well, they froze their hiring. So again, if you were out there trying to get a new job in these years, or immediately thereafter, it was harder. And we know from the data that people who start working during bad economic times -- it's slower for them to climb the future ladder of success. So even 5 years out, 10 years out, they're earning lower wages or receiving fewer promotions than otherwise would have been the case. So this has led to a persistent effect on American labor, which has limited opportunities. So to sum it all up, the labor market is more about skills than ever before. Yes, finishing college is a great idea, but these days it's no longer enough. What really matters is how much value you can produce for an employer. Labor markets are ruled by supply and demand, and supplies and demands are changing all the time. So the way to think about how to do well in labor markets is to understand those supplies and those demands. Take a look at the relative wages of, say, engineering majors versus psychology or communications majors. You might be surprised. In today's world, the momentum is moving toward people who are trained in information technology, who work well with computers and who can exploit growing global markets. When supply and demand are ruling labor markets, the people who do well are those who have an economic understanding of where demand is high and where supply is scarce. [Narrator] Check out our practice questions to test your money skills.
Next up, we'll show you where to find data to help you decide which career to choose. ♪ [music] ♪
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
17_Packet_switching.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So my name is Hari Balakrishnan, and I'm going to take you through the rest of 6.02, doing the remaining lectures in the class. So far in 6.02, what we've looked at are ways in which we design a single communication link. So we know how to take two computers or two nodes and design what a link between them might look like. And this link might be an actual wired link. Or it might be a radio link. Or it might be an acoustic link. There's some medium over which these two guys communicate. And the main ideas we've looked at have to do with coding, in particular, channel coding, which is a strategy to combat noise and errors that might show up on the channel. And then in order to match what we communicate to the characteristics of the channel-- for example, the ability of the channel to deal in sinusoids-- we studied modulation and de-modulation. So those are the two main elements that we studied. And in both of these, we looked at both how you do this to achieve reliability-- because ultimately, we want to communicate information in a way that's reliable-- and do it efficiently. In particular with modulation, we looked at a scheme to share a medium amongst multiple conversations, frequency division multiplexing, which is the topic of one of the tasks in this lab. And with coding, we looked at ways in which you do this coding in a way that isn't just replicating every bit but involves some linear algebra operations that allow you to gain efficiency. So the rest of the class is really about taking for granted our ability to design communication links and putting them together and composing them to build networks. So the basic problem is actually very, very easy. The problem is, you're given a set of nodes-- let's say computers-- and the problem you want to solve is to come up with a way by which you can allow any computer or any phone or any device on this network to communicate with any other device on the network. And that's the problem. So you're given n nodes. And you want all-to-all communication. Now, this is a little different from the kind of-- there are other networks you could design. You could design a network where you're given n nodes and you have 1 to n communication. There's one transmitter, many, many receivers, and you want to design a network for that purpose. What's an example of a network where you have one transmitter, many receivers, and you just want to build something that makes that work? Radio is one example. Television is another example. And those are good examples. In fact, for those kinds of one-to-many networks where you have-- or k to n networks where you have k sources of information and n receivers and k is a lot smaller than n, it'll turn out that the basic frequency division approach makes sense. I mean, that's how radio stations or TV stations work. Someone in the US-- the Federal Communications Commission has decided to allocate different chunks of frequency to different TV stations and different radio stations. And the assumption is they're always going to be using it. Turns out that assumption may or may not be true.
But under the assumption that they're always going to be using it and you have many, many, many receivers, you just divide up frequencies and allow them each to transmit in their own frequencies. And then you have a receiver that's capable of tuning to different frequencies. And you get the information or the channel that you want. We'll actually come back to that problem a little bit in the next two lectures. But for today, the design problem-- and going forward, the design problem you should have in mind is you want a network where you have all-to-all communication and you want to be able to support any application. This is a big deal. We're not just designing a network to allow telephone calls to work. Or we're not just designing a network that allows you to do video conferencing. We're trying to design a network where any application can run on it-- in particular, applications that you might not have envisioned. This is the reason why the internet works really well: because when they designed the internet, they designed it under some set of assumptions. But they were really, really smart to design a network that made minimal assumptions about the application. So it's a network that's good enough for almost any application, though it isn't perfectly optimal for any application. It's just good enough for everything. And that's a really good characteristic of a well-designed network: it can work even for things you didn't even dream of. When they built the internet, they certainly didn't dream that the web would exist. They didn't dream that people would be tweeting and telling people they're going to the bathroom or whatever they do on Twitter. I mean, they designed a network. And it just kind of is amazing that all these applications can work. So the question is, what did they do correct? What did they do right? And what are things that-- what general lessons can we learn from it? And the general high level lessons you learn actually apply to any system you build. It'll turn out that whenever-- if you're confronted with a real world problem in industry or research or whatever, very often you're trying to make decisions on what you need to be doing. And it's very tempting to make decisions based on what you think it's going to be used for. But very often, what you end up eventually using it for is very different from what you thought in the beginning. So it's good to have applications in mind. But it's good not to embed too much about those applications in the design of the network. So the high-level principle here is how you can do something that works well enough without making too many assumptions about what's running on top of it. There are two big themes. They're the same two themes that we studied before that we're going to keep coming back to. The first is efficiency. And the second is reliability. The same two themes we come back to over and over again. There's a third important theme about network design, which has to do with scalability. I mean, how can you make it work so this network can work for millions or billions of devices and billions of computers? That's a topic we're not really going to talk about. I'll get to it in the last lecture. But 6.033 and 6.829 will talk about those issues. So let me start first with efficiency. If I tell you how to build a communication link that can communicate between any two devices or any two computers, it should be pretty straightforward to now design a network that allows all-to-all communication. Something out? That's a mouse.
Great. One way you can design this network is to simply take your communication link that we know how to build and do this. Just connect every pair of computers or every pair of nodes to each other. I'm probably missing a few of these. But this is a great network design because it's composed of a bunch of links to build a network. So why don't we do this? Or maybe we should do this. What? It's too expensive. Why is it too expensive? Sorry? You know, how many of you are in Professor [INAUDIBLE] recitation? Great. I understand he gives you guys money to answer or if he makes a mistake. I'm going to do the same thing. Whenever I make a mistake, Professor [INAUDIBLE] will give you some money. [LAUGHTER] So I actually-- I mean, don't hold me to this. But why don't you guys answer? This is pretty straightforward. How many links do you need? n choose 2. It's about n squared, right? So n squared, depending on the context, is either too big or too small. But it's about n squared over 2 links. It turns out that's actually a pretty large number of links, because-- and the notes talk about some of the reasons why this is too expensive. But the other reason it's a problem is that it's one thing to design a network where every computer in this room can talk to each other. And conceivably, we might get tangled up in all these wires. But we could imagine laying wires between every pair of our computers and communicating. But there are two reasons this is a big problem. I want to communicate with computers in California or China or wherever. And individual links going across the world, from my computer to China and your computer to another computer in China, just don't scale. It doesn't work very well. And the second problem, the reason why this issue matters, is that not all communication links are wires. In fact, right now, the most dominant mode by which people gain access to the internet, including right now in this room, is through radio, is through wireless. And this is a shared medium. So it's not like we can somehow put these wires together. We're going to have to share this communication medium. We're going to have to share this communication network. And somehow we have to come up with a strategy to do this efficiently. There are a few different principles involved in how you design networks. But the main one is that we're going to construct a special computer called a switch. And a lot of what we're going to be doing has to do with what we do in the switch. The other part of what we're going to be doing is what we do in the computers themselves. So our network is going to be designed using a set of rules that are obeyed and implemented and followed by the computers. OK, a special set of rules that are implemented by these computers called switches and a special set of rules that are implemented by the end computers, by the devices on the network. And together, they're going to make our communication work. So the high level plan is going to be that we take these computers and rather than put wires between every pair of them, we're going to connect them together into-- perhaps there's lots and lots of computers and many of them get connected to one of these boxes, which is a switch. And a switch may connect to other switches. And some of these switches may have other computers attached to them. And then eventually you might get to other end computers. And when you build a network like this, a structure like this, this kind of a picture is called the network topology.
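To put rough numbers on that count, here's a quick Python sketch comparing a full mesh with a design where every node gets a single link to a shared switch. This is my own illustration, not code from the course:

```python
def full_mesh_links(n):
    # Every pair of n nodes needs its own link: n choose 2 = n(n-1)/2.
    return n * (n - 1) // 2

def star_links(n):
    # One switch in the middle: each node needs just one link to it.
    return n

for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9,}  mesh={full_mesh_links(n):>15,}  star={star_links(n):>9,}")
# For a million nodes, the mesh needs roughly 500 billion links;
# the switch-based design needs a million.
```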
A switch has one or more links attached to it. These links could be wires. They could be shared things like-- like this thing here is a switch. It has no visible links, but it probably has one wire link connecting it via ethernet to the rest of the MIT campus. And out here lots of computers right now are connected to it. It gives the illusion that each of your computers has a separate link to the switch. And we look at how that illusion is maintained and done next time, next lecture. But this is an example of a switch, probably the world's-- you know, this thing is made, I think, by Cisco. So they charge $600 or $800 for it. But really, you can buy it for $40. When you put the word "enterprise" next to anything you sell, you can mark up the price. But anyway, the world's cheapest switches are on Wi-Fi access points. So you connect the stuff together into a topology. And the job of the switch is to look at messages that come in from these links and figure out what to do with those messages and make sure that together they coordinate to get messages to the destinations to which you wish to send those messages. So here's a picture that I got today from MIT's IS&T, which is a picture of MIT's network. So I just want to give you a sense for what this looks like for a campus like MIT. So the first thing to notice is that this is actually-- it's got some redundancy built in. You don't see it in the picture. But really what's going on here is that we have these two routers here. In the context of the internet, these switches are also called routers. It's taken me 10 years to pronounce it "router" because where I was brought up, they pronounced it "rooter." And many people say that. But in the US, they say "router." So anyway, these routers here, there are two backbone routers. And they're actually-- each of these guys, these other routers in these different buildings, are connected, actually, to both of these. So the idea here is that if one of those links were to fail or if one of these routers were to fail, the other guy would take over and handle this traffic. Under normal conditions, traffic is kind of balanced between these two different routers. So some of these computers-- some of these other routers are connected to one of them. Some of the other routers are connected to the other. And together they work to provide connectivity. These backbone routers get connected to these things that are called external routers, which are routers that connect to various other networks and internet service providers that MIT uses. MIT is extremely well connected. The amount of bandwidth coming in and out, as you might have noticed doing, I don't know, BitTorrent or whatever the cool people do these days with networks, is phenomenal. MIT commercially uses Sprint, which is an internet service provider, and uses Level 3, which is probably the biggest internet service provider in the US. This thing called [INAUDIBLE] tech is-- I found is that-- so MIT now does telephony through the internet. So it's voice over IP as opposed to the old telephone system. So a lot of that voice traffic goes through that network service provider. Other things here-- this NOX is-- I think it stands for the Northeast Crossroads or something like that. It connects to a network called Internet2, which is a network connecting many universities in the US. And it's a very, very high bandwidth network. And so if you were to communicate with, say, a Stanford or something like that, it wouldn't go over the public internet.
It goes over a network that's essentially not commercially paid for but is a private network connecting different universities. And it has a connection to Comcast. So many people who have Comcast in their homes in this area tend to have good-- or are supposed to, in theory, have good delay, low delay to MIT. Out here on this side, MIT is connected to other research and education networks. It has high connectivity to Fermilab and to [INAUDIBLE], because I'm assuming there's a huge amount of data flowing because of things like the LHC experiments. They send terabytes or petabytes of data back and forth. So you need high bandwidth. So they have their own network connection to do that. This NLR is something called the National Lambda Rail, which is another high speed network connecting a bunch of East Coast universities. And then out here on the edges, you have MIT connecting to other-- out here-- other internet service providers. This thing here is funny. It's called Big Ape, which is actually-- it's called the Big Apple peering exchange. It's this place in New York City where a lot of people-- a lot of companies and internet service providers have gotten together. And you can just connect to other networks. So MIT connects to, I think, 13 other networks on a non-payment basis. Whereas you have to pay money to internet service providers, you can peer with other networks essentially on a bilateral agreement. So I carry your traffic, you carry my traffic. So it turns out that out in New York, there is this building where a lot of these different networks have gotten together. And MIT is one among those networks. So it has extremely good connectivity. But you can see that already MIT is a tiny campus. And already it's got such rich connectivity to the rest of the internet. I guess as far as college campuses go, it's a big campus. But still, in the grand scale of the internet, it's a tiny thing. And you can already see that there's so much complexity and so many things going on inside the network. So the question is, how does this network get designed? And the main idea that I want to get at today is this idea of packets and packet switching. So the design principle that's used in communication networks is this idea of packets and packet switching. There are some special rules-- simple, special rules that you have to follow to allow these switches to send messages back and forth. And in fact, these are fairly obvious rules. But what's remarkable about them is how simple they are. And they can work. The main idea is that you take your message and you have to decide who it needs to be sent to. And you have to decide who it's coming from. So if I decide that I want to send a message to you in this network, my computer and your computer have to somehow have names associated with them. And in the context of packet switched networks, these names that we associate with-- ideally, these names should be associated with computers. But they turn out to be names that are associated with the link that you use from your computer to send these messages. These names are called addresses. So very concretely, if I have a computer here, my computer may have a name. But this computer here has two or three different links coming out of it. If I connect this-- even this thing here or this ethernet link to the USB port here and I connect a cable to it, that's one link. The Wi-Fi on this is another link. If I turn the Bluetooth on and use that, it's a third link. Each of those links has a different name.
The name here is equivalent to an address. Each of these things is an address. So when I send a packet, I have to tell you my address. And similarly, if I want to send someone else, some other computer, a packet, I have to specify the address that I wish to send it to. So that's the first rule of packet switching: specify an address-- in particular, specify a destination address. And you specify a source address. Now, the idea is once I specify the addresses and I construct a message, my message has some bits in it. Maybe it's a file. Maybe it's a piece of video or whatever. I add something to that message, which I call the header. The header has a bunch of fields in it specifying something about what should be done with the message. There are three or four things that you need here, but the non-negotiable part is that part of this header should specify the destination address. Other parts of it specify the source address, as well. The basic structure is very simple. I send a message in which I specify a destination address. And the job-- and my job is done as the source for the time being. I send it to some switch. I'm connected to a bunch of switches. My computer picks a switch to send it to. The switch it picks is typically the switch that that link is connected to. So if I am connected right now through ethernet and Wi-Fi, there's some rule on my computer that decides whether to use ethernet or Wi-Fi. And let's say it decides to use Wi-Fi. It sends this thing, this message, with this destination address to that access point. And that's the first switch it goes to. And then it becomes the switch's job to figure out how to get this message to the actual destination. This combination of a header that includes the destination address and some number of bits that corresponds to the message, this entire bag of bits is called a packet. For something technically to be considered a packet, it needs to have an address on it. Or it needs to have something that's equivalent to an address on it that then allows the rest of the network to decide how to send that packet onward. This is a lot like the way the post office works. When you deliver-- you write your letter. You write who it's from. And you write who it's to. You put it in the mailbox. Your job is done. And maybe at some later point, if it's registered post, you get an acknowledgment that the other guy received the message. Packet-switched networks are very much like that. They just work a little bit faster. Now, why is this idea good? Now, the reason this idea is good is that it's extremely robust at dealing with failures, at least in theory, because it becomes the job of the switches and the network to talk to each other and run some sort of algorithm between each other that allows them to construct and maintain some information so that, no matter what the failures are, as long as the underlying topology allows you at least one path to get from one place to another, the switches figure that out. And if you want to make a network more reliable, you add more switches and more links. And you figure out how to make it reliable. The end points don't really have to bother with that problem.
And you can take portions of the network that are unreliable and add some redundancy to it, add more paths to it, and run some other algorithm that allows the switches to figure out how to divert, how to route packets, or how to move these messages across. And this idea is a brilliant idea. It looks completely obvious in retrospect, like all brilliant ideas. But it's actually quite recent. I think they celebrated its 50th anniversary quite recently. In 1959, Paul Baran, who was at the RAND Corporation at the time, wrote one or two-- you know, it's not often you can call a paper seminal. This is seminal. This is really important. It just changed the way communication worked. His paper is called "On Distributed Communications"-- the first one was "Introduction to Distributed Communication Networks," where he looked at various ways you could design these network topologies and completely theoretically argued that this design would allow you to build a network that could withstand various kinds of failures-- in particular, even adversarial failures caused by enemy attacks. And the second part of the story with these messages that are in packets is he said that if you want to communicate a large amount of data, what you should do is break it up into smaller pieces. So you take a message-- if you have a big file to transfer, don't put it in one big packet. But instead, you break it up into smaller pieces and send each piece into the network. So a big file gets broken up into many packets. Each packet becomes an independent, atomic unit of delivery. Packets could be sent along very different paths, in principle, between any point in the network and any other point in the network. And at the other end, packets could arrive along different paths. And as long as there's some working path, it's the job of the network to figure out how to get those packets through. That's the basic idea. So the first one is this idea of using an address on messages. The second one is the idea of breaking it up into packets. And in particular, these packets could all take arbitrary paths. The sources and the destinations don't determine the path. The switches determine the paths that you have to use, using some algorithms that we're going to be studying. So is the idea clear? Does everybody understand kind of what a packet switched network is? The textbook-- the notes also talk about other ways of doing it. The other big way of doing it, which predates this, was what was done in the Bell Telephone network. It's called circuit switching. It's a different idea. I'm not going to talk about it in lecture. You can read about it. It's important stuff to read about, but mostly cultural at this point, because almost every network is packet switched today. So any questions about this idea? It's pretty simple. OK. So here's an example of the world's simplest packet header. This is the 6.02 reference design. So for the labs and everything else, this is the packet header we're going to be using. It has just four fields-- a destination address, which specifies where the packets should be sent. It has something called the hop limit, which I will talk about in a couple of lectures from now as to why we need it. It has a source address, mainly because when I receive a message-- when this computer receives a message from someone, it often wants to send a message back in response. And having the source address allows it to send a message back to the person who sent the message. It's just for two-way communication. And it has a length. And the reason for having the length is convenience. You kind of know, once the header is done, how many bits do you need? Or how big is the actual data corresponding to the packet? It's also called the payload. How big is the payload in the packet?
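Here is what that four-field reference header might look like in code, as a minimal Python sketch. The lecture specifies only which fields exist; the types, field order, and example values here are my own assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest_addr: int   # where the network should deliver the packet
    src_addr: int    # lets the receiver send a reply back
    hop_limit: int   # decremented at each switch (discussed in a later lecture)
    length: int      # size of the payload, for convenience
    payload: bytes   # the message bits themselves

# Hypothetical addresses: node 7 sends a 5-byte message to node 42.
pkt = Packet(dest_addr=42, src_addr=7, hop_limit=16,
             length=5, payload=b"hello")
```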
Now, a real-world packet header is a little more complicated. Just for concreteness, this is what IP version 6-- the version of the Internet Protocol everybody's trying to move to-- looks like. It has the destination and source addresses. It has the hop limit. It has the length. And it's got a few other things that we're not going to worry about. They have to do with allowing switches to prioritize certain kinds of packets so that-- I guess things like if you were doing Skype or voice telephony, you might want to schedule those packets differently in the switch so you get low delay. Or if you were-- maybe the CEO's packets get higher priority, whatever. You could come up with policies on deciding how you switch these-- how you schedule these packets. So that's the main idea in packet switching. For the rest of today, I actually want to talk about two performance metrics that people use to evaluate how well a packet switching network is doing in terms of properties that users care about. And I want to also explain to you why this idea works-- like, this idea that nodes just send data. All these nodes are sharing a communication medium-- I'm sorry. Sharing resources in the switch. So this node can send packets. This node can send packets. This node can send packets. And the switch must have a plan in mind to-- let's say that all these packets are going to some destination and have to go on this link. This switch must have a plan in mind for deciding how to take all these packets that are coming in and sending them along this link. I mean, for example, what happens if packets come too fast for the switch to handle? The speed of these links when they all simultaneously send packets could be bigger than the speed of the link going this way. What does the switch do with that? Does it just drop the packets? Does it hold onto them for some time? What does it actually do? And I want to do this first with a very simple picture that tries to get at why this idea really, really actually works. This idea that makes packet switching work has a fancy name. It's called statistical multiplexing. So let me explain what that means. Let's take it with a very simple picture. So let's say that you have a switch with one link coming out of it. And let's say that the speed of this link-- I need to get into some metrics here. So links are measured in terms of how quickly-- how quickly is the wrong word. In terms of the rate at which they can send data. And there's another metric, which is the delay of the link. So I'll get to both of these more carefully in a bit. But the important thing right now to keep in mind is the rate of the link. This is the rate at which it can send bits per second. It's a measure of throughput. So it's typically measured in bits per second. So let me actually imagine that the rate of this link is one megabyte per second, which is 10 to the 6, which is a million bytes per second, or about 10 million bits per second. Let's imagine a simple network that looks like this. Let's imagine that all these links are also coming in at 1 megabyte per second. If somebody came and told you, here's the design of my network. I have a switch.
It's connected to three computers, each of which can-- is connected with a link whose maximum speed is one megabyte per second. And this switch is going to connect to something else downstream, maybe another switch, and it goes somewhere else. And the speed of this link is one megabyte per second. Is this a good network design? How would you go about assessing that question? Is this good or bad? How would you know? Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Right. So let me ask this before we answer this question. Let's say this was 10 megabytes per second. Is this a good network design? It is. You're paying too much, though, because I mean, really, this link is too fast for the amount of load that is coming in. But yeah, you know, it's a reasonable network design. But the real question is, if it's one here, is it a good network design? And the answer, as the gentleman here pointed out, is it really depends on how much traffic or how many packets per second or bits per second these different computers are going to be sending. Let's say that they all actually send-- when they send traffic, they send at one megabyte per second. And when they don't send traffic, they're quiet. How would you determine whether this is a good network design, whether this works or not? Like, in practice, on average, how often can each of these guys be sending before you determine that this is probably not-- this network isn't going to work. Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: Right. And they may or may not be equal. Ideally, what you'd like is just to make sure that over some window of time, they send slower than the rate at which this link can ship packets. Now, the reason why packet switching works is that when you build a network like this and you scale it out to bigger numbers, it turns out to be extremely unlikely that everybody using the network exercises the network at exactly the same time. I mean, a bunch of people might have their computer on. But if you think about how it's used, you click on a link and you get a bunch of stuff showing up. And then you click on-- you read it for some time. You click on a link and something else shows up. Or if you're watching a video stream, video is compressed. So if the scene changes very often, you end up using a lot more of the-- in terms of the bitrate. But then every once in a while, it's one of these old Russian movies where nothing's changing for 10 minutes. And it's very heavily compressed. And then you get to Schwarzenegger and it's blowing your bandwidth limit. So I mean, it's kind of like that. Traffic is bursty. So when traffic arrives in bursts and the users are not all highly correlated with each other-- I mean, from time to time, you do get these correlations. These are called flash crowds. Presumably this happened last night. Everybody is hitting refresh on the New York Times website. And presumably what's happening there, of course, is that these websites really know what they're doing. So they've actually provisioned with the expectation that starting from 8:00 PM, everybody is sitting there glued. Nothing's changing, but everybody's hitting reload. And they've designed this network-- they've provisioned their network to allow for people to get the answers they want or the results they want to see. So here are some pictures. So what I did was I took-- I sniffed on the traffic in this room. So here's the kind of stuff that you see. So this is the traffic in this room during lecture. Now, this is actually not this semester.
But I would assume that it's fairly typical. I should also say this was during-- well, I don't want to say. The x-axis in these pictures is time. The y-axis is the number of bytes that was sent. So you can see that what I've done on top is I've broken time into 10 millisecond windows. So initially on top, every 10 milliseconds I just count the total number of bytes that was sent. Now, you can't read the scale on the y-axis on top. But on the top curve, it goes up to 200,000 bytes in a small 10 millisecond window. Then the curve down here does the same thing. But I've picked a 100 millisecond window. Now, what has happened when you've picked a bigger window of time? Has it become smoother or less smooth? What can you say about it? It's become a little smoother. But surely there still are these peaks. The bursts do become smoother. But they don't completely disappear. And what's remarkable about network traffic is that these bursts never completely disappear, but they do get a little smoother as you aggregate over more time. Over 100 millisecond windows, that's what it looks like. Over a one second window, it looks smoother. But you can actually see that from time to time, there are these big bursts-- over any window of time that you expand out, there's still some probability with which you'll see a big burst of traffic showing up in that window. That's kind of a nice and noteworthy characteristic of real world data traffic. In fact, even when you go to 10 second windows-- it says, look, I'm looking at 10 seconds at a time-- you get stuff that looks like this. MIT runs a website you can get access to using your web certificates. It's called mrtg.mit.edu. You can actually go to this website and you can see, for different switches, including ones in your dorm or wherever you're living if you live on campus, you can actually look at the statistics from your router. They do this on a per switch level. It's kind of interesting to see when people use this network and when they don't. I think an interesting characteristic of MIT's networks is it turns out if you look at some of the dorm network traffic, it peaks between 1:00 and 3:00 or 1:00 and 4:00 in the morning, which is probably good, because honestly, I think MIT should negotiate preferential pricing with ISPs, because no one else is using those ISP networks at that time. It turns out I learned that the Amazon Kindle kind of does that. When you do your newspaper subscriptions, they actually send it through wireless networks, through these commercial 3G and 4G wireless networks. And I believe that what they do is they send it to you in the middle of the night when not many people other than at MIT are using those networks. So you could take advantage of some of these time varying properties [INAUDIBLE] doing it. So why did I tell you this story? The same thing-- I showed you these time windows. The same thing applies when you bring many, many users together. The odds that we all are going to click on some link at exactly the same time and all of us cause a burst of traffic to happen exactly at the same time is extremely small. Now, it can happen if there's an adversary in the network. If there are bad guys-- and how many of you have heard of denial of service attacks? Yeah, DDoS, Distributed Denial of Service attacks. I understand if you know Russian you get an edge in doing it.
So these things are launched because they commandeer a whole bunch of machines and they coordinate an attack. They destroy the assumptions that make statistical multiplexing work because the normal assumption is people are not exercising the network at the same time. So you're not attacking some website or whatever at the same time. But if you coordinate an attack, then you make that assumption not hold, causing congestion to happen, causing traffic to exceed what your network link can support. But under normal, non-adversarial conditions, the assumption is that people are randomly gaining access to the network, which means that you can actually get away with the design of a network that looks like this as long as you study statistics like the average amount of traffic. Like, on average, the guy is not going to be sending more than-- this node is not going to be sending more than a certain amount of traffic when measured over some period of time. What happens when people send traffic in a burst? What happens when, from time to time, in fact, you see these bursts of traffic, right? You look at this picture here. You do it over a one second window or a 100 millisecond window and you see these big peaks of traffic. Lots of bytes at 100 millisecond window. What that really means is that this switch here is going to be getting traffic from different users that probably exceeds-- you know, is perhaps the sum of all of the input links. So it's a large amount of traffic. If you have a design like this, something's got to give, because you're getting water or packets coming in at one megabyte per second times three. And you've got a link that can only send one megabyte per second. So what can you do? What can the switch do? AUDIENCE: [INAUDIBLE] PROFESSOR: The easiest thing it can do is just drop it. Just say, you know what? Just drop it. You laugh, but I'm telling you, sometimes dropping it and letting the end point deal with it is a better strategy than holding onto it and simply keeping it in line. It's like, you've got to be careful, right? I like the idea of storing it. But for how long do you store it? How much do you store? For example, if I look at that burst of traffic here and I have a network like this and I look at this big burst of traffic here, over a 10 second window, I'm seeing traffic that's probably, in this example, perhaps 10 or 100 times bigger than the average. The average is sitting down somewhere. And maybe this is 10 times the average. The peak to average ratio might be 10 to 1 or 20 to 1. So how much should you store inside the switch? If you were designing a network and I told you, well, all right, good idea. Why don't you store the packets. You're going to put these packets into a data structure called a queue, right? Packets come in. Packets go out. Packets go out whenever the link is able to send packets. You keep shipping packets out. In the meantime, traffic's coming faster than you can handle. You're going to put stuff in a queue. How much? Do you want to keep everything? If you did, you'd be like Disney World, because they have these lines that go forever, and nothing's moving and everybody just piles on the end of the line. This is a tough question. We're going to answer this question somewhat. There's no single, easy answer to this question. But the rule of thumb that I'm going to have you keep in mind now is you're probably going to keep between 10 milliseconds and 100 milliseconds worth of traffic. I'll get to why later. 
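As a back-of-the-envelope check on that rule of thumb, here is a tiny sketch -- my numbers, not the lecture's -- that converts "10 to 100 milliseconds worth of traffic" into a buffer size for the 1 megabyte per second example link.

```python
# A back-of-the-envelope sketch of the queue sizing rule of thumb above
# (my numbers, using the 1 MB/s example link): a queue holds some tens of
# milliseconds of traffic at the rate the link can drain it.
link_rate = 1_000_000   # link drain rate in bytes per second (1 MB/s)

for ms in (10, 100):
    queue_bytes = link_rate * ms // 1000
    print(f"{ms} ms of traffic at 1 MB/s -> {queue_bytes // 1000} KB of queue")

# 10 ms of traffic at 1 MB/s -> 10 KB of queue
# 100 ms of traffic at 1 MB/s -> 100 KB of queue
```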
For now, it's some small amount of time's worth of traffic. The reason why you need this queue is to absorb a burst of traffic that you're not able to immediately send. But the important principle in a packet switched network is, you need a queue, but queues are a necessary evil, because the only good thing that the queue is doing for you is absorbing the burst. The bad thing that it's doing to you is adding delay. Just because you have a queue, the network ain't going to move faster. The link's moving at the same speed whether you have a queue or not. The only thing the queue is doing is it absorbs a burst so that whenever the network link is able to send packets, you can ship packets from the queue and you don't drop too many packets. Now, if you're lucky, the size of the queue is enough to absorb all of the burst. And then the traffic eases and you get to send the rest. But if you're unlucky, the queue overflows and you drop some packets. And then the endpoints have to somehow deal with it. So what are the things we've looked at? Packet switched networks are defined by a header, which includes the destination address. The way the network works is that the sources just ship a packet with a header that includes the destination address. The switches somehow are going to figure out how to get those packets to the destination. The reason why this stuff works is statistical multiplexing. And finally, the reason we need a queue in a packet switched network is to absorb these bursts. So what I want to do in the remaining six or seven minutes is to tell you about the other metric by which we're going to evaluate our networks. The first metric I introduced already is the rate of a link. When you have links of different rates, you can also define the rate for an actual communication. When a source sends a packet to a destination, you can measure the rate at which bits are arriving at the destination. That's the throughput, or bit rate, of the data transfer. The other metric we're going to care a lot about is called the delay. The fancy term for delay is latency. I really don't know why they have two terms. But you know, from time to time, people use the word delay or latency. And by the way, I'm going to try hard to use the word rate here, or bitrate, or throughput. Often you see the word bandwidth, like, oh, my bandwidth is 10 megabits per second. And that's actually fine to use, except it's confusing in a real communication system, because we've already used the word bandwidth to refer to a range of frequencies. We've already said the bandwidth is defined in terms of, say, hertz or something like that. And it's just a little confusing to also use bandwidth for rate. So we're going to try to use words like bitrate and throughput to refer to bits per second. So delay is measured in seconds or milliseconds or microseconds. And what we want is-- you have a source that sends a packet or set of packets-- let's say a single packet to a receiver going through a network of switches. And I want to ask, if I send a packet at some point in time, let's say at time 0, when does that packet reach the receiver? That's the delay for a single packet. So I just want to explain to you how to calculate this or how to measure this. So let's say that the packet has a size of L bits. So what does the answer depend on? Let's take an even simpler example. Let's say that I have a sender. I have a receiver. I have one link between them.
No switches. And the packet has size L bits. I send a packet at time 0. When does the last bit of the packet show up at the receiver? Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Good. So I need to define this thing here. Let's say that the bitrate of this link is c bits per second. So I have l bits and I have a link that can send bits at c bits per second. Therefore, something here should be l divided by c seconds. That is, from the moment I start shipping these bits, the time delay between when the first bit arrives at the receiver and when the last bit arrives at the receiver is l divided by c seconds. Because I ship these bits, and these bits go back to back over the link. If I look at when the first bit arrives and I look at when the last bit arrives, the time difference between any two bits showing up at the receiver is 1 over c seconds, because if the link can send c bits per second, any two bits are separated by 1 over c seconds. Therefore, from the time at which the first bit arrives to the time at which the last bit arrives, that difference is l over c seconds. This l over c has a name associated with it. This is called the transmission delay. Now let's say I want to send just one bit. I send a bit at some point in time. And that bit shows up at some point in the future, because it can't show up immediately. If it did, we'd probably have to change the laws of physics, because the speed of light would no longer be a finite limit. So what is the time between when I send the first bit and when the first bit shows up here? What does that depend on? Like, let's say I want to communicate to the moon. I send one bit of information-- or even one sample. I put it out on the radio or whatever. How long before it gets to the moon? AUDIENCE: [INAUDIBLE] PROFESSOR: Depends on what? Does it depend on the rate at which I can communicate? No. What does it depend on? AUDIENCE: [INAUDIBLE]. PROFESSOR: Sorry? You guys said it. What is it? AUDIENCE: [INAUDIBLE]. PROFESSOR: Speed, and the speed of what? AUDIENCE: [INAUDIBLE]. PROFESSOR: It's the speed of light in that medium, or the speed of whatever signal you use. If it's acoustic, then it's the speed of sound over the medium. So it depends on the distance and it depends on the speed at which a signal can propagate through that communication medium-- for example, the speed of light. So the distance is d and the speed in the communication medium is, let's say, v. That thing is called the propagation delay. So let me organize this properly so I'm not confusing everybody with these different terms. So far we've hit two sources of delay. The first source of delay, which I mentioned second, is the propagation delay. This is the time it takes for the first bit to get to the other side. It depends on the speed at which a signal propagates through the medium and the distance between sender and receiver. So sound travels at one foot per millisecond, I think, roughly something like that. So if I'm doing acoustic, that dictates the propagation delay. The second delay is the transmission delay, which depends on this l over c. The third delay is whatever processing delays there are. That, for example, is when a switch gets a packet, it has to look at the packet's header, figure out the destination, do something with that.
There's some computation time that the switches have to work with. That delay is called the processing delay. This is purely some sort of computing delay. And it's usually very, very small. And the fourth delay is the queuing delay, because it could be that packets come in and they have to sit behind others in a queue. And that imposes a delay in communication. So that's called the queuing delay. And it's usually a very variable source of delay. On many networks, these other delays are constant-- not always, but generally constant. The transmission delay may or may not be constant. But usually these are more constant. The queuing delay is not a constant delay. And for the actual delays that you experience when you click on a link, there's lots of reasons why the website is slow. But these are a principal, dominant factor in many, many cases. So we'll pick up on this next week after the quiz two stuff. You deal with quiz two and p set 6. And we'll continue with multi-hop networks.
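To tie the four components together, here is a worked sketch. All the numbers are made up for illustration: a 1,500-byte packet on a 10 megabit per second link spanning 3,000 kilometers of fiber, with assumed values for the processing and queuing delays.

```python
# A worked sketch adding up the four delay components. All numbers are
# made up for illustration: a 1,500-byte packet on a 10 Mbit/s link
# spanning 3,000 km of fiber, with assumed processing and queuing delays.
l = 1500 * 8      # packet size in bits
c = 10e6          # link rate in bits per second
d = 3e6           # distance in meters (3,000 km)
v = 2e8           # propagation speed in fiber, m/s (about 2/3 the speed of light)

transmission = l / c        # l over c: 1.2 ms
propagation = d / v         # d over v: 15 ms
processing = 10e-6          # assumed: a few microseconds in the switch
queuing = 5e-3              # assumed: highly variable; say 5 ms under load

total = transmission + propagation + processing + queuing
print(f"total one-link delay: {total * 1000:.2f} ms")   # about 21.21 ms
```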
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
11_LTI_channel_and_intersymbol_interference.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HARI BALAKRISHNAN: Hi. My name is Hari Balakrishnan. I'm your replacement professor. You guys know about replacement referees? AUDIENCE: Yes. HARI BALAKRISHNAN: No. So I'm the other professor in the class. And I generally teach the material that comes a little bit later. But what we thought we'd do today is to talk about this Audiocom library. So before I get started and show you how to use this and all the different ways in which things will break-- because it's something in the real world-- let me see a show of hands. How many people have tried to install or run or do something with it? OK, how many people don't have any idea of what I'm talking about? OK. And how many people have managed to get something to work? By "work," I mean if you got the preamble to decode, it means something is working. All right, how many people have got nothing? OK. Well, it'll all work. It's just a question of figuring out how to get it to work. OK, so here's the story. In order to do something practical with all the theory that you're learning, it's important to try to implement something. Now, the theory we're learning applies to a wide range of communication channels. It applies to radio. It applies to wired links-- you know, ethernets or cables. It applies to optical links like infrared or free-space optical links. And it also applies to audio. And we've decided to use audio as the vehicle to bring these ideas into the lab for two reasons. The first and most important reason is that it turns out it's the easiest piece of hardware. It's the most convenient piece of hardware that everybody has access to. All you actually need is a computer with a microphone and a speaker. And in fact, I would imagine that in the next couple of years, it'll run on this thing too. So you could write apps on this that will make this work. Already, we have stuff working on an Android and iPhone to do much of the stuff. So the first one is just-- it's everywhere. Everybody has it. You don't have to do anything special to get it. The second is that it turns out it actually illustrates many of the problems that we're talking about-- noise, and the idea of how an understanding of linear time-invariant systems applies to communication channels, and how you use the ideas you've learned about LTI systems to improve the performance of whatever you implement. I'm going to restrict myself to this Audiocom system, but these ideas apply across the board. So let me tell you how this system does its job. And this goes back to what Professor Verghese was telling you before. Ultimately, all of the information is modulated on top of a carrier. And a carrier, as you know, is a sinusoid. So time goes this way, and this is the amplitude, which we also call the voltage. So it's a time-varying waveform. The transmitter is capable of generating these kinds of waveforms at different frequencies. So this might be, for example, 1 kilohertz or 1,000 hertz. We can go up to-- I've got mine to work up to 20 kilohertz, which is fortunate because I can't hear it. And there's a wide range of frequencies this could work in.
In the lab, I found that things tend to work between 1 kilohertz and only about 3 or 4 kilohertz. So that's the range we're talking about. So you pick a carrier waveform for your transmission. The receiver is a digital receiver. What it's capable of doing is receiving signals from the sound card at a certain number of samples per second. We're going to call that the sampling rate. We've used this term before. The sampling rate can be anything. By default in the system, we've picked the highest possible sampling rate of 48 kilohertz. In some cases, you might have to yank it down. You might have to go as low as 8 kilohertz. But 48 kilohertz seems to work in the lab. It works on a bunch of machines that I've looked at. So we have two things. We have the 1 kilohertz, which is the carrier waveform. We'll use the notation FC for that. And we can receive samples at 48 kilohertz. And that's how the transmitter works too-- when it wants to send stuff on the air, it picks a sampling rate. It's going to pick 48 kilohertz. The receiver and sender need to agree on the sampling rate. What happens then is very straightforward. If you have a carrier waveform that looks like this, what 1 kilohertz means is that we do the cycle 1,000 times a second, which means that one period of the cycle is 1/1000th of a second, or 1 millisecond. Now, if I sample this at 48,000, or 48 kilohertz, or 48,000 times a second, in one of these periods, how many samples do I get? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: This is where you have to tell me something. Sorry? AUDIENCE: 48. HARI BALAKRISHNAN: 48. Great. So I sample at 48,000 times a second. Each of these things is 1/1000th of a second, so I get 48. What that means is that we pick a sample here. We pick a sample here. We pick a sample here. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and we keep going, et cetera. So we get 48, OK? So the way the sinusoid is transmitted is by providing these samples, each sample being equal to the voltage value picked from this curve. And I send each of those samples, and each of those in our implementation is a floating-point number, OK? Like a 32-bit floating-point number. I pick a number here, 0. And then I pick something here, which is the value of the sinusoid at this point in time, and this, and this, and this, and all the way down. And I send those samples. The receiver is going to be listening at 48 kilohertz and picking up whatever it's getting on the audio channel. And that's what it's going to assume was sent. Of course, in the real world, if I send at 1 volt, what might be received is 0.01 volts, or 0.2 volts, or whatever, because signals may, in fact, lose amplitude as you transmit them over the air. So that's the overall context. I'm going to start by showing you a few things here. The first thing I'll show is this grapher program that actually is part of the package. But you need some GTK toolkits for this to work. So you need this for the lab, but this is just a useful test. This program assumes that I'm transmitting data at 1 kilohertz. If I send data at 1 kilohertz-- and this is a pure sinusoid. Let me do that. I'll explain its parameters in a bit. [TONE PLAYS] You can see that what's going on here is if I'm silent, the points get demodulated at plus 1 and minus 1, which correspond to the top and the bottom of the waveform.
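Here is a small sketch of that sampling arithmetic in code -- an illustration with numpy, not the actual Audiocom source: a 1 kilohertz carrier sampled at 48 kilohertz gives exactly 48 samples per carrier period, each one a floating-point voltage value.

```python
import numpy as np

# A sketch of the sampling arithmetic above -- an illustration, not the
# actual Audiocom source. A 1 kHz carrier sampled at 48 kHz gives exactly
# 48 samples per carrier period, each a floating-point voltage value.
fs = 48_000                       # sampling rate, Hz
fc = 1_000                        # carrier frequency, Hz
samples_per_period = fs // fc     # 48

n = np.arange(4 * samples_per_period)        # four carrier periods
carrier = np.sin(2 * np.pi * fc * n / fs)    # the samples actually sent

print(samples_per_period)    # 48
print(carrier[:4])           # the first few voltage values, starting at 0
```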
[CLAPPING] And if I make a noise, the effect of noise gets visible. And in fact, you can see that sometimes the guy on the left goes to the right, and vice versa, OK? So that's actually a very visual effect of what noise does to your transmission. I'm going to show this again so you can see what's going on. As the amount of noise increases, you can see that-- you can imagine what's going on here in the signal. And this is called bipolar signaling, where you might send bit 0 as a particular kind of sinusoid, and bit 1 the other way, as the other kind. So the idea would be that the 0's get mapped to the left. The 1's get mapped to the right. And as we increase the amount of noise, you can start to see that they cross over, and we'd be making errors in our estimation. [TONE PLAYS] So things are working fine now. But as I [CLAPS] make noise, or as you start talking and you create additional noise, we're going to start to see the effects of the noise. And this is annoying, so I'm going to turn it off. OK? So that's the first order way in which you determine that things are working. You don't need to use the graphing program to do that. And this is actually annoying to the eyes, so let me turn that off too. So you run, and the documentation we've given you tells you how you go about step by step trying to debug this. So let me tell you what these options mean. It's all written up, so you don't have to take notes right now. Everything is there. In that command line, the dash little s 256 refers to the number of samples per bit. I'll get to that point in a second. The dash capital S is about the kind of source-- you know, it's something about the nature of the source. Capital S 1 means that my entire information is sent as 1's, which means that I'm sending a pure sinusoidal carrier. All I'm doing is sending the sinusoidal carrier. Dash g just means, show me some graphs. And n refers to the number of bits I wish to send, OK? So these are all documented, and you don't have to worry about it. So what we're going to do now is actually to show you what happens when we transmit some data. This is actually the very first task in the lab. You transmit some data, and you plot what the noise histogram looks like. [TONE PLAYS] So what we did was we sent a couple hundred bits at some appropriate number of samples per bit. You know, the way you tell if stuff's working is that line there. If it received the preamble-- and there's this long preamble. I'll explain what that word means. But if you receive the preamble, it means that stuff's working. If you didn't get the preamble, it doesn't mean that things are completely broken. It just means you've got to do a little bit more work. We also plot out the signal-to-noise ratio of the transmission. And that shows up there. If that signal-to-noise ratio is something like 20 dB or 15 dB, things are fine. This system doesn't really work below 15 or 10 decibels. It means that the signal-to-noise ratio is too low, which means you either have to yank up the volume, or you have to go to a quieter location. Or-- and I'll explain this-- you have to change some of the parameters in the program. Now, what does this graph look like? We'll ignore this for a second. For this number of samples, that was the normal distribution. At 0.44, you could look at it, and it sort of looks Gaussian. It's not quite a Gaussian. I'll explain why in a little bit.
What this picture shows is what the received samples look like. And it has a mean value that's in the center. And then it's kind of got a shape on both sides. The picture in the middle is the samples post-demodulation. It's actually what you received at the receiver after we ran the demodulation step. And the stuff on top-- the blue refers to what was transmitted, which in this case was a pure carrier. The green shows what was received, which is some noisy version of the carrier, because we didn't send information other than just the carrier. This is effectively the first lab task. You have to do nothing more than run this a few times. The first two tasks in the lab are just making sure that the stuff works. Now, this entire system-- you know, everything works if this thing called the preamble is decoded. So what is this preamble? And why do we need it? Well, the problem is that, as you saw in this graphing receiver when I showed this to you, the audio hardware is always listening. When you run the program, it's always listening on the channel. So it's getting data. Even when I'm not sending anything, it's getting something. There's always something on the audio channel. So the question is, how does the receiver know that what it's receiving is part of a legitimate transmission? All communication systems need to solve this problem of synchronizing between the sender and receiver, and that's done using the preamble. The preamble is nothing more than a well-known sequence of bits that the sender and receiver both agree on. So in this case, our preamble is that long sequence there-- 1 0 1 1 0 1 1 1, et cetera. And there are some guidelines and rules of thumb that go into what makes a preamble good and what makes it not so good. We're not going to worry about that here. I'm just telling you that's the preamble. So as long as the receiver is successfully able to decode the preamble, it knows that there's a legitimate transmission over the air, and it can start listening to it. So if you don't get the preamble, all bets are off as to what the heck's going on on the channel. And sometimes, you may not get the preamble. Now, when you don't get a preamble, usually it's a sign that some samples are being lost because the audio hardware isn't able to cope-- and the documentation describes how you deal with that problem. Or it means that the sampling rate is too high, and maybe something is running on your computer-- I don't know, you have some video going on in the background-- and it's not keeping up reading at 48,000 hertz. Or it's a sign that the signal-to-noise ratio is too low. Maybe you're in a very crowded location, or maybe the volume is too low. So those are usually the ways in which you go about fixing this problem with the preamble. Now I want to show you one more noise graph. We're going to try to do this with some music on the side. What I want to show you is that as you increase the amount of noise, the noise is captured in the variance of the Gaussian distribution. The more the noise, the bigger the variance in the distribution. So I'm going to try to do this by sending 1,000 bits. So this will take a little bit of time. [TONE PLAYS] So we'll send this without a huge amount of noise.
I mean, there's some ambient noise in the room. And this will take a little bit of time to demodulate and get working. There's a couple of errors. The bit error rate is 0.002, so we had an error here. And we saw something like this. It isn't quite a Gaussian in this case. But we saw something like that. Now, what I'd like you to do is-- when I say yes, just start making some noise. Just clap, or just talk, or whatever. Just turn on your phones. Do something. [CHUCKLING] [TONE PLAYS] Yeah. [CHEERING AND APPLAUSE] All right, let's see what-- All right, we're done. All right. [LAUGHTER] AUDIENCE: I had to. [CHUCKLING] HARI BALAKRISHNAN: I was actually going to play-- I found some nice YouTube clips of a music group, one of the MIT a cappella music groups. OK, we really got toasted here because you guys started talking, and we couldn't recover the preamble. But we've probably got-- AUDIENCE: [LAUGHTER] HARI BALAKRISHNAN: That was the noise distribution that we saw. And you can see that it's actually kind of a [INAUDIBLE] distribution. I'll show you what this picture is. This is a picture that essentially will become your best friend, or maybe your worst enemy. This is called an eye diagram. And it looks completely messed up. So I'm going to show you what that is. And we're going to talk about it. So let me explain to you-- but this noise was too high. And the point here is that that's shown in this picture here, where we ended up with enough of a variation that we couldn't distinguish between 0's and 1's. We sent something which had a little bit of 0's in the beginning. And then the entire 1 distribution was spread from 0.1 volts to 0.9 volts. So the variance is extremely high. Now let's do one thing. I'm going to change this to just transmit random pieces of information here. When I don't give anything, it means that the data that's being sent is just a random sequence of 0's and 1's. Let me change this to 200 bits. [TONE PLAYS] All right, so we got those bits through. And that's a beautiful eye diagram. I'll explain what this means. But the point is that you see the separation, and you see a point in the middle here with a big gap on this eye diagram graph. And then you see a separation between-- these were the 0's, and those were the 1's-- which means you could threshold somewhere in the middle and separate out which bits were 0's and which bits were 1's. It means you're in business, OK? So you can see that there's a distribution here of what the 1's looked like in the empirical data. There's a distribution here of what the 0's look like. This is not going to be a Gaussian at all. I'll explain why that is in a moment, OK? So I have to do two more things today, and then I'll turn it over to Professor Verghese. The first one I want to explain to you is what this eye diagram is and why it's kind of useful. And why, as you add more noise into the system, the combination of something called intersymbol interference and noise gets in the way of decoding the bits. Now, on this channel, there are two things that distort the quality of communication. The first is noise, and you kind of saw the effect of that. The second is this thing that we've been studying by modeling it as a linear time-invariant system. The idea is that when you have a sequence of 0's on the input, and then you go into a sequence of 1 samples, that sudden sharp input transition does not immediately get captured at the receiver. It takes time for the 0 to settle into a 1 and a 1 to settle into a 0.
And you can see that in this picture here. Now let's focus in on this picture and look carefully at this place here. Oops. All right, let's do this again. I learned about this, like, 15 minutes ago. So bear with me. All right, so there were 0's at the bottom, and then we bumped up to 1 on the input. The input bumped from 0 to a 1. This is after we demodulate it. So if things worked perfectly, you would see at the output the 0 immediately goes to a 1. But what do you actually see? You see that it takes a while for the 1 to settle down, right? It goes from 0 to 1 sharply on the input, but it takes a while to go from 0 to 1. And then I go up like that. And then I want to go from 1 to 0. And you can see that as it comes down to 0, it takes a while to settle down. Do you guys all understand why that happens? Like, I don't mean the physics of why it happens. But what I mean is how that idea relates to this idea of the unit step response and the unit sample response. This is nothing more than the unit step response of this channel, right? I have a 0, and I've assumed I'm on 0 for a while. I bump up to a 1, and it takes a while for it to go from 0 to 1. And similarly, it takes a while for it to go from a 1 back to a 0. Now, suppose I end up switching 0, 1, 1, 0, 0, 1, 0, 1, 0, 1 very, very quickly. And I don't give enough time for the whole thing to settle down. In other words, as I go from a 0 to a 1, before it settles down into 1, the next bit is a 0, and I start coming down. And before the next bit settles down to 0, I get to a 1. And before it comes back to 0-- I keep doing that. What I'm going to end up with is this combination of 0's and 1's that sort of randomly start confusing the receiver, because we're not giving enough time for the 0 to settle down. And similarly, when we go from 0 to 1, we're not giving enough time for the 1 to settle down. That's what this eye diagram was referring to, where we found in this confused case-- and then when you have noise, some of the 0's get moved to 1 anyway, and 1's get moved to 0. And we end up with this very crowded picture. The way you tackle this problem has to do with the number of samples that you decide to use for 1 bit. So coming back to this picture, if I do this at 48,000 samples per second, in any one of these carrier periods, I have 48 samples. But now I get to decide how many samples corresponds to 1 bit of information. When I have a bit that I want to transmit-- let's say a 1-- how many samples? What that effectively means is, to transmit a 1, how many of these periods of this waveform do I want to use? So for example, if I decide I want to send a 1 as three periods of the carrier waveform, then in any one of these periods, I have 48 samples. Therefore, I represent a 1 as 48 times 3, which is 144 samples per bit. Now, to transmit a 0, I could do a bunch of different things. I could decide to keep the channel silent. If I do that, it's called on-off keying. And that's what we're using here. So if we decide to use 144 samples per bit, it means we represent a 1 as 144 samples, which corresponds to three of these periods. And then we represent 0 as nothing. So we don't send anything. This is called on-off keying because we send "on" for 1, where we send a sinusoid. And for sending a 0 bit, we send nothing.
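Here is a minimal on-off keying modulator in the spirit of that description -- an illustration, not the actual Audiocom transmitter, which differs in its details. It assumes the numbers from the lecture: a 1 kilohertz carrier, 48 kilohertz sampling, and 3 carrier periods per bit.

```python
import numpy as np

# A minimal on-off keying modulator in the spirit of the description above
# (an illustration; the real Audiocom transmitter differs in details).
# Assumed numbers from the lecture: 1 kHz carrier, 48 kHz sampling, and
# 3 carrier periods per bit, so 144 samples per bit.
fs, fc = 48_000, 1_000
samples_per_bit = 3 * (fs // fc)    # 144

def ook_modulate(bits):
    n = np.arange(samples_per_bit)
    on = np.sin(2 * np.pi * fc * n / fs)    # a 1: three periods of carrier
    off = np.zeros(samples_per_bit)         # a 0: silence
    return np.concatenate([on if b else off for b in bits])

samples = ook_modulate([1, 0, 1, 1, 0])
print(len(samples))    # 5 bits x 144 samples per bit = 720
```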
Now, if the number of samples I select per bit is too small, you end up with this effect where I don't give enough time for the signal to settle down. When I make the transition from a 0 to a 1, if I pick too few samples per bit, before we settle into a 1, the next bit may show up as a 0. And then before we settle down into a 0, the next bit shows up, perhaps as a 1. And we end up commingling the 0's and the 1's. And we're not able at the receiver to tell the difference between these different voltage samples after we demodulate. So we don't know what happened. The eye diagram is a way to capture that. What the eye diagram does is-- it's kind of a clever hack. What it does is-- thought I had it somewhere. There it is. The way you generate an eye diagram-- you don't have to worry about writing the software for it. But you will look at a lot of these kinds of pictures in lab 5. You transmit a random sequence of bits. And then you look at the samples that were received at the receiver after demodulation was done. The output of the demodulation is a set of voltage values. What you do is you look at three bit periods. In other words, you look at a sequence of time, a number of samples, that corresponds to three bit periods. So for example, if I pick 144 samples per bit, I take 144 and multiply that by 3 bit periods. And I look at that many samples. So for every three bit periods, I take all the samples, and I plot them, OK? So a particular sequence of bits-- in this case, it might be a 0 and a 1 and a 0. This is a 0 followed by a 1 followed by a 0. A different sequence of bits could be a 1 followed by 1 followed by 1. A different sequence of bits could be a 0 followed by 0 followed by 0. So there are eight combinations of sequences of three bits each. In a clean eye diagram like this, you're going to see a variety of different lines corresponding to all the places where different 3-bit sequences appeared in your input. So for any given 3-bit sequence of the input, there's a sample sequence at the output whose length equals the samples per bit multiplied by 3. And each of those generates one of those trajectories through this picture. If you generate that picture, and you find that there's a very clean gap between all possible combinations of these bit sequences, as in this case here, then it means that you're very likely to be able to decode. Because what you can do is, essentially, the receiver can decide that when 1's happen, it corresponds to something over here at 0.35 or 0.4 volts. When a 0 happens, it may not be exactly 0, but it may correspond to something like 0.1 volts here, which means I can pick the middle point and slice at that middle point to determine whether the received bits were 0 or 1. Now, if we were to run this experiment again and make a lot of noise-- let's try that-- what would happen is we may not be able to decode at all. So I'm going to request you guys to make a little bit of noise after I start. And then we'll see how it goes. All right, start. [WHISTLING] [TONE PLAYS] [APPLAUSE] AUDIENCE: [SCREAMS] HARI BALAKRISHNAN: You know what? It wasn't loud enough. It worked great. But let's look at what the eye diagram looks like. Well, it's a little worse than the other one, but I think the energy is down in this room. So let's do it one more time. I want to be in a position where nothing decodes.
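The eye diagram construction just described amounts to only a few lines of code. Here is a sketch of the idea -- assumed details, not the lab's actual plotting code: slice the demodulated voltage stream into windows of three bit periods and overlay every window on the same axes.

```python
import matplotlib.pyplot as plt

# A sketch of the eye diagram construction just described (assumed details,
# not the lab's plotting code): slice the demodulated voltage stream into
# windows of three bit periods and overlay every window on the same axes.
def eye_diagram(demodulated, samples_per_bit):
    window = 3 * samples_per_bit                  # three bit periods
    count = len(demodulated) // window
    for i in range(count):
        plt.plot(demodulated[i * window:(i + 1) * window],
                 color="steelblue", alpha=0.3)    # one trajectory per window
    plt.xlabel("sample within 3-bit window")
    plt.ylabel("demodulated voltage")
    plt.show()

# Usage (hypothetical): eye_diagram(received_voltages, samples_per_bit=144)
```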
[TONE PLAYS] [SCREAMING AND APPLAUSE] Well-- in fact, my preamble decoded, which is amazing. [INTERPOSING VOICES] HARI BALAKRISHNAN: And that's what it looked like. OK, this is an eye diagram even a mother wouldn't like. All right, so I'm going to stop here. Before I turn it over to Professor Verghese, are there any questions? Anything at all. I know some of you have had issues. And I met one or two of you this morning to fix your computers. And I'm happy to do that. I'll be in the lab from 4 o'clock. We'll get it working on your computers. If it doesn't work, then you're going to have to use the lab machines. Let's take some questions or comments. Do people have any questions about this stuff? Do people get an idea of what's going on and what you need to do? Once you get this working, the labs sort of-- they'll write themselves. You just have to do a little bit of work. Questions? This can go up to higher frequencies too. So if the sound's annoying your ears, you can actually get it to work. I know one of you guys is trying to get this to work with ultrasonic reception. That's challenging, but we can probably help you out. Yeah. AUDIENCE: What do we get for the uniform noise since [INAUDIBLE]? HARI BALAKRISHNAN: Uniform noise? AUDIENCE: Yeah. So if we just have another tone on the other side, how do we get to [INAUDIBLE]? HARI BALAKRISHNAN: Yeah, you mean if you were to make another transmission at exactly the same frequency? The beauty of this is that there are different demodulation schemes. Right now, we're using something called envelope demodulation, which Professor Verghese talked about before. All that's doing is taking the absolute value of every received sample and then running a simple averaging filter. And that's what you write in the lab. It's a very, very simple demodulation. We'll study something called quadrature demodulation, probably next lecture or the one after that. That has the property that if you transmit at a certain frequency, and somebody else interferes and transmits at a different carrier frequency, you're still going to be able to recover your transmission. But if the other transmitter has significant signal strength in the same frequencies that you're transmitting in, then he's going to start to look like noise to you, and that's not going to work. So what happened here is, when you guys were all whistling and clapping and so on, there were signals generated across all frequencies, including the frequency at which I was transmitting. And that's what caused the signal to have noise. It's not like you were all transmitting at 1,000 hertz. It just so happens that that combination of noise you were making had a component at 1,000 hertz. Does that answer your question? AUDIENCE: Yeah. HARI BALAKRISHNAN: Any other questions, comments, remarks? OK. GEORGE VERGHESE: Yeah, we've been talking about modeling the baseband channel. And we've said we'll focus on LTI channels. So I just wanted to take advantage of this being up here to have you think about what this tells you about how close to linear this channel is, and how close to time-invariant it is. What do you think? What we're seeing here is, for instance, a step response to something that's at 0 and then goes up to 1 and stays at 1. We're seeing a superposition of many such 0-to-1 transitions, some 1-to-0 transitions, later 1-to-0 transitions, and so on. So we're really looking at a superposition of step responses staggered in time and going from 0 to 1 or 1 to 0.
Do you think time invariance is maybe a good assumption for this channel? Plausible, right? Because the stuff that we get here looks a lot like the stuff we're getting here. The deviations might be noise, but the more structured parts of the waveform here match the structured part of the waveform here. And what you're really seeing is the loudspeaker here reverberating through the room. You're all staying fixed, so the echoes are from fixed locations. The walls are fixed. And so what we're seeing is the step response of the room, in effect. If you hit the room a little bit later, well, you get the same response, but a little later. Does this look like it's very linear? Would linearity be a good assumption for this channel? I mean, this is very partial information, but does it give you enough to judge? So why do you think linearity might be good here? Were you saying, yes, it's a good assumption? AUDIENCE: Yeah, because time is a pattern. GEORGE VERGHESE: Because what? AUDIENCE: Like, if you have a signal go five seconds from a different signal, it'll still appear in the center. GEORGE VERGHESE: But that's the time-invariance argument that you're giving me. What's the linearity argument? Yeah. AUDIENCE: When you scale the input, the output gets scaled accordingly. GEORGE VERGHESE: OK, so when you scale it-- so we're not really seeing too much. Well, we are seeing scaling. What kind of scaling are you seeing here? We have 0-to-1 transitions, but we also have 1-to-0 kinds of transitions, right? What would you want to see for a linear channel for how the 1-to-0 transition behaves relative to the 0-to-1 transition? What would you expect on a linear channel? Yeah. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: So from superposition, you would really expect to see the same shape on the downslope that you see on the upslope, right? So when you look at the upward transition here, you see a certain shape. Well, it's not quite matched on the downward slope here. And the reason is that the particular simple demodulation that we're using here for this on-off scheme actually makes things not look very linear for channels that have this kind of an overshoot to them. But the other scheme that Hari mentioned, the quadrature demodulation, will actually do a lot better. We, in fact, saw that, right? We saw that there was a demodulation that essentially pulled out the absolute value of what you were sending. And then there was another demodulation scheme that actually pulled out the signal itself. And so if you're doing things like taking absolute values somewhere in the middle there, then you're going to start losing the ability to model it as linear. OK, so linear time-invariant models are not necessarily good for all channels. It's something that you've got to look for. There are good reasons to try and structure a channel so that it's close to linear and time-invariant, because then you can do a lot of analysis and design for it. OK, so I want to continue talking about our models for LTI channels. We're still in the time domain; next time, we'll start to look at this in the frequency domain. Hari mentioned frequency several times here. So we're talking about an LTI channel, Linear and Time-Invariant. We talked about characterizing it by its unit sample response. And so let's see. Unit sample response means put in a unit sample function. You get out an output that you're going to call the unit sample response. Put in an input x(n). That is a summation of such things.
Let's say x(k) delta of n minus k summed over all k, right? That's the general input represented as a weighted combination of unit samples. Well, what comes out in that case? What is y of n? If we're talking about a linear time-invariant system, then it's the same weighted combination of the responses to these unit samples. So we're going to get summation x(k) h of n minus k, right? So this is the convolution expression that we talked about last time. So what I want to do in the rest of the lecture is give you some other ways to think about this. Our notation for this was y of n equals x convolved with h evaluated at time n, right? Another way to think about this-- let's see. Let me actually first do an example and then give you another way to think about this. Suppose I have a system whose effect is to multiply the input by A, some number A, and delay by some number D. And I tell you that this is LTI. Actually, I don't have to tell you that it's LTI. You can prove that it's LTI. If I tell you I have a system whose only action on the input is to delay the input by capital D and scale the input by capital A, you can actually prove that it satisfies time-invariance and that you can superimpose, OK? So this is LTI. So what's the unit sample response? If I put in the unit sample function at the input, what's the output? AUDIENCE: A delta n. GEORGE VERGHESE: Anyone? AUDIENCE: A delta n minus D. GEORGE VERGHESE: Yeah, A delta of n minus D, right? And if I put some general input function in here, what's the output? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Without telling you about convolution. Somebody-- I heard a voice here. Yeah. AUDIENCE: A of s times [INAUDIBLE] D. GEORGE VERGHESE: Right. We didn't have to do any convolution to figure this out, right, because I described the system to you in a simple way. You could tell me what the output is. All right, so I want to give you another way to think about a system whose unit sample response is h of n. So what does that mean? That means that when I put in a unit sample, at time 0 I get some h0, at time 1 I get some h1, at time 2 I get some h2, and so on. This is h of n, OK? So if I give you a system and tell you that the unit sample response is this function, h, here's a way to think of what it is. Inside there, here's what my system can be thought of as doing. Inside here, I've got many parallel paths. I've got a system that scales by h0 and delays by 0, in parallel with a system that scales by-- let me put it up here-- scales by h1, delays by 1, and so on. OK, so all of these in parallel-- each one is very simple. Each one is as simple as the example I showed you. OK, I've got a whole bunch of these parallel systems, each one as simple as this. If I put a unit sample in here, what comes out? It's going to be exactly that, right, because the unit sample will get scaled by h0, delayed by nothing, and come out there. Then the unit sample will get through this path scaled by h1, delayed by 1, and come out there. When you assemble all of these with this summer here, you're going to get exactly that response. So here's another way to think about what's sitting inside an LTI system whose unit sample response is given to be that, OK? So if I put in x of n, what is it that's going to come out? Well, through this path, I get x(n) scaled by h0 and delayed by nothing. Through this path, I get x(n) scaled by h1 and delayed by 1, and so on.
So what comes out? y of n is equal to the sum over all m of h(m) x of n minus m, OK? So I take x(n), shift it by nothing, scale it by h0. That's one of these terms-- the term corresponding to m equals 0. I take x(n), scale it by h1, delay it by 1. That gives me the term with m equals 1. So here's another way to write it, OK? Last time, I said that actually you can write convolution in this form or with the operation reversed. So it actually doesn't matter which order you write things in. This would be something you might write as h convolved with x at time n. So two different ways of writing the convolution and two different ways of thinking about how the output gets represented that way. You can easily get from one representation to the other by just making a change of variables. Like, let n minus k equal m, and you get from this representation to the other. This is a more mechanistic way of thinking about why these two representations work. OK, so with that as a given, let me show you how to actually carry out these operations graphically. In either form, here is a simple graphical way to think about what's going on. So to determine y at time n, the output at time n, this is the operation that I have to carry out, all right? I've got to find a way to implement this operation. So how are we going to think of this graphically? I want to sketch the signal x. I want to sketch the signal h and then do something with these two signals to construct this, OK? So when I plot x, and I plot h to implement this operation, what's the name on my time axis? Is it m that I'm going to stick here, or k, or what? I want to draw these two time functions as functions of k, right? And n is just a number that I'm specifying. OK, so on the k-axis, I'm going to take x and plot it. So x is some time function. Here's my x. I won't label them all because that would get crowded. And just to keep things clean, let's change colors here. How am I going to plot h of n minus k? Well, let's start thinking about the n equals 0 case. So for n equals 0, I've got to plot h of minus k. So how does h of minus k relate to the unit sample response h of k or h of n? If I tell you that I have a system-- we had an example up here, didn't we? I lost it. If I tell you that I have a system with this unit sample response-- let's do something simple. Let's say that this is 1/2 to the n times u of n. So for positive time, it's kind of a decaying geometric series. And for negative time, it's 0, OK? So that's an example of a unit sample response of a system. So if that's h of n, what does h of minus k look like? Yeah, anyone? AUDIENCE: Reflection. GEORGE VERGHESE: Sorry? AUDIENCE: Reflection [INAUDIBLE]. GEORGE VERGHESE: It's the reflection, OK? So if I want, for n equals 0, to plot just the h of minus k that I need here, it's that reversed and plotted. OK, so this is h of minus k here. Now, what about h of n minus k? Suppose I had n equals 3 now. How do I get h of 3 minus k? So I want to get h of 3 minus k. So that corresponds to sliding this over. Do I slide it to the right by 3, or to the left by 3? You can tell by looking at the argument here. Whatever used to happen at 0 has now got to happen at k equals 3. So that means a rightward shift. OK, so you flip this over, and then you slide it by n steps, OK? So you take the h of k, flip it around to get h of minus k, slide it by n steps. So if n is positive, you're sliding it to the right.
If n is negative, you're sliding it to the left. So now you're on a single figure. You've managed to plot these two. What's the remaining operation? What you've got to do is the point-by-point product of these two waveforms and sum over the entire time axis. It's like taking a dot product, right? So you're going to take this value of the red curve-- well, actually, let me slide it over. Let's do this case. This is the slid-over case, right? You're going to take every one of the purple values multiplied by the white and sum over the entire time axis. That's an implementation of this. So in recitation tomorrow, you'll get practice on this. But what you want to think of, sort of the mantra for graphical implementation of convolution is you've got to do the flip of one of the time functions, slide by n, and then the product, OK? So you slide it to get one particular value of n. If you want the next value of n, you slide it 1 over and go through this whole thing. So it's flipping, sliding by the right number of spots, doing the inner product. That gives you one value of the answer. And then you repeat. All right, you'll get more practice in recitation.
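That flip-slide-multiply-sum recipe is short enough to write out directly. Here is a sketch for finite-length signals that start at n = 0 -- an illustration, not course-provided code -- checked against the scale-by-A, delay-by-D example from the lecture.

```python
# A direct implementation of the flip-slide-multiply-sum recipe -- a sketch
# for finite-length signals starting at n = 0, not course-provided code.
def convolve(x, h):
    """y[n] = sum over m of h[m] * x[n - m]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for m in range(len(h)):
            if 0 <= n - m < len(x):           # flip h, slide by n...
                y[n] += h[m] * x[n - m]       # ...multiply point by point, sum
    return y

# Check against the scale-by-A, delay-by-D example: h[n] = A * delta[n - D]
A, D = 2.0, 3
h = [0.0] * D + [A]
x = [1.0, 2.0, 3.0]
print(convolve(x, h))    # [0.0, 0.0, 0.0, 2.0, 4.0, 6.0]: x scaled and delayed
```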
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
12_Filters_and_composition.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we're going to continue talking about LTI systems and how to work with them. We're thinking of an LTI model for a channel, something with input xn, output yn. And we've seen that the output can be obtained from the input through this convolution operation, right? So for instance, this was one way to write it. And the shorthand notation here was h star x, evaluated at time n. The m here, in general, goes from minus infinity to plus infinity. In general, if you put a unit sample function at the input, the response can extend from minus infinity to infinity if you've got a non-causal system. If you've got a causal system and you put a unit sample in, then the response starts from 0 and goes on into the future. But it's often useful to be able to represent and analyze non-causal systems. I mean, if you have all the data stored in your computer, then you can look forward and back from the time that you're at. And therefore, you can do things that can be looked at as non-causal or analyzed as non-causal. Now, there's one issue that we've kind of swept under the rug, which is you can write nice looking formulas, but do they mean anything? So we went through a plausible derivation, but if you end up with a summation from minus infinity to infinity, then you've got to ask yourself, when does this make sense? Because you know that adding an infinite number of things can cause problems. So you need conditions for all of this to make sense. For instance, if h was 1 for all time and x was 1 for all time, there's no way this is going to make sense. Because it's going to blow up at every value of n, right? So you clearly need conditions. So what we're looking for is conditions for convolution to be well behaved. And I'm going to give you one important condition. Well, actually, let me give you two. So here's one. Suppose we have a causal system and an input that starts at time 0. When I say starts at time 0, I mean that for all prior times the input is 0. So we've got an input that is 0 for all negative times. And then, at n equals 0, we start to get some action, OK? So let's see, what happens to this infinite sum in this particular case? If I have a causal system, that means that h of n is 0 for n less than 0, right? That's what you should think of right away when you're told a system is causal. The unit sample response can only extend from 0 onwards. So if the unit sample response is 0 for negative time, and the input starts at time 0, what simplifications can you make to that convolution representation? Somebody? Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: 0 to infinity does it? OK, so the answer was that instead of starting at minus infinity, you only need to start the summation at m equals 0. Because the h of m is going to be 0 for all negative values of m. So you can start that summation at 0. And then, where can the summation end? AUDIENCE: n? PROFESSOR: n? Yeah, because this x here is 0 for negative values of the argument. Negative values of the argument are values of m greater than n. So this only needs to extend over a finite region. Well, if you have a finite sum, then you're happy. Nothing's going to blow up. So this is certainly one case where everything works fine.
There's another case which I'll describe, which is where your input is bounded. And what we mean by that is that your xn has an absolute value that's less than or equal to some maximum value-- some finite maximum value for all time. So no matter what n you pick, you're always within this interval, OK? So your input is, here's your n, here's your plus M, here's your minus M. And your input is constrained for all time to just lie between these limits. So that's a bounded input. And here's the other part of the condition. Absolute value of hm summed over all m is finite. We say that h is absolutely summable-- absolutely summable. So what does that do for us? Well, it actually allows us to bound the outputs. So now, it turns out y of n, that's the absolute value of the output-- well, that's less than or equal to the absolute value of this convolution expression, which is in turn less than-- well, it's equal to that, right? It's equal to that, but it's less than or equal to this. Absolute value of a sum is less than or equal to the sum of the absolute values. And that's less than or equal to capital M-- this is the max value that we allowed up there-- times the summation of the absolute value of hm. So this whole thing is finite. So basically, you can bound the output. You can guarantee that the output is bounded provided these two conditions are satisfied. So this is actually a very important pair of conditions. It's one that we encounter all the time in practice. And because of this result, when an LTI system satisfies this absolute summability condition, we say that the LTI system is bounded input, bounded output stable-- bounded input, bounded output stable. And I've run out of space for my stable there. So if somebody tells you they have a bounded input, bounded output stable system, if we're talking about an LTI system, what they mean is that that condition is satisfied. So that's an important condition. OK, I have all that on a slide. But if I race it past you on a slide, then it's hard to track. But this is something that we want to do. So for instance, if I had an LTI system whose unit sample response was the following-- let's say it's 0 for all times up to and including time 0, and then it takes the value 1 over n from then on, OK? So let's see, one way to write this is 1 over n u of n minus 1. If I tell you that the h of n is this, that automatically takes care of zeroing out everything from 0 backwards, and then putting in 1 over n from then on. So is that a BIBO stable system-- bounded input, bounded output stable? Yes? How many think yes? You'd like to think it's stable, because the unit sample response is decaying. But actually, it doesn't satisfy the absolute summability condition. The sum of 1 over n from 1 to infinity is-- it actually blows up. It doesn't converge. If it falls off any faster than that, then you're in good shape. But this is actually bad. If you had something that was 1 over n squared-- OK, so this is not BIBO stable. But if you had 1 over n squared u of n minus 1, it is BIBO stable. If you had, let's say, 1/3 to the n u of n minus-- well, we'll say u of n, that's BIBO stable. So this falls off as 1 over n squared. That's fast enough for the sum to converge. This falls off exponentially. It's a geometric series. It's a discrete time exponential. So that's fast enough. So that's also BIBO stable. All right, so this is time domain. We know how to analyze any LTI system with this.
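A rough numerical illustration of the summability distinction just drawn (partial sums over the first million terms only, so this suggests convergence or divergence rather than proving it):

```python
import numpy as np

n = np.arange(1, 10**6)
print(np.sum(1.0 / n))        # about 14.4 and still growing: diverges like log N
print(np.sum(1.0 / n**2))     # about 1.645, near pi^2 / 6: absolutely summable
print(np.sum((1.0 / 3)**n))   # 0.5 in the limit: a geometric series converges
```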
You tell me what the unit sample response is and I can tell you what the output is for any given input. But this would be a nightmare if we had to do design with this, because convolution is not-- it's simple enough to implement for a particular case, but it's not a simple operation to think in terms of. The reason is that the output at any one time is obtained by scrambling all the inputs for all time, combining them in this weighted linear fashion. And then, if you move to the next time step, you're again scrambling all the inputs, but with the weights shifted a little bit, so you've got to start from scratch again. So it's very hard to know what you can say in general using the time domain. It's not clear at all how you'll do design, for instance, of filters to filter out noise and so on. So this is actually-- it's important. It's a full characterization of LTI systems. But if we had to stop there, it's a fair bet that we wouldn't be anywhere near where we are for engineered systems, certainly not in digital communication. So the key thing is actually to start thinking in terms of frequencies. So we're going to look-- we're going to spend the next several lectures looking at the frequency domain. And so what is the frequency domain? Well, we're going to be focusing, essentially, on sinusoidal inputs and inputs that are related to them. But let me actually just start back with something simpler. Here's my question. And I hope the answer isn't already on the slides. Maybe it is. Is it true that if my input was periodic-- so for instance, if I had an input that, let's say, did this, it ramped up over maybe four time steps, and then started again, ramping up over four time steps, and continued that indefinitely. So here's my x of n. It's an x of n that has some basic period that then repeats periodically. So it satisfies that condition xn equals xn plus capital P for some number capital P that's the period. What is capital P in this case? 4? OK. So every 4, this repeats. Is it true that if I had a periodic input to an LTI system that I'll get a periodic output? Any reason I should expect that? Yeah? AUDIENCE: Depends on the system. PROFESSOR: Could you speak up a little? AUDIENCE: It depends on the system. PROFESSOR: It depends on the system. I'm telling you it's LTI but nothing more. So it depends on which particular LTI system? That would be my intuition, too. Yeah? AUDIENCE: Just like in the piece that it can average over a long enough time sample, to where it would be constant. So I'm assuming it's constant. PROFESSOR: Well, but that's an average. That's one number. What I'm asking you is could this output as a function of time also be periodic? Can you guarantee it? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: I'm not exactly understanding your prescription. So are you telling me how to prove that yn equals yn plus something for all n? AUDIENCE: If we choose enough-- if we take enough x's can we make y constant? PROFESSOR: According to this-- well, I erased the convolution expression. But the convolution expression requires us to take x's from minus infinity to plus infinity. In general, the output at time n depends on all the inputs. So it's not clear how you can block things off. You had another idea? AUDIENCE: Well, because it's time invariant, [INAUDIBLE] PROFESSOR: OK, good. This is a time invariant system, right? So if I shifted the input by capital P, the output should also get shifted by capital P. But shifting the input by capital P gives me the same input back again.
So the response that I get for that shifted input must be the old response that I had, which tells me that the output also has the property that if I shifted by capital P, I'd get the same thing again. So this is guaranteed to be period capital P. Now, actually there's a little twist to that. Because we usually think of the period as being the smallest interval for which you can repeat. It's conceivable that this output has a smaller interval, and therefore that the period is some integer fraction of this capital P. But this is the general idea. So you see that just knowing that the system is LTI, you can already tell a lot about what the response will look like. One of our favorite periodic inputs-- well, if I asked you to tell me what your favorite periodic input is, what might it be? Any-- sorry? AUDIENCE: Constant input? PROFESSOR: Constant is good. That's periodic. A little less trivial than constant? AUDIENCE: [INAUDIBLE] PROFESSOR: Sinusoid, right? So here's the nice thing about sinusoids. It turns out, for an LTI system, if I put in a sinusoid, not only is the output periodic with the same period. It's also a sinusoid. So that's an even greater restriction here. You see, in this particular case, I have an input that's periodic. I'm guaranteed the output is periodic with the same period. But the actual shape of the waveform can be all messed up relative to this. It may have no obvious visual relationship to this. But if you have a sinusoidal input, then it turns out that more is true. So it turns out, if you put a sinusoid in, what you get out is a sinusoid of the same frequency. What might change is the amplitude of the sinusoid and the phase angle on the sinusoid. But it'll be the same frequency sinusoid that comes out. So that's a fairly dramatic restriction. And that's actually key to frequency domain methods. What it means is we can focus on what an LTI system does one frequency at a time. I'll look to see how it behaves when I excite it with a particular value of this big omega here-- that's the frequency-- look at the response. I know the response will be exactly at that frequency. So all I have to capture is how much did this input get scaled by-- in other words, how much did the amplitude change by-- and how much did the phase get changed by? I just need to know the magnitude and phase transformation of the cosine at each frequency. If I know that, I know everything about the system. And it decouples my design. It allows me to think frequency by frequency when I design with an LTI system. So this is actually a great simplification. One other remark I've made here, by the way: it's certainly the case in continuous time, if I gave you x of t equals cosine, let's say, omega 0 t plus theta, this is always periodic. This is periodic with period 2 pi over little omega 0. Because any time I increase t by an integer multiple of this, I'm going to get an integer multiple of 2 pi added into the argument of the cosine. So I'll be back where I started, right? So this is always periodic. But with a discrete time sequence, you actually have to be a little more careful. We can think of some particular discrete time sequence. We refer to the omega in the continuous time case as the angular frequency in radians per second. Here, we're thinking of this as angular frequency in radians per sample, for instance, because the thing it multiplies is n. But basically, the units of big omega are angle.
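Before getting into the details of discrete time periodicity, here is a quick numerical check of the time-invariance argument above, with a made-up FIR unit sample response (any well-behaved h would do): a period-4 input produces a period-4 output once the start-up edge has passed.

```python
import numpy as np

P = 4
x = np.tile([0.0, 1.0, 2.0, 3.0], 50)    # the 4-step ramp, repeated
h = np.array([0.3, -0.2, 0.5, 0.1])      # hypothetical unit sample response
y = np.convolve(x, h)

# Shifting the output by P gives the same values (away from the edges):
print(np.allclose(y[10:100], y[10 + P:100 + P]))   # True
```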
Well, it turns out that this may not be strictly periodic in the sense that shifting it by an integer will get you exactly the same waveform. And that's all related to whether this frequency-- whether 2 pi over omega-- the thing that you would like to compute as a period-- is rational or not. It turns out, if 2 pi over big omega is rational, then the period is the numerator of that rational number, written in lowest terms. But otherwise, it's not periodic. But you can think of it as being samples taken from some periodic quantity. So there's an underlying time varying-- sorry, there's an underlying continuous time periodic waveform. And you take samples of it. And depending on the frequency of the underlying sinusoid, the sequence of samples may be exactly periodic, or may be close to it in the sense that they're samples taken from a periodic signal. In either case, actually, we tend to not fuss about that. We'll talk about a cosine like this as a sinusoid of frequency omega radians per sample, and we'll talk about the period as being 2 pi over omega even when it's not strictly periodic. So 2 pi over omega is our notion of period. So a couple of examples here, you can-- I'll leave you to look through those on the slides. But you can easily construct examples where 2 pi over omega is rational, and draw a picture, and convince yourself that it's actually periodic. So in this case, for instance, if big omega-- big omega is whatever multiplies n, so it's going to be 3 pi over 4. 3 pi over 4 is big omega. So now, I need to look at 2 pi over omega. And so that's equal to 8 over 3. The numerator of that is what the period is. So if you were to actually sketch this out, you would find that every 8 samples, it repeats. Whereas, if you look at this example, the thing that multiplies n is 3 over 4. So now, the period 2 pi over omega is 2 pi over 3 over 4. So now we're talking about 8 pi over 3. That's not rational. You're not going to get a periodic sequence. But we still refer to 8 pi over 3 as the period of this discrete time signal, OK? Just because it comes from sampling an underlying continuous time periodic signal. All right, that's a detail. I just don't want you to trip up on that later on. So here's the basic statement-- what I said earlier. If you have an LTI system-- by the way, I like to represent this with an h dot, so that slipped past there. I think I've talked about this before. Let me not spend time on it-- notation, notation. So if the input is a sinusoid of some frequency, and amplitude, and phase, the output is guaranteed to be a sinusoid of the same frequency, potentially different amplitude and different phase. So what I want to do is establish that for you, starting with our time domain characterization, which is LTI convolution. It turns out, actually, sinusoids are not the only things that have this property. In fact, it might be good for me to show you another example, too. So let me give you another example of a waveform that you can put at the input-- or signal that you can put in the input to an LTI system, and it comes out the same shape despite all the convolution. So here's the example. Suppose xn is what I think of as a discrete time exponential. So this is r to the n for some real number r. So maybe-- maybe r is 1/2. So this is-- so this is xn. It's a discrete time exponential. You're used to thinking of that as a geometric series. But when you're talking about signals and systems, you like to think of that as a discrete time exponential. It does have an exponential fall off.
What if that goes into this system? Well, the output is given by this expression always, for an LTI system. So let's just plug in what x is, that summation over all m, hm, r to the n minus m. I'm just substituting in this expression for x of n. The r to the n piece of this doesn't depend on the summation index. So I can actually simplify this further. And here's what I have. So look what's happened. I sent in a discrete time exponential. And out comes the same discrete time exponential, but scaled by some number. This is just a number, right? It's an infinite sum. It's just a number. It works out to be a number. You'll have to look for conditions under which it's guaranteed to exist. And certainly, if the system is BIBO stable and causal, and the exponential isn't decaying, then this is guaranteed to exist. OK, so here's an example of another kind of signal-- related to the sinusoid as we'll see-- that has the property that you put it through the LTI system. Despite all this convolution stuff that's going on, I don't even have to know what that h is. I can tell you right away that what comes out is the same exponential but scaled. So now, we're going to try and establish the property for sinusoids. And the way to do that-- the efficient way to do that is actually to work with exponentials again. Except they're not exponentials of the type that I have there. They're complex exponentials. Now, when you learn complex numbers in high school, maybe you thought you wouldn't have to deal with them again. And then you came to calculus and you had complex numbers, and you thought maybe after that, you don't have to deal with them again. So I'm here to tell you that you're always going to have to deal with complex numbers. So you have to get comfortable with them. So we're talking about, essentially, points in the plane-- a point with a real part and an imaginary part. So here's the complex number c. Here's the real part. Here's the imaginary part. I never thought I'd need to define j, but actually, it turns out that if not everyone in the room is an electrical engineer, then maybe you're used to thinking of i as being the square root of minus 1. Electrical engineers like j, because they reserve i for currents, right? So j is the square root of minus 1 in all electrical engineering. The key identity that you need is Euler's identity. Let me actually write it on the board and leave it there, because we'll be coming back to it multiple times. Well, actually I can have it on this board, because it's really the same. It's really this picture. e to the j theta is some complex number. Its real part is cosine theta, and its imaginary part is sine theta. That's all that Euler's identity is saying, right? Here's a complex number. Its real part is cosine theta, its imaginary part is sine theta. So what's the magnitude of e to the j theta? Every complex number has a magnitude. e to the j theta has magnitude what? 1. Because it's cosine squared plus sine squared, right? And the angle of e to the j theta-- the angle of a complex number is just the angle from the real axis up to this vector. So the angle is? AUDIENCE: [INAUDIBLE] PROFESSOR: I'm hearing more complicated answers than I expected. Theta, right? You're right. It's arctan sine theta over cos theta, which is theta, right? All right, so this is Euler's identity. And this is really critical. And if you can remember this, then you can remember all sorts of other identities that might trip you up from time to time.
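Two quick numerical checks of what was just developed, with a made-up causal FIR system (the taps are only for illustration): first, the real exponential r to the n comes out as the same exponential scaled by the constant sum over m of h of m times r to the minus m; second, Euler's identity behaves exactly as described.

```python
import numpy as np

# 1) Real exponential in -> same exponential out, scaled by a constant.
h = np.array([0.5, 0.3, 0.2])                 # hypothetical causal FIR system
r = 0.5
gain = np.sum(h * r ** -np.arange(3))         # sum over m of h[m] r^{-m}
for n in range(3, 8):
    y_n = np.sum(h * r ** (n - np.arange(3)))   # the convolution sum at time n
    print(np.isclose(y_n, gain * r**n))         # True every time

# 2) Euler's identity: e^{j theta} has real part cos theta, imaginary part
#    sin theta, magnitude 1, and angle theta.
theta = 0.7
z = np.exp(1j * theta)
print(np.isclose(z.real, np.cos(theta)), np.isclose(z.imag, np.sin(theta)))
print(np.isclose(abs(z), 1.0), np.isclose(np.angle(z), theta))
```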
So for instance, let's see, if I have e to the j theta 1 times e to the j theta 2, does that simplify to this? That's OK, right? That's just combining the exponents. Well, expand this out using Euler's identity. Expand this out using Euler's identity. Expand this out using Euler's identity, and you discover, for instance, that cosine theta 1 plus theta 2 equals cosine theta 1, cosine theta 2, minus sine theta 1 sine theta 2. OK, you know all these identities. But if you're ever pressed to derive them, the place to start is Euler's identity. If you're pressed to derive Euler's identity, then maybe go back to Taylor series. But if you want to carry one thing in your head, carry Euler's identity. All the rest follows. All right. So here is-- well, actually, let me make that point a little later. These are easy. e to the j0 is just-- this unit length vector lying along the real axis is just the number 1. e to the j pi, it's the unit vector lying along the negative real axis. So that's the number minus 1, right? So here's how we use complex exponentials to prove the result that I claimed earlier, namely that sinusoids in gives you sinusoids out. A sinusoid of a given frequency in gives you a sinusoid of the same frequency out. What we're going to do is actually combine sines and cosines into one calculation. This may look a little funny, because now, we're suddenly putting a complex input into this LTI system. But if you think about the math that we did for LTI, it didn't really care whether we were feeding in real numbers or complex numbers. So we could have a signal that has a real part and an imaginary part at each time. Convolution would work exactly the same way. We arrived at convolution through linearity and time invariance arguments, and all those work for complex signals. So we could actually be putting in a signal of this type and seeing what comes out. Well, this is very close to the calculation we did earlier with a real exponential. The only difference now is it's a complex exponential. But here's the computation. We start, as always, with the time domain representation. That's convolution. Substitute in for x of n minus m. x of n is this signal here. I should have labeled it as x of n. So just stick x of n minus m here. And then, you discover that there's a part of this that doesn't depend on the summation index. So you pull that out. So you're left with this summation inside and this piece out. So look what's happened. You've put in a complex exponential. And out comes the same complex exponential scaled by something. What is this complex exponential? Well, its real part is a cosine signal, and the imaginary part is a sine signal, all right? So this object is something that we'll encounter and use a lot. It's referred to as the frequency response of the system. You've heard the term used, undoubtedly, in other settings. But here's the definition. It's a function only of big omega. Because once you've summed over m-- little m-- everything else has gone away. So it only depends on big omega. And here's what it is. It's the summation over all values of m, h of m, e to the minus j omega m. That's the frequency response. So if you give me the unit sample response of a system, I can find for you what the frequency response is. So let's actually do an example. Let's see. Let's take one of your-- let's take an averaging filter that you've looked at. To make it easy on you, I'll sketch it as a function of m. It doesn't matter what that's called. So here's 1/3, 1/3, 1/3.
We've come to recognize this as the unit-sample response of a 3 point averaging filter. It's a causal 3 point average. So what's the frequency response of this system? Well, it's h of big omega. And then I just follow this prescription. So the only values of h that are non-0 are for m equals 0, 1, and 2. So it's going to be 1/3 times, 1 plus e to the minus j omega, plus e to the minus j2 omega. And I'm done, right? Now, to actually get a feel for this, what you want to do is write it in different forms. For instance, a very useful way to write this is in terms of the magnitude and the angle. So here's a way to represent any complex number. This is a complex number, right? I can write it as magnitude times e to the j angle. So that will actually turn out to be a much more efficient way to think about frequency responses. This is the magnitude of the frequency response. And this is the phase. So how does that bring us back to sines and cosines? Let me actually go a little bit out of order here, and here's the basic statement. From the result that we have up there-- and I'll show you on the next slide how to derive it-- what you can show is that if you put a cosine into the system, what comes out is the same cosine, except its amplitude is scaled by the magnitude of the frequency response, and the phase angle is increased by the angle of the frequency response. So this is really-- if you know this as a function of frequency-- if you know the magnitude and phase angle of the frequency response as a function of frequency, you can describe the response of the system to any cosine input. What if the cosine was a little more complicated than the one there? Suppose I had-- suppose I had cosine omega 0 n plus, let's say, pi over 4 going in. What comes out? What do you think comes out? Anyone? I showed you a particular case here. How much new work would you have to do if it was actually a slightly shifted cosine? Take a guess. Yeah? AUDIENCE: Just replace n in the output with n plus pi over 4. PROFESSOR: Yeah, you just change this by adding in an extra pi over 4. So what comes out-- that's what you said, right? Is that what you said? OK. Sometimes I'm guessing because I don't hear that well. So here's plus the angle, and then plus the additional pi over 4. And this actually will follow just from time invariance of the system. So you can actually-- so this is actually pretty general. If you're going to remember one thing about frequency response in terms of what the operational significance is, for instance in a laboratory experiment, this is the result to remember. This is why frequency response is important. And the proof, very easy. Once you have the basic result with exponential inputs, the proof is easy. Because a cosine can be written as the sum of these two exponentials. How do I get that? Just from Euler's identity, right? Use Euler's identity for each of these-- for e to the j, e to the minus j, when you add Euler's for this and Euler's for this, the sine terms cancel out. So you get 2 cosine in the numerator and you divide by 2. So if you're stuck for a derivation of a result like this, go back to Euler's. So cosine can be written this way. So when I feed a cosine big omega 0n at the input, what I'm actually feeding in is a linear combination of two exponentials. But I know how to write the response to an exponential. If this exponential-- this sum of exponentials goes in, what comes out is the corresponding sum of responses.
So it will be this exponential times the 1/2 there, scaled by the frequency response. That's what frequency response does to an exponential. And this exponential comes out the same, but scaled by the frequency response-- evaluated at the frequency of that exponential. So maybe you've-- you're having trouble visualizing the result that I'm invoking. But it's the one that we just proved. If you have e to the j omega n going in, let's say, at some particular frequency omega 0, to a system with frequency response h of omega, what comes out is the same exponential that went in, scaled by the frequency response evaluated at the frequency that we're talking about-- omega 0. So when I label the system with the frequency response, this is a general omega. But the value of it that I'm interested in is the value at the frequency of the input. So if the input is e to the j omega 0n, what comes out is that same e to the j omega 0n, but scaled by the frequency response at that frequency. So that's what I'm invoking here. Invoking it twice, well, this is just the real part of this quantity, because it's a complex number and its complex conjugate, and so on. So you put it all together, and you actually very directly have this result. So we're using complex inputs as just a trick for getting the results that we'd really like to get for real inputs. If you didn't want to do that, you could actually just put in the cosine omega 0n, and crank it all the way through, and you would get it. So you could say yn equals summation over all m, h of m. Then we have x of n minus m here, right? So it's going to be cosine omega 0, n minus m going in. And now, use appropriate algebraic identities and you'll get the same result. So we didn't have to use complex exponential inputs to get the result. It's just a convenient way of getting it. Again, if you're going to carry one result in your head in the complex case, this is the one to carry. It is very simple. It says complex exponential in, you get the same exponential out. So let's play a little more with this particular filter since we see that magnitude and angle are so important. We're talking-- we're back to this 3 point averaging, right? Here's the frequency response. And I'd like to get the magnitude out. And you could certainly use Euler's identity on each of these pieces, group all the real parts, all the imaginary parts, and so on. It ends up actually being a bit of a mess to try and write down cleanly. So let me show you a trick that works for this kind of thing. And it'll give you some practice in thinking about these complex exponentials. Do you agree that I've just rewritten the same thing as I had above? OK, does this simplify? Can you write it as something real? We've got a complex quantity and its complex conjugate there, so we should be able to collapse them into something real, right? The way you recognize a complex conjugate is that the j has gone to a minus j. So what is this whole thing? Somebody? AUDIENCE: [INAUDIBLE] PROFESSOR: Where did that come from? Can I have a hand, just--? Oh, yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, so let's say 1 plus 2 cosine omega, right? So that's simplified nicely. OK, so are we in a position to say what the magnitude of h is? I've got an h that is represented as the product of this. Oh, by the way, sorry, I-- this should just be a 3 here, not a 1/3. Everyone shook their head in agreement when I wrote that down, but-- what's the magnitude of the product of two complex numbers?
Is it the product of the individual ones? The magnitude of h is going to be, let's see, 1/3 times the magnitude of e to the minus j omega times the magnitude of 1 plus 2 cosine omega, right? The magnitude of a product of complex numbers is just the product of the individual magnitudes. What's the magnitude of this? 1. So we're actually done. We actually have a very simple expression for the magnitude of the frequency response of this moving average filter. That's all it is. Oh, sorry, I need the absolute value. So let's see, I think I have that sketched out in one of these. I actually have three moving average filters drawn out here. The one that I've just worked out is this case. This is the 3 point moving average filter-- a height of 1/3 for each of these at 0, 1, and 2, and everything else is 0. Here is the frequency response magnitude. The notation that's used on the figure is slightly different. So some people, including me in the next course I teach, will write the frequency response as this instead of h of omega. But that's really unnecessarily fussy. It's important when you're talking about z transforms at the same time that you're talking about Fourier transforms. But for us, it's not important. So you'll see slightly different notation, probably in the notes as well. But just think of that as just h of omega. So here's the filter we were talking about. Here, supposedly, is the frequency response magnitude. What we should be seeing is the magnitude of this quantity. And let me see if you believe it. So what we have is-- so what I have is, at omega equals 0, I've got something that starts at 1. And then, when I get out to minus pi or pi, this has come down to the value minus 1/3. And so this is what I have for-- this is the quantity within the absolute value sign, before I take the absolute value. And when I take the absolute value, this flips over. And that's really what you're looking at. You're looking at frequency response magnitude, which is this. And you've got to figure out the phase accordingly. And that I'll leave you to do in recitation. But I want to ask you one last thing. Why do I not bother to plot the frequency response beyond minus pi to pi? AUDIENCE: [INAUDIBLE] PROFESSOR: So the reason is that the frequency response tells us what the response is to inputs of this type, right? The frequency response says if this goes in, how much it gets scaled by when it comes out. Well, if I increase omega 0 by an integer multiple of 2 pi, I get the same exponential back again. So there's no new information outside of minus pi to pi. Another way to think of it is we're really talking about complex numbers and their angles. Once you've made a full circle from minus pi to pi, there's no new space to cover. So you'll see frequency response is only plotted from minus pi to pi. All the interesting action is there. All right, we'll develop more intuition for this in recitation and in the next couple of lectures.
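Since this is the lecture's running example, here is a short numerical companion (a sketch only; the three 1/3 taps come from the board, and everything else is plain numpy): it checks the factored form of the frequency response, the magnitude values at 0 and at plus or minus pi, the cosine-in, cosine-out property, and the 2 pi periodicity that makes plotting beyond minus pi to pi pointless.

```python
import numpy as np

h = np.ones(3) / 3                     # the causal 3 point averager

def H(w):
    """Frequency response: sum over m of h[m] e^{-j w m}."""
    return sum(h[m] * np.exp(-1j * w * m) for m in range(3))

w = np.linspace(-np.pi, np.pi, 201)
# The factored form from the board: (1/3) e^{-jw} (1 + 2 cos w).
print(np.allclose(H(w), np.exp(-1j * w) * (1 + 2 * np.cos(w)) / 3))  # True
print(abs(H(0)), abs(H(np.pi)))        # 1.0 and 1/3 (the -1/3 flipped up)

# Cosine in -> same-frequency cosine out, scaled by |H|, shifted by angle(H):
w0 = np.pi / 5
n = np.arange(300)
y = np.convolve(np.cos(w0 * n), h)[:300]
y_pred = abs(H(w0)) * np.cos(w0 * n + np.angle(H(w0)))
print(np.allclose(y[5:], y_pred[5:]))  # True once the start-up transient passes

# Nothing new outside [-pi, pi]: H repeats with period 2 pi.
print(np.allclose(H(w), H(w + 2 * np.pi)))   # True
```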
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
10_Linear_timeinvariant_LTI_systems.txt
PROFESSOR: Thank you for coming out here in the rain and the day before a quiz, but this is stuff we need to know. So I'm going to be talking about a powerful class of models for communication channels. We've already seen the kind of setup that we're talking about. And so what we're looking to do is model a channel between this point xn, I think my pointer is-- OK, there we go. Between xn and yn out there, so what we refer to as the baseband channel. So we've got xn coming in, various things being done to it, and then yn coming out. And we refer to this as the baseband channel. So what's happening in here is things like-- let's see-- D to A conversion, Digital to Analog. And then there is the modulation. And then there is the physical channel. I may not have left enough space in this box. But here is the demodulation and whatever filtering, demodulation and filtering that happens in there. So there is distortion-- oh, I'm sorry. I forgot the A to D, didn't I? So let's stick that back in here. We're doing all our demodulation and filtering in discrete time, so we have an A to D converter here, and then demod and filtering. And there are various places here that you can get distortion and noise. So for instance, the physical channel is a source of noise. But the discrete time operations as well, the computational pieces can also introduce noise. You could have numerical noise, because you're rounding off numbers, and so on. So there are various places that noise can originate. And there are various places that distortion of the signal can originate. So in the filtering process, for instance, or the channel process, you can get phenomena that will take what started out as a straight edge here and cause it to now get a little bit spread out and not so clean at the edge. OK, so that's what we refer to as a distortion. So there is all sorts of things in here that can account for that. Now, when we say baseband channel, we're actually trying to distinguish it from the channel that you see after the modulation. So once you've modulated, you typically move things to some other frequency range. And so the actual transmission across the physical channel happens in some other frequency range. And so the word "baseband" here is used to distinguish the channel that we're talking about from that channel. So this is what we're going to be focusing on. And then we'll later come back to talking about the modulation and demodulation pieces. So last time, I introduced a way to represent such models just as systems with an input and an output. One thing I made a point of saying was that when we look at a figure like this-- here is a system. We've got some input sequence that's actually going in and maps to some output sequence. So I use this notation with a dot there to indicate the entire time function. So I've got some entire time function here that goes through the system and gets mapped to some entire time function there. And I'm not telling you the details of how that mapping happens yet, but this is my abstract picture. Now, in many places you'll see people writing-- and again, I said this last time. But I want to remind you, you'll see them labeling xn going into the system and yn coming out.
And when you see that, you've got to think that what you're looking at is just a snapshot at time n. So this picture is what you get in a snapshot at time n, whereas this picture is the picture that refers to actually mapping the input signal, the entire input signal to the output signal. OK, so these are two different ways of representing things. In this system, I'm not taking the value of time n and producing a value at time n. I typically will need to look at lots of values of the input to figure out any particular value of the output. All right, I did mention briefly the notion of causality. And we'll come back to that later. But the rough notion is that-- or a good enough notion is that the system is called causal if the response at any time depends only on present and past inputs and not on future inputs. That's easy enough. And then there were-- we were going to specialize, actually, to the case of linear and time-invariant systems. And so I want to first introduce the notion of time invariance. Time invariance says basically that, if you shift the input by a certain amount, then the output gets just shifted by the same amount. So the same input-output pair works as before. So what you're really trying to get at is a time-invariant system is one where the laws by which you compose the values of the input to get the output don't change with time. So let's see. Let me give you an example here. Suppose I had a system whose input and output were related in this fashion. Would that, do you think, be a time-invariant system or a time-varying system? I seem to have functions of time in here. Does that make it a time-varying system? Or is it perhaps time invariant? Yeah? AUDIENCE: Time invariant, because of the law [INAUDIBLE]. PROFESSOR: OK, so time invariant, because the law by which you're composing things to get the output doesn't depend on time. So the point is that these coefficients are constant. So because these are constant, what you have is actually a time-invariant system. So to get the output at any time, you're taking 1/3 of the output of the previous time plus twice the input of the present time. And that prescription holds along the entire time axis. So the actual value of n doesn't matter. But if I had here some function of n, if, instead of 1/3, I had something like 1/3 to the n, now I've got a time-varying system, because the law by which I combine things actually depends on my position along the time axis. So this would be time invariant. This would be not. So that's what this is trying to get at. Easy enough. The other notion was that of linearity. By the way, if you read the chapter, you'll see some other examples that will help you hone your intuition for what's time invariant and what's not. For linearity, the basic idea was that you can superpose inputs and find the corresponding responses by superposition. So if you've got the results of two experiments, the input in one experiment and the output, the input in a second experiment and the output, and then you take a new experiment in which the input is a linear combination of the previous two ones, the response will be the same linear combination of the previous two responses. So that's the basic idea here. So linearity means that superposition works. And so this is another feature that we'll use. And for this example on top, do you think it's linear or not?
So what you really-- the way to think about it is, suppose I had an experiment A in which my output was y, in which I fed in xA, some time signal, and I got a response, some time signal. So what that means is that this is true. This is what it means to say that this is an input-output pair in experiment A. And now in experiment B, similarly, I have yB n satisfying this equation. So the subscript here just means experiment A, experiment B. So this is an experiment A and experiment B. So now the question you want to ask yourself is, is it true that, if I defined a new input xn to be, let's say-- what notation did I use there? Well, I didn't want an A and a B, did I? OK, if you ignore the notation on my slides, let's say that this is an alpha x A plus beta x B. OK, so here is a new experiment in which I'm going to use an input that's a linear combination of the previous two inputs with some arbitrary weights alpha and beta. And the question then is, is the corresponding combination of the outputs in the previous experiment, so alpha yA n plus beta yB n, does this x and y pair satisfy the same equation? OK, so what we want to check now is, is it true-- well, is it true that the xn here, the yn here will satisfy the equation on top? And you can see very quickly that it will. And the reason is that yn here is expressed as a linear function of the yn minus 1 and xn minus 1. So when you substitute these in, you'll find that xn defined this way and yn defined this way will actually satisfy that equation. So this is what superposition requires you to test. So if it's true for every possible pair of experiments here and every pair of weights alpha and beta that the superposition satisfies the equations governing the system, then what you have is a linear system. What if I had changed this to 1/3 to the power n? So I'd have a time-varying expression of this type. So if I have a time-varying system, do you think this system would still be linear? So if you work through it, you'll see, for the same reason, that superposition still works. So if I had 1/3 to the n there instead of 1/3, I get a time-varying system, but I could still superimpose solutions. It would be a linear time-varying system. Now, we don't want to spend too much time teasing all these apart, because what we'll be focused on is linear and time-invariant systems. And you'll actually come quickly to recognize them. OK, I defined last time also a pair of special signals which you've seen before, the unit sample signal which has the value 1 just at one point and the unit step signal. So let me just sketch them out for you here. So the unit sample, this is a signal delta n which is an entire signal. It's not just the number 1 at time 0. It's the entire signal. That's the unit sample function. There is another notation that's also sometimes used, which is delta sub 0 and dot. So this notation is a little bit more evocative of a function, whereas here, you are often tempted to think of it as a number. This says, what I'm looking at is a function. It's a unit sample function. And the 1 is at the value 0. So if you had delta of n minus 3, that would be this function shifted from 0 to 1, 2, 3. So the 1, value 1 would sit at time 3. Another notation for that would have been this. Sometimes that notation is useful also in making sense of expressions that you're looking at. OK, so this was the unit sample function. And then the unit step function steps up from 0 to 1 at time 0. That's the unit step function.
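Before moving on, here is a numerical version of the superposition test just described (a sketch, assuming the difference equation y of n equals 1/3 y of n minus 1 plus 2 x of n described a moment ago, with the system started from rest): feed in a weighted combination of two random inputs and check that the output is the same weighted combination of the individual responses.

```python
import numpy as np

def run(x):
    """Simulate y[n] = (1/3) y[n-1] + 2 x[n], starting from rest (y[-1] = 0)."""
    y = np.zeros(len(x))
    prev = 0.0
    for n, xn in enumerate(x):
        prev = prev / 3 + 2 * xn
        y[n] = prev
    return y

rng = np.random.default_rng(0)
xa, xb = rng.standard_normal(50), rng.standard_normal(50)
alpha, beta = 2.0, -0.7
lhs = run(alpha * xa + beta * xb)
rhs = alpha * run(xa) + beta * run(xb)
print(np.allclose(lhs, rhs))   # True: superposition holds
```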
And we also talked about the response to these two inputs. So you see them up there. And now my question is, if a unit sample signal at the input produces the unit sample response hn at the output and un produces the step response, and if what you have is an LTI system in here-- so it's the same LTI system that we're talking about-- can you actually relate the two? So the question is, can you relate the unit sample response and the step response? Do I need to give you both if I have an LTI system? Or does it suffice to give you one? So here is one way to think of that. This, by the way, is the same LTI system. Maybe I should indicate that more explicitly by, let's say, it's a specific system, system zero. And with the same system, I'm trying to deduce the results of another experiment. So if we're thinking superposition, can you tell me how to write the unit sample function as a linear combination of unit step functions, maybe delayed unit step functions, scaled unit step functions? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: OK, would it be-- you said un minus un plus 1? Or is it un minus 1? n minus 1? So what we're saying is, take this unit step and then subtract from it a unit step delayed by 1. OK, so here is u of n minus 1. If we took the unit step and subtracted from it a delayed unit step, delayed by 1, the result will be that just the value 1 at time 0 will survive. Everything else will cancel out. Is that what you had in mind? OK. So if delta of n can be written as that linear combination of unit steps, can you tell me how to write hn in terms of unit step responses? We're talking about an LTI system. I took that out, but we're still talking about an LTI system here. Somebody who hasn't spoken maybe? Yeah? AUDIENCE: It should just be s of n minus s of n minus 1. PROFESSOR: Yeah, OK. So superposition says that, if you've got an input that's a linear combination of inputs for which you know the results of the experiment, then the corresponding output is the same linear combination of the outputs for that experiment. So this is going to be sn minus s n minus 1. So you can actually deduce the unit sample response, given the unit step response for an LTI system. So let's see. We've used linearity. Have we used time invariance? We used linearity because we said, here is an experiment in which the input is a linear combination of inputs that we know the responses to. Where have we invoked time invariance? Anyone? The superposition idea was part of the definition of linearity. Because a system is linear, if the input is a superposition of two inputs for which you know the response, then the output is the corresponding superposition of the responses. That seems like I've only used superposition there. Have I actually used time invariance as well? Yeah? Sorry? AUDIENCE: [INAUDIBLE] PROFESSOR: I've used it in concluding that, if I put in u of n minus 1, the response is s of n minus 1. So I've used time invariance as well as linearity here to come up with this statement. OK, good. So this is what I have on the slide. And you've figured it all out already. We've arrived at this equation. Now, if I want to turn it around and write sn in terms of the unit sample response, I can do that as well, except this is analogous to integrating a differential equation. What we have is a difference equation here. And when you come to integrate, well, in discrete time, what you do is summation instead of integration. You need to assume an initial condition of some kind.
And so it turns out if you assume that, way back in the past, the value of the step response was 0, then you can actually go from this description to a description the other way, relating the step response to the unit sample response. OK, so if I have a causal system, for instance, so the causal system, it's got no response until the input hits it. So when I put a unit step in, I'm not going to get a response until time 0. And so I know at minus infinity, the step response was 0. And I can move forward from there. OK, so you can actually relate the step response to the unit sample response the other way as well here, where the summation is from minus infinity to the n that you're interested in. We'll be dealing right through with causal systems. If there are any deviations from that, we'll point them out. But basically, we'll be dealing with causal systems. OK, so let's-- this is an identity we'll be wanting to play with a bit. So let me put it up here. So the step response, let's say, for a causal system is going to be summation from k equals minus infinity to n h of k. So I take all the values of the unit sample response up to the present time and sum them together to get the value of the step response at the present time. OK, so let's look at an example here. Here is the unit sample response of a particular LTI system. Is this a causal system? So this is the response to a unit sample. So the input was 0 everywhere except for a value of 1 here. And you see that the response actually happens subsequent to that input. So if the response starts at time 0 or later for an input that started at time 0 or later, then what you are looking at is a causal system here, certainly in the case of a unit sample response. OK, so what's the step response going to look like, then? Anyone want to say in words? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: OK, so the step response, if we're evaluating it at times over here, we're summing all the values of hk from minus infinity up to the present time. So the step response is 0 here, is 0 here, is 0 here. And then at time 3, the step response jumps to 1, and from then on stays at 1. So the step response is just that delayed step. And it kind of makes sense, because the kind of system we're talking about must be a delay by 3 system here, because we put in a unit sample input, a unit sample function. And what came out, if you look at it, was actually delta of n minus 3. It was the unit sample function delayed by 3 steps. Is the height-- the height is unchanged, right? The height is still 1. So this must be a delay by 3 system we're looking at. And sure, if we put in a unit step, we're getting-- or sorry, yeah, if we put in the unit step, we're going to get a response that's just the step delayed by 3. I'm going-- sorry, yeah? AUDIENCE: Yeah, maybe I'm just a little confused here. But why is it from negative infinity to n and not like from n to positive infinity? PROFESSOR: This is because of my assumption that s of minus infinity was 0. So I need to have a boundary condition from which I start inverting. So just to go back-- let me just go back a second here. Oh, where am I going? OK, so we derived this first expression. If we want to turn it around, well, sn is hn plus sn minus 1. And then I can solve for sn minus 1. I can keep stepping backwards. But at some point, I need an actual value so that I can close off that expression.
And if you're talking about a causal system, then what you're guaranteed is that the step-- if you're talking about a causal linear system, because the all-zero input produces the all-zero output, and it's causal, you can actually deduce that the step response at time minus infinity must be 0. The input hasn't yet arrived. Therefore, the output must be 0. AUDIENCE: So h of 5, does that mean [INAUDIBLE]? PROFESSOR: h of 5 is just a number. It's not a function, right? If I write something like h of 5, it's just a number. So it means the value of the unit sample response at time 5. OK, this takes a little getting used to, but let's do another example here. So here is another unit sample response. This is more complicated, though. I put in a unit sample. And what comes out is a response that-- well, it still starts at time 0. So I'm talking about a causal system. Everything to the left is 0. And this takes a value 0.2 for some number of steps and then settles to 0. So the question then is, what is the step response? So if you imagine that what you're doing to find the step response at any time is summing this from minus infinity up to that time, you will see that the step response ramps up linearly like that. And then it settles out. OK, so you can get one or the other. And there are other examples on the slides. I won't go through all of them. I'm going a little slow here, because we're missing recitation tomorrow. Recitations tomorrow are office hours, so I wanted to actually give you a few examples. Here is a case where the unit sample response increases linearly and then stops. And so the unit step response actually starts to grow quadratically and then stops. This is the discrete time version of integration that we're looking at. Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Oh, sorry, this thing? AUDIENCE: Yeah. PROFESSOR: Ah, OK, ignore that notation. First of all, it's bad notation. But these are figures that I got from somewhere. If I was doing it from scratch, I wouldn't have put it in. But I'll explain it. That's the notation for convolution. I don't actually like that notation. OK, examples of this type-- now, here is one important thing for you to get a feel for. Notice in all these examples, the unit sample response settles down to zero after some time. So you hit the system with a unit sample function at the input. So you hit it with a value 1 at time 0 and nothing else. And it responds. And it responds for a while and settles. Now, that's not true for all systems, that they settle in finite time. A typical system might ring indefinitely, might respond indefinitely to a kick. Here, all these examples are ones where the system has a transient and then settles down. And so what you expect to see in the step response is there is a transient. And then it settles down. The difference is, in the step response, it settles to another value. It doesn't come back down to 0. So when this comes back down to zero, what this has settled to is sort of the integral of this. It's the area under this, but we're talking about discrete time functions, not continuous time functions. So the value that it's settled to here is the area under this. So the duration of a unit sample response gives you some feel for how long a transient lasts. So if you've got a channel and you hit it with an input, you know that the transient will last about as long as the unit sample response lasts. So the transient in the step response shows that clearly. You can get more elaborate sorts of unit sample responses.
Here is one that changes sign. And correspondingly, what you find with the step response is that it's not a monotonic increase to the final steady state. There is actually some oscillation before it settles down. But it's the same idea. You're computing the area under this, if you like, but the area goes positive, and then slightly negative, and so on-- well, positive and then less positive. And so that's what you're seeing up there. Now, why do we talk about step responses so much? Well, it turns out that for a lot of what we do with signaling on communication channels, we're signaling with signals of this type, on-off-type signals, or plus-minus signals, the sort of square-wave-type signals or rectangular-wave signals. And these can be thought of as combinations of unit step functions. You may have seen this in recitation last time as well. So you can take an input of this type and write it as a linear combination of unit step functions. A unit step function that has its step at 0, minus 1 that has its step at 4, plus 1 that has its step at 12, minus 1 that has its step at 24. So if you combine those, if you add up all of these, you're going to get that input. So then it's back to this game again. If the input is a linear combination of unit steps scaled and delayed, then the response is going to be the same combination of unit step responses. So that's what the response will look like. So here is the step un gives rise to sn. Therefore, minus un minus 4 will give rise to minus s of n minus 4, and so on. So knowing the step response, you can actually say what the response of the channel is going to be. All right, we've seen this visually too. I did an example last time where what went in was that square wave and what came out after we had done the demodulation and the filtering was a response that sort of had the features of what went in, but it was a little distorted. So you can see that what we're looking at here, for instance, is the character of the step response, because what went in at this point-- right now the input and the output are at rest. You've forgotten about what happened before. And now the input jumps up. Well, the output doesn't jump up all the way immediately. It's got a little transient before it settles. So what you're looking at is really the step response of the channel, where the channel includes all these pieces. It's everything including the filtering. In this particular example, if you go back and look at those slides, this was all entirely due to the local averaging that we were doing in the filtering here. But it does give you some kind of a distortion. OK, so the step response is important to figuring out the shape of the output of a channel. Here is another example that has a more rounded kind of step response, but it's still the step response we're looking at. So here is the input step. And here is the response to the step. Again, there is this notation. And I've said, ignore this for now. We'll explain it shortly. OK, and once you've got the step response at the output, you're ready to start thinking about how you'll detect whether it was a 0 or a 1 that went in. So you might set a threshold, pick times at which you're going to sample. And then you come up with your call of what the input is, so 1, 0, 0, and so on. So this seems all benign enough. But now what if you decide you want to get that information across the channel faster? So you want to signal faster?
So what you're going to want to do is put that same information, the transition from 1 to 0 to 1, 1, 1, 0, 1, and so on, you want to squeeze that into a shorter length of time. So suppose this is what you send over that same channel. Well, now you, again, are going to superpose the step responses. But what's happening now is you've gotten so ambitious with how fast you want to get the bits across that you're not giving the step response time to settle. So over here, yes, there is time. The step response went up and settled because you had three 1's in a row over there. But now you're going down to 0 for one time instant and then jumping right back again. Well, here is the flipped over step response. And it doesn't have time to make it all the way down. It's jumped up again. OK, so if you get very ambitious with your signaling to try and get more of the bits across, you're going to start seeing the limitations imposed by the channel. The channel can only respond so fast. And you can't drive it faster. So it's important to have a feel for that as well. So when the channel starts to respond like this, you become much more susceptible to noise. So for instance, if there was a noise spike at this point, you could well end up with a received sample that was above the threshold. And then you'd wrongly decode the 0 as a 1. So taking account of the channel characteristics is important when you're setting a signaling rate. You might want to get information across quickly, but you have to take account of the fact that the channel needs some time. OK, so much for steps. We'll come back to that later. We can do the same kind of thing with unit samples. So here is-- and in fact, the rest of the lecture, we're going to be talking about making up a signal as a weighted combination of unit sample functions. So take an arbitrary signal like this. Think of it as-- let's see, this starts with the value of something, 0.75, I guess, at time minus 2, and then a value of minus 0.5 at minus 1, and so on. So here is your input signal xn. And I want to think of it as made up of a bunch of unit sample functions. So what are the unit sample functions? Well, here is one that's centered at minus 2 but scaled by the value that the input signal has at time minus 2. Here is another one centered at minus 1, but scaled by the value that the input signal has at time minus 1, and so on. So what I'm basically doing is decomposing the input into a weighted combination of unit samples. And you can always do that. And it looks a little magical when you put it into notation like this, but that's basically all that it's saying. So to make sense of this, think, for instance, of putting in an actual number here. So if I wanted x at time 3, well, I'm going to set n equals 3 on the right-hand side and evaluate the sum. Well, the only value that survives is the value for which k equals 3. So I'll pull out x3. So this kind of seems tautologous. But it's a way to represent a general input as a weighted combination of delayed unit samples. OK, so if that was what went in, you're in a position now to tell me what comes out. So I'm talking about an LTI system. I'm talking about an LTI system. And the input xn is a weighted combination with these weights of a bunch of unit sample functions. Well, let's actually-- well, let me actually write this the other way as well. Another way to say this is, here is a time function going in.
It's a weighted combination over all possible values of k of xk times-- and this is my other notation, remember, for unit sample functions. So I'm saying this is a unit-- sorry, this should be delta sub n. What should it be? Yeah, OK, so this is another way to write the same thing. We've chosen to write it this way. And actually, I find that simpler. But if you want to be reminded that what we're talking about here is an entire time function, then this is a notation that you might go to. Yeah? AUDIENCE: Why is the first sum [INAUDIBLE] PROFESSOR: OK, good question. Because I'm right now allowing my input to have values that extend from minus infinity to plus infinity. So I'm taking an arbitrary function. If we're talking about an experiment in which the input starts at time 0, then we can actually simplify these. I'll show you that. OK, let me actually erase this, because I don't want to confuse you with that. OK, so if this is what goes in-- it's a weighted combination of unit sample functions delayed-- what is it that must come out? OK, so what are we working with? We're working with the fact that, if delta of n goes into our system, what comes out is hn. So if it's a weighted combination of deltas that goes in, what's the response, given that this is an LTI system? Someone who hasn't spoken today, maybe? Do you want to try? yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: The Same weighted combination of those responses-- so it's going to be summation over all k, the same weight. So it's going to be the xk's. But now here is the responses. So what have we been able to do? We've been able to write down what the output looks like for an arbitrary input in terms of the unit sample response. If you give me the unit sample response for an LTI system, I can write down the general response, the response to a general input. And this is what we refer to as a convolution or a convolution sum. That's a convolution. It may look mysterious. So let's actually do it. Let's do it step by step again. Here is our LTI system. If I put in a unit sample function-- OK, so this is the unit sample function going in-- I get some response. And let's say this is 0. This is 1, 2, and so on. The response, we refer to as the unit sample response hn. So what is this value? This is the value h0. This is the value h1. This is the value h2, and so on. OK, what if what goes in is actually the value x0 at this time and 0 everywhere else? What's the response in that case? So this is just a scaled version of the unit sample function. Instead of 1 going in, I'm having x0 go in. So the response is going to be-- what do we have? The same response? Twice the response? What comes out? AUDIENCE: x0 times that. PROFESSOR: I didn't hear. Where did that come from? Yeah? AUDIENCE: x0 times that. PROFESSOR: X0 times that-- OK, so what we'll get is x0 times h0 coming out at the first time, and then x0 times h1 coming out of the second time, and then x0 h2 at the next time. And if I keep going, I get x0, let's say, hn at this time, and so on. What happens if now it's not that, but it's some value x1 going in at time 1 and 0 everywhere else? So this is starting-- this is centered at a time 1, not at time 0. And it's scaled by x1. So what is it I'm going to see at the output? There was a hand somewhere there previously. Maybe you can answer now. Yeah? AUDIENCE: The same graph translated 1 over and scaled by x1. PROFESSOR: Right, exactly. So what's going to happen is, nothing will happen here. I'll get x1 h0, x1 h1, and so on, x1 h of n minus 1. 
And it keeps going. And you keep going here as well. You keep stringing in these. Each one of these will fire off a scale of the unit sample response, but delayed appropriately. And so at the next time, what you're going to get here is x2 h of n minus 2. And it keeps going. And what if you're interested in the value at time n? OK, so you look along here. And you've come to the value of time n. So it's going to be the sum of all of these, if your input is the sum of all of these, right? If your input is the sum of all of these x's, your response is going to be the sum of all of these. So what's the sum of all of these? Well, xk h of n minus k. That's all there is. There is nothing-- there is no magic to this. It's just invoking linearity-- that's the scaling part of it-- and time invariance, which is the delaying part of it. It's as simple as that. All right, so we'll be seeing this notation a lot. You probably recognize this kind of notation from the convolutional coder as well. And we don't want to keep writing these sums. So here is the notation that we use. We say that x is convolved with h. And we are interested in the value at time n. So this operation of-- this summation here is referred to as a convolution, as I said. I'm telling you what value of time I'm interested and the response at. That's the n. So that's what this notation is. The k here is just a dummy index. We're summing over the k. It doesn't matter what I called it. I can call it j. I can call it l. It doesn't matter. The important thing is this n here tells me at what time I'm looking for the response. And that's why that's the argument that I stick in here. All right, so all that's on the slides, but we've actually derived it ourselves here. Now, again, some gripes about notation-- you'll find, if you look in most engineering textbooks, that this would be written xn star hn. And I can't tell you how much I detest that notation. You'll never find it in a math book. The problem here is that this n is being asked to do too many things. The n is supposed to suggest-- the xn here is supposed to suggest we're interested in the whole time function. You would have been better off calling it x dot, but, OK, we're used to thinking of xn as also denoting an entire time function. This h is supposed-- the h of n is supposed to denote an entire time function. But the n is also supposed to tell you at what time you're interested in the response. So that index is just doing too much work. And it ends up being confused and confusing notation. So when you're in your downstream classes from here, if you find an instructor using that, make sure that you give him or her grief and say you really can't make sense of that, because this is much cleaner notation. This is what conveys what's actually going on. All right, I'm going to skip over a few things. I just want to suggest some properties here. And then we'll come back to more of this next time. OK, so it turns out that convolution has nice properties. For instance, the order doesn't matter. You can write x star h here, but it's the same as h star x. And that just comes from making a change of variables in here. If I call this m, then k is equal to n minus m. And I get something that looks different. But it's really the same thing. So this is the same as h star x. So convolution, you can interchange orders. There is some conditions on this, but we can talk about them later. You can associate them. You can group them arbitrarily. 
And you can distribute convolution over addition of functions. So all of this actually makes it-- this is very powerful, because it allows you to deal with combinations of systems. And I'll just give you one example. And then we'll quit. So here is an example of the kind of thing you can do. Suppose you have an input going into one system, LTI, with a unit sample response h1, and then the output of that going into a second system, LTI with unit sample response h2, and then producing an overall output yn. Well, so how do you get y? It's h2 convolved with w. I've dropped the argument n because I want to do this just for general values. But w itself is h1 convolved with x. Now, I can group these any way I want. So I can, because convolution is associative, I can put those parentheses where I want. So this is equal to the expression at the end. But that's the same result I'd get by putting this input into a single LTI system whose unit sample response was the convolution of the two individual ones. So I can start to collapse two systems into one equivalent LTI system. And that kind of thing ends up being powerful. But I can also interchange orders. So from here, you can go to this, which then tells you that, for an LTI system, if you've got systems in cascade, if you've got two LTI systems in cascade, actually, the effect on the output is the same, whatever order the systems are connected in. You might ask yourself whether the same is true if this was linear but time varying. And you should hopefully find out that, in general, for linear but time-varying systems, you can't do this. So really, linearity and time invariance is what it takes to be able to do this. OK, let's leave it at this for now. And we'll pick up again-- well, you pick up some in problem set four and also next week.
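To see the convolution sum and the cascade property in action, here is a small Python sketch. The unit sample responses h1 and h2 are made up; numpy's convolve computes exactly the sum over k of x[k] h[n-k] defined above.

import numpy as np

x = np.array([0.75, -0.5, 1.0, 0.25])   # an arbitrary input
h1 = np.array([1.0, 0.5, 0.25])         # made-up unit sample responses
h2 = np.array([0.5, 0.5])

w = np.convolve(x, h1)                  # output of the first LTI system
y_cascade = np.convolve(w, h2)          # then through the second

# Collapse the cascade into one equivalent system with response h1 * h2.
h_equiv = np.convolve(h1, h2)
y_single = np.convolve(x, h_equiv)
assert np.allclose(y_cascade, y_single)

# Swapping the order of the two systems gives the same output too.
assert np.allclose(np.convolve(np.convolve(x, h2), h1), y_cascade)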
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
7_Viterbi_decoding.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, we're going to continue talking about convolutional codes. So I want to give you a quick reminder of how coding works and then talk to you about decoding. Can you hear me OK now? All right? OK. So we talked in terms of a state diagram, but let me remind you of what the shift register picture was. So we had a two-stage shift register. For this particular example, we had xn, a stream of data being fed in here. So since this is a shift register, what sits in here at time n is the previous input. What sits here is the input from 2 times ago. And you can then tap these off and get your parity checks. So take these in particular combinations and make your parity checks. So you can have one box spitting out a p0 of n. And then you can have another box that takes these same outputs from the shift register and puts out-- let me just show them. Actually, why don't I just put it here? You generate a bunch of parity check bits. And I've shown an example on top where-- this is the same one I used last time. p0 at time n-- I should have had an n there-- is xn plus xn minus 1 plus xn minus 2. And p1 is xn plus xn minus 2. And we skip the xn minus 1. But you can choose different coefficients there. Different coefficients will give you codes that have different properties. So the choices in the code are how many shift registers do you have, so how much memory. The constraint length here is equal to what? Constraint length in this particular example on the slide? AUDIENCE: 3. PROFESSOR: k equals 3? Oh, you can see it. It's the number of message bits that are involved in generating a parity bit at the maximum. It's actually not the number. It's the window over which you're taking message bits to combine to make the parity bits. All right, so for instance, if you just had p1, you would still say that your constraint length is 3, because you're involving a window of three message bits. It's the span over which you're extending. All right, now, in terms of interpreting this, we've got the possible states of the shift register combination. So 0, 0; 0, 1; 1, 0; 1, 1, these are the four possible states. So in general, what you have is for a constraint length k, you've got 2 to the k minus 1 states, because one of these is the input. And then the other is stored in memory. So you've got k minus 1 stored in memory. So that's the number of states that you have. And that's how these circles are labeled here. And then on each of the arcs, what you have is the message bit that's coming in at that time and the parity bits that are emitted. So for instance, from 0, 0, if you've got 0 here and 0 here, the only places you can go to at the next step are 0, 0, and 0, 1, because you can either-- sorry, 0, 0 and 1, 0, because you can either feed in a 0 or a 1 from here. If you feed in a 0, then at the next state, you're still in 0, 0. If you feed in a 1, then at the next state, you're in 1, 0. So those are the only possibilities from 0, 0. And if you had a 1 in, you would go from 0, 0 to 1, 0. And what would be your parity check bits? So if you had 1 at the input and you have the parity check expressions that I have up here, you see that what you would be emitting would be a 1 and a 1. Is that right?
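Here is a minimal sketch of that encoder in Python, with the two parity expressions from the slide; the function name and the flush bits in the example are just for illustration.

def conv_encode(msg_bits):
    # Rate-1/2, constraint length 3 encoder:
    # p0[n] = x[n] + x[n-1] + x[n-2] (mod 2), p1[n] = x[n] + x[n-2] (mod 2).
    x1 = x2 = 0                 # shift register contents; start in state 0, 0
    out = []
    for x0 in msg_bits:
        out.append((x0 + x1 + x2) % 2)   # p0
        out.append((x0 + x2) % 2)        # p1
        x2, x1 = x1, x0                  # shift
    return out

# Feeding a 1 from the 0, 0 state emits parity bits 1 and 1, as claimed above.
assert conv_encode([1])[:2] == [1, 1]
# The message 0, 1, 1, 1 padded with two 0's gives 00 11 01 10 01 11.
assert conv_encode([0, 1, 1, 1, 0, 0]) == [0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1]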
If you had a 1 in the input, 0, 0, and you use these two parity bits, you generate a 1 and a 1. So that's how these arcs are labeled. Now we said, to actually understand the convolutional code well, what you really want to do is translate the state transition diagram to a trellis diagram. This is just showing two stages of the trellis. In general, the trellis would be the state diagram unfolded over the whole time interval of interest. So it's the same thing. It's just that we've-- let's see. We've chosen to write the states in binary counting order, so 0, 0; 0, 1; 1, 0; 1, 1. It was arranged slightly differently here. Apart from that, it's the same thing. So we're drawing the state diagram here. We're drawing the same state diagram here except this is representing the state possibilities at time-- at one particular time. And this is the state possibilities of the next particular time. So the state transition arrows are always going from one stage to the next, all right? So the arrow that we saw here, which takes us from 0, 0 to 1, 0 is going to take us from this box to this box. And what it emits on the way is the 1 and the-- sorry, the 1, 1. What it emits is the 1, 1. So each arc is labeled in the same way. This is just a rearrangement. Now, the nice thing I mentioned last time, the nice thing about this is, when you have this in binary counting order, then the upper arc of the two that emanate from each box corresponds to an input of 0. The lower arc corresponds to an input of 1. So you can actually dispense with the index or with the bit that's in front of the stroke there. So you can just make do with labeling by the parity bits. So you'll get used to that. 0 is the upward movement. And 1 is a downward movement. So if you're thinking at the transmitter-- by the way, I hope I've made these changes well. I had an earlier version of this slide, but I changed it to work for a new set of parity bits, which meant I had to go in and change every one of these transitions. So you might see the odd bug here and there. But hopefully this is correct and consistent with the state transition diagram I showed you. So what we're saying is now suppose you're starting off in the 0, 0 state. And you get the sequence of message bits. So 0, 1, 1, 1 is your message. And then you bring it back to the 0 state again by appending two 0's. What's the path that you traverse through the trellis? Well, you're starting off in the 0 state. Every time you have a 0 in the message, you take the upper branch. Whenever you have a 1, you take the lower branch of the two that are available to you. So you can see very quickly how to steer through this trellis for any particular message sequence. So this is the upper one of the two here, and then the lower one of the two here because it's a 1, then the lower one of the two here because it's a 1, and the lower one of the two here because it's a 1, and then the upper one because it's a 0, and the upper one because it's a 0. So that's your path through the trellis. It's told to you by the message bits. You should also remember, by the way, this diagram hides a little bit, because I have just a box here for something that's actually a pair of registers. So when I just show the box, let's say, at this point, this actually has the 1 and the 0 sitting in it. So if you were just looking at this box and what was in it, if I just gave you the contents of that box, could you tell me what the input was of the previous time? 
If I just told you that the contents of that box are 1 and 0, can you tell me what the input was at the previous time? Yes? It's just what got fed in, right? It's just the one that got fed in. So this diagram is fine, but we've suppressed a little bit there. There are occasions, especially on homework and quiz problems, where you're given the contents of the shift registers. And you're asked to figure out what happened at the last time step, what message bit came in. So really, don't forget that there is a link between the two. OK, so the steering is straightforward. Now, what's the code word that's emitted? Well, it's the parity bits that you encounter on the arcs. So on this upper arc here, you've got a 0, 0 that's emitted. So that's what you're going to emit. That's the part of the code word generated by that message bit. And then on the lower arc, you emit 1, 1. On the lower arc, you emit 0, 1, lower arc, you emit 1, 0, and 0, 1, and then 1, 1. So that's the code word. So the set of all possible code words that you can get with this convolutional code corresponds to the set of all paths you can take through the trellis. If you're starting at 0, 0, 0, then it's the set of all paths starting at 0, 0, 0. So let's see. Roughly speaking, can you tell me, if I've got l stages-- when I say stages, I mean time, if you want to think of these as happening on a clock. If I've got l stages here and I'm starting off with the 0 state there, for a large l, roughly how many possible paths do I have? Any thoughts? 2 to the l? Yeah. Because you see here, coming out of a box here on each stage, you've got two choices. And you've got those two choices for l stages. So you've got approximately 2 to the l possible paths. Now, I say approximately, because well, in this case, it's fine. But now if you're allowed to start from other starting states, then you will have to take account of that. But it's of that order. It's exponential. The number of possible paths that you can have, the number of code words is exponential in the length of the trellis, right? OK, so that's a large number of code words. Our focus, though, is going to be on decoding today. What I did so far was just review what we saw for coding. We're interested in decoding now. So at the receiver, what you have is a knowledge of what the code is. So you have the trellis. You know what the labels are. You know that things are going to start in the zero state. And then you get your received signal. Now, what I've shown here is that, actually, your received signal is not necessarily going to be 0's and 1's. It's probably going to be samples of some voltage, where you've got some waveform. You process it. And then you take a sample. And what you've got is a sample of some voltage. So you're typically looking at real numbers that you then have to decide whether to call a 0 or a 1. OK, so yeah, maybe this is 0, 0, maybe 0, 1; 0, 1, probably 1, 0; 0, 1; 1, 0, yeah? So if you were forced to choose, if you had a threshold of 0.5, for instance, and this was the range, if nominally these were supposed to be at 0 and 1, then you might actually be willing to call this one way or another. So if I was to draw this on the real axis thinking of a voltage, so we've got 0 volts that we're expecting, 1 volt or something proportional to 1 volt that we're expecting. These are the two possible values depending on whether a 0 is sent or a 1 is sent. This is because we've coded the bits at the transmitter for physical transmission on a continuous time channel.
And then at the receiving end, we're doing some processing and extracting samples, right? But because of noise, what might happen is that you get samples anywhere around the 0 or anywhere around the 1, depending on the particular transmission instance. It'll vary from one instant to the next. And if the noise is really bad, then of course, what started off as a 0 here with the noise added to it, by the time you sample it, might fall in a region where you call it a 1. So there is an intermediate step. And very often, you have access to that. And then you've got to figure out how to do your decoding. All right, is this the same slide? Or does it say anything different? OK. So what are we going to do now? We're going to, of all the paths available to us, we're going to try and find the path along which the emitted parity bits come closest, in some sense, to the sequence of samples here. That's, if you were doing minimum distance, in some sense, that's what you'd want to do. If you believed that errors further away from 0 are less likely than errors close to 0, then you would want to have a reconstructed set of parity bits along whatever path you choose to come close to the values there. Now, it turns out that it's actually simpler initially to think of first making a decision to call these 0's or 1's and then finding a path through this that comes closest to the 0, 1 sequence that approximates the voltage samples that you've actually got. So we make a distinction between what's called hard decision decoding and soft decision decoding. So in soft decision decoding, which we'll talk about later, you preserve those voltage samples. And you don't mess with them. But in hard decision decoding, at each stage, you just make a decision, on each sample, decide to call it a 0 or a 1 and proceed from there. OK, so which do you think is likely to get you better performance if you're doing the optimal thing after that? AUDIENCE: Soft. PROFESSOR: The soft? Yeah. Because when you make the decision at one stage, you're throwing away some information. You're not taking account of how these samples might relate to each other. You're treating that sample in isolation. If you know that what you're going to end up with is a code word that corresponds to a path through here, then there is additional information that actually couples the different numbers you're getting across there. And so you have a hope of doing better with soft decision decoding. So postpone the decision until later. But you pay a cost. Or you could pay a cost for that, because you've got to deal, for instance, with the real numbers and all of that. So hard decision decoding can simplify your processing. So what you'll say is, I'll just make a choice here. So I'll call this 0, 0; 0, 1; 0, 1, and so on, and then look for a path through the trellis along which the emitted parity bits come closest to what I've approximated that sequence of samples by. So we're talking about Hamming distance again. And minimum Hamming distance is going to give us the most likely path, given that you've already committed to interpreting the received samples as 0's or 1's. So what you might imagine is, OK, you've got this received sequence. You've got a tabulation of all the possible paths through the trellis and the parity bits that are emitted along those paths. Each path corresponds to a different message. What you actually have here-- let's see.
We've got 12 bits here, because in addition to the message, I'm appending a 0, 0 to each one, which forces the trellis back down to the 0 state. So what I'm actually doing here is, I actually have a message that's this followed by two 0's. And so if you're trying to connect these two columns with the trellis that I had on the previous page, that's how you should think about it. But with any particular message, you navigate up and down on the trellis. This particular one, you navigate up, up, down, up. And that's the sequence that's generated. That's the code word that you would expect if this was the message. What you'll do is you'll search over all possibilities. At least that's one way to do this, in principle, search over all possibilities for the code word here that's closest to what you received. The trouble is, that's a lot of code words. That's a lot of code words. So this can quickly get out of hand. If you've got long sequences, which is exactly where you want to do convolutional coding, you've got a very long table. So you really want to find an efficient way to do this matching. I just wrote down the Hamming distance that happens to hold for what is the message that was actually sent, which also will be the message that you will recover at the receiver if you do the optimal thing and you don't get fooled by the errors. So what I've got here as a 2 is the Hamming distance between the code word here and the received message. So if that 2 was the smallest one in that whole stack-- I haven't filled them all out-- then that's the one that you would call. OK, so a much cleverer way of doing this was invented by the Viterbi, who did his bachelor's degree here, then moved to the West Coast. He was very involved in the JPL program. But he was also a founder of, well, a succession of companies, but most recently, Qualcomm. And he's a big friend of the department. He's on our visiting committee. Or he has served time with the visiting committee. So this is an algorithm that he developed in the early days. And we're going to talk about it. I think I'll put it all up on the slide. And then let's talk. All right, there is a lot there. I don't want you to struggle through that. Let's talk about it here. And when we're done, I think what's up there will make sense. That's for you to refer to from the slides later. And it's my little checklist to know that I've spoken about everything, but don't try and navigate that just yet. So here is what Viterbi says. He says, we're starting off from some initial state. This is the zero state. At an intermediate state, intermediate time-- sorry, I shouldn't say state. I meant stage or time. At an intermediate stage, we have these four possibilities. What I'm going to do for a given received sequence-- and let me actually put the received sequence I'm going to use in this example. We've got a received sequence. Let's say it's 0, 0 on the first stage, and then 0, 1 on the second stage, and 0, 1; 1, 0-- AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah? Did someone say something? No? I thought I heard-- OK. We'll park the question for now and check in again later. OK, here is the received sequence. What we're trying to do is find a path through the trellis where the emitted bits come closest to this in Hamming distance. Here is what Viterbi proposes to do. He says, from the starting state, let's find the optimum path to each of these states at any particular time, let's say a time i here. Here is time i. 
Let's find the optimum path to these with the associated minimum cost. So let's assume that I have that. So what I'm going to do is, for each of these, I'm going to put in some number. This won't be exactly the notation that we have on this slide, but it's streamlined. These p's correspond to what we call path metrics. And I should actually have an index i here to tell you that I'm doing this at time i, but I'll just leave that off. p sub 1 is the cost along the optimal path to state one. OK, so assume that, magically, you've computed the optimal path to this starting from the initial time. So what that means is maybe you've gone down to a particular stage here. You've gone down further. And then maybe you've come up here. Maybe that's the optimum path. So what you're going to keep track of is, for each of these times, for each of these states, what's the cost, the optimal cost? Or what's the cost along an optimal path there? OK, now what do I mean by cost? I just mean Hamming distance between what I received and the parity bits emitted along the way up to that point. So Viterbi is going to keep track of this for every stage as you step along and for every one of these states. Now, let's take this particular one. If I'm transitioning to state one here, let's see. This emits a 0, 0 if I go from-- let's go back to our trellis. I should draw this up, actually. But if I go along the top, I am at 0, 0. What other state comes into the top one? This comes in. And this emits 1, 1. So what's the cost I incurred if I take the upper path? The cost is just the Hamming distance between the 1, 0 that I received and the 0, 0 that I have here. So there is a cost of 1. Let me, again, use colors for costs here. What's the cost I incur if I instead come to this point from p2, again cost of 1? So this is the generic picture. What you're going to do is, you're having this-- you have this at any stage. You compute the branch costs and continue. And now suppose p1 was equal to 3 and p2 was equal to 4, and you wanted to figure out what's the shortest way, what's the minimum-cost way to get from the origin to this point, to p1 at the next time instant? What's the minimum cost? And what's the route? AUDIENCE: [INAUDIBLE] PROFESSOR: If you came from here, you've incurred a cost of 3 up to this point. And you're adding an additional cost of 1. You'll end up with a cost of 4 to get to here. If you get to here from p2, well, you've incurred a cost of, let's say, 4 up to this point. And now you're going to incur an additional cost to bring it to 5. So your best route to p1 at the next time is to come from p1 at this time using this arc. So if you've built it up at a particular stage, then it's actually very straightforward to figure out what you should do at the next stage. So let me now start putting some time indices on this. This would be p1 at time i is equal to 3. p1 at time i plus 1 is equal to 4. This is p2 at time i. So you can actually forget about this arrow, because there is no way you're going to use that arrow. Whenever you come to this state at this time, you're going to come via the upper branch. So at every stage, you're going to do this. And it's a very simple calculation. So now we've got slightly more elaborate notation up on the board, but I hope you have the general idea. This is an instance, by the way, and a way of thinking about such problems that's referred to as dynamic programming. It works for these sorts of routing problems.
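In Python, that add-compare-select step looks something like this, using the made-up numbers from the board: path metrics 3 and 4 at time i, received pair 1, 0, and arcs emitting 0, 0 and 1, 1.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

p1_i, p2_i = 3, 4                # path metrics of the two predecessor states
received = (1, 0)                # hard-decided received pair
arc_from_p1 = (0, 0)             # parity bits emitted on each incoming arc
arc_from_p2 = (1, 1)

# Add the branch metric to each predecessor's path metric; keep the minimum.
via_p1 = p1_i + hamming(received, arc_from_p1)   # 3 + 1 = 4
via_p2 = p2_i + hamming(received, arc_from_p2)   # 4 + 1 = 5
p1_next = min(via_p1, via_p2)                    # 4, reached via the upper arc
assert p1_next == 4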
We're routing ourselves along a trellis where the total cost of taking a path is the sum of the costs at every stage. So the total Hamming distance between the bits you emit along the way and the bits that you've received is made up of the Hamming distance between the branch here and the piece you've received here, plus the Hamming distance between the branch here and the-- sorry, the received segment over there, and so on. So the total Hamming distance is made up of the sum of the Hamming distances along the way. In all such situations where you've got a total cost that's additive over the path and you've got to do an optimization, dynamic programming is something you can think of. And the idea we've used here is actually one that you might come at naturally. If you found the best way from here to the Student Center, and it happens to go through Lobby 7, what's your best way from here to Lobby 7? Presumably, it's going to be the section of the path that you would take to the Student Center that passes through Lobby 7, because if you had a better way to get to Lobby 7, you would have used it to get to the Student Center via Lobby 7. It's just that idea. So on an optimum path where the costs are additive, it must be the case that the optimum path to an intermediate point is exactly the section of the optimum path to the point that you're looking at, a simple idea. OK, so let's go back to the more formal way it's written up here on the slide. So we talk about the branch metric. That's just the Hamming distance that we computed here for the branch. It's the difference between what we received and what would be transmitted if you moved along that arc. So that's the branch metric. It's the piece contributed by the branch. This is the notation we've used. We've already talked about this, that you could either do a hard decision kind of rule where you've already set these to 1's and 0's. Or you could stick with the original samples. If you've already converted them to 1's and 0's, there is a natural notion of distance, which is the Hamming distance. And there is a probabilistic reason why you would want to do that. So we're sticking to the Hamming distance setting right now, so hard decision decoding. And the path metric, this is a more elaborate notation than what I have here. So instead of a subscript to denote the state, this has got the state index here, and the time index here, and pm for path metric instead of just p, but it's the same thing. So for each state and at each stage, so for each of the four states and for each of the stages, you're going to compute this. And the path metric up to time i is the smallest sum of the branch metrics over all the sequences that will get you to that place. And if you assume you have that at any stage, then the computation that takes you to the next stage is an easy one. I think I've said all this. You can come back to it later. So let's actually just step through this. So we're at some intermediate stage. We're just doing the same thing I had on the board. I'm doing it again in pictures here so you get to think about it one more time. Suppose we've received 0, 0. We first label each of the arcs here by the Hamming distance between the bits we'd emit along the arc and the bits we've actually received, so Hamming distance 0 here between what we would emit and what we received, Hamming distance 2 here on this arc, Hamming distance 2 on this arc, and so on.
So the red numbers here are below the top two just are the costs on the arcs. Actually, I don't like the last line of that slide. So you may want to strike that. We're not going to really be talking about the most likely branch metric. We're only going to make decisions once we're done with the whole path. So we assume at some stage, that we have the path metrics up to that point. And then we do the computation that I just talked about. So let's see. In this particular case, what would be the path metric value in this position? It's the same thing we did already, but just another chance to look at it. What would be the value of the path metric there? 3? Because you can either do 3 plus 1 on that arc. Or you can do 2 plus 1 on this-- sorry, 3 plus 1 on this arc for a cost of 4 or 2 plus 1 on this arc for a cost of 3. So it should be a 3 there. And this is the arc that you would pick. And similarly, you can do it for all of them. So once you have one stage, you can fill out the next stage completely and then keep track of the arcs that lead you there. And at some point, you'll-- at each stage, actually, you can prune away things that you're not going to be using. So you're never going to use that edge. So you don't have to worry about it anymore. You're never going to use this edge. There are also stages where you might have two different ways of getting to a box and incurring the same cost. And then it doesn't matter which of them you pick. You can pick one or the other. In terms of the overall cost, it's not going to matter. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, so what you're saying is that, if there isn't a unique way back, then you're not certain. Well, you're never certain here in this business. You're just doing your best guess. So what you would do when you commit to one particular choice when there are two equally likely costs is you're saying, the probability of error is going to be the same with this choice as it will be with the other choice. And in the end, that's what we have for the metric. It is unsatisfying, perhaps. Now, there are schemes where you'd keep a list of the possibilities and try and do something with that, because maybe there is some higher level thing that would help you disambiguate between possibilities, but that would complicate the processing. But as far as this goes, you make a choice. And you move on. So you can imagine actually working through this whole thing. If you knew you were starting from the zero state, you'd start off with a zero cost there. So you're at infinity here, which is going to force all the optimal paths to come from 0. And then you'll continue from there. So I just wanted to show you a few things that come with this. Actually, I might have shown you everything I want on that. So let's just go back to the soft decision decoding. So how might things differ if you go back to soft decision decoding? So let me find that slide. The Viterbi algorithm doesn't care how you come at these costs. The Viterbi algorithm is just dynamic programming on this trellis. It finds you the minimum-cost path. It's up to you how you decide what cost to attribute to an edge. So the question is, are there other costs that you might have come up with? Well, if the received sequence has been translated already to 0's and 1's, then Hamming distance is the natural thing. But if you're keeping particular numbers there, then it turns out that you might want to do things differently. So suppose at a particular stage, what you got was not-- let's see. 
Did I put numbers up there? Suppose it wasn't 0 and 1, but it was some particular numbers, let's say, 0.3 and 0.7. And you had actually translated them to a 0 and a 1 in your hard decision decoding. If you decide not to do that, but to stick with these numbers, then what you have is the task for any particular edge that you're traveling on of finding the distance between the parity bits you would emit on that edge and the samples that you have here. It turns out that a very widely used cost for soft decision decoding is the sum of squared differences. So what you would have is 1 minus 0.3 squared plus 1 minus 0.7 squared. So it would be the first bit that you emit on this arc minus the first sample that you got, squared, plus the second bit that you would emit on this arc minus the second sample, the whole thing squared. If there was another arc that was a 1 and 0 arc, then what you would compute is 1 minus 0.3 squared plus 0 minus 0.7 squared. So it's just a different way of coming up with the cost. The rest of the Viterbi algorithm is exactly the same. The navigation through the trellis is exactly the same. It turns out that there is a logic and a reason behind this particular metric for situations where your voltage samples are distributed in the familiar bell-shaped fashion here, what's called a Gaussian distribution. We'll talk about it more next time. So what we're saying is that if you send a 1, you get a spread of possible values. The probability of your values falling in some particular range here can be computed by the area under this particular curve. It's got an analytical expression. So this is the most likely spot. But there are certainly probabilities of falling in any particular interval here. Well, what does the Gaussian distribution look like? We'll talk more about it. The essential part of it is e to the minus-- let's see. Let me put some labels here. This is where it's centered. Let me call it mu. And let's say x is the value along the axis. So we'll have e to the minus x minus mu, all squared, divided by some normalizing parameter. Well actually, let's just call it capital N. Think of capital N as a noise variance. Actually, let me just call it N sub 0 so you don't think it's a counting number. Think of it as a noise variance. So the larger that N is, would you spread out more or less here? Well, just from the fact that I call it a variance, maybe you would guess that, if N is larger, you're going to spread out more. Well, in this kind of setting, when you take log likelihoods-- you've seen that computation in the chapters-- what ends up appearing in your cost criterion is x minus mu squared. So it's the squared difference from the mean that you want to be looking at. And that's exactly why, in that kind of setting, this is what you end up choosing as your cost metric. But once you're done computing those metrics, the rest of the Viterbi algorithm is the same. So once you have the convolutional coding in hand, you know how to decode, you can start to do some comparisons of how these different codes perform. There is an extensive discussion in the chapter. Let me just give you some highlights here. OK, so what are we plotting here? What we're saying is we send a whole bunch of message bits through the channel. And then we decode at the other end. And what we're talking about is-- let's see. Here is the binary symmetric channel. Here is the error probability on the channel. You can't see it too well, but it's the-- why is that chopped off?
It's the probability of error overall end to end, not of the channel, but after you've done your coding and decoding. Let's see. Do we recognize any of these codes? Here is the uncoded case where basically you're exposing the stream directly to the error on the binary symmetric channel. We expect higher errors when we have higher probabilities of flipping a bit on the channel. So this is the uncoded case. What does the Hamming code do? That's the Hamming code there, the (7, 4). So the Hamming code performance, you can see here end to end what it looks like. What's the rate that goes with that? What's the rate of that Hamming code? 4 over 7, right? Because n is the number of bits in the message-- sorry, in the code word. And 4 is the number of bits in the message, so 4 over 7, a little over 1/2. Let's see. Do we know what that code might be? Any codes you know about that take 4 message bits and pad them to 8 code word bits? You've seen at least one such code. AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry? AUDIENCE: Rectangular parity? PROFESSOR: Rectangular parity, right? If you didn't have that corner parity bit, but you just did the rows and columns, then you'd arrange 4 bits in a 2 by 2 pattern and then have 4 parity bits. So that's a rectangular parity. That's rate 1/2. What this denotes is a convolutional code. It's actually the code we've been-- sorry, no, this one is the code we've been looking at. So let me explain to you what that notation means when you're reading the chapter. This code is represented as-- the one we've been talking about is represented as this. So what this is is the constraint length. And what this is is just to tell me that the generator bits I used for my parity generation correspond to the binary representation of 7 and the binary representation of 5. So remember that for my first parity bit, I chose xn plus xn minus 1 plus xn minus 2. I picked all three of them. For my second parity bit, I took xn plus xn minus 2. I skipped the middle one. So the notation that's used to denote a convolutional code with these two generators is, just for compactness, the 7, 5 there. Let's see. Is this redundant, the k, the value of k? Could you have figured out what the constraint length is? AUDIENCE: Yeah. PROFESSOR: Yeah. It's already staring at you here what the constraint length is. So this is a little redundant. It's just that it's a convenient way to distinguish a convolutional code from a Hamming code. Now, we have to be a little careful comparing these codes, because the rates are all a little different. Here the rate is 1/2. What's the rate for the two convolutional codes here, the 3, 7, 6 and the 3, 7, 5? AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry, what's-- AUDIENCE: 1/2 [INAUDIBLE]. PROFESSOR: 1/2, right? So the rate is 1 over the number of parity bits you're generating per message bit. One message bit, r parity bits, therefore, a rate of 1 over r. So these are rate one-half codes, just like the rectangular case. This is constraint length 4. And you can actually write out what that would be there. How big is the trellis for the constraint length 4 case? How many states? This last one down there? 8, right? Constraint length 4, that means k equals 4, 2 to the k minus 1, so 2 cubed states. So there are 8 states that we're talking about. The Cassini convolutional code that I showed you last time had a constraint length of 15. So how many states are there on the trellis? 2 to the 14, that's a lot of states. So that's a lot of computation there happening.
And actually, there is no hope of that having been done if it wasn't for the Viterbi algorithm. All right, we'll talk more about comparison between these codes next time. Sorry, I should do one more thing here. I did talk last time about this notion of free distance. Let's just stare at this a second. We said the free distance was the weight of the smallest non-zero codeword. And it gave you a handle on the performance of the code. It was the minimum Hamming distance for the set of code words you could generate between 0, 0 here and 0, 0 there. Can you see by inspection here what might be a candidate free distance here? I think what we had last time was 5, right? This is the 1, 1; 0, 1; 1, 1. And we'll pick up 1, 2, 3, 4, 5, a weight of 5. And it turns out there is no other path that's smaller. So the performance of this particular code is indicated by that number 5. It tells you that you can correct two bit errors. But actually, it tells you much more than you would typically try and extract from a typical block code where you would say, if there is Hamming distance 5, you can only correct two errors. Here you've got message bits that go on for a long time, thousands of bits. So what this is telling you is that in a duration that's of the order of five or six message bits, you can, with this scheme, correct up to two bit errors. You can have bursts of errors that are very frequent and correct them with this Viterbi decoding. So the free distance is an important notion. So when you do the examples in recitation tomorrow, please look out for what the free distance is for your codes and compare with what we have here.
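Pulling the pieces together, here is a minimal hard-decision Viterbi decoder in Python for the k = 3, (7, 5) code from this lecture. It is a sketch rather than the lab code: ties are broken arbitrarily, and the survivor paths are kept as explicit lists instead of the more memory-efficient predecessor arrays.

import itertools

G = [(1, 1, 1), (1, 0, 1)]     # generators 7 and 5 in binary

def viterbi_decode(rx_pairs):
    # States are (x[n-1], x[n-2]); start in 0, 0 with path metric 0.
    INF = float("inf")
    pm = {s: (0 if s == (0, 0) else INF)
          for s in itertools.product((0, 1), repeat=2)}
    paths = {s: [] for s in pm}
    for r in rx_pairs:
        new_pm, new_paths = {}, {}
        for s in pm:
            for bit in (0, 1):                  # two arcs leave every state
                full = (bit,) + s               # (x[n], x[n-1], x[n-2])
                parity = tuple(sum(g[i] * full[i] for i in range(3)) % 2
                               for g in G)
                bm = sum(a != b for a, b in zip(r, parity))  # branch metric
                ns = (bit, s[0])                # next state
                cost = pm[s] + bm
                if ns not in new_pm or cost < new_pm[ns]:    # add-compare-select
                    new_pm[ns] = cost
                    new_paths[ns] = paths[s] + [bit]
        pm, paths = new_pm, new_paths
    best = min(pm, key=pm.get)                  # lowest final path metric
    return paths[best], pm[best]

# Codeword for message 0,1,1,1 plus two flush 0's, with one bit flipped
# in the third pair: the decoder still recovers the message, at cost 1.
bits, cost = viterbi_decode([(0, 0), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)])
assert bits == [0, 1, 1, 1, 0, 0] and cost == 1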
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
9_Transmitting_on_a_physical_channel.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I wanted to begin by just picking up a couple of things from last time. This is part of our sneaky agenda of trying to teach you some probability as we go along. This is maybe a little less crucial than other things we've been doing, but to make sense of the last couple of slides from last time, there was actually stuff I swept under the rug that it won't hurt you to know. Just a reminder, we talked last time about the PDF of a random variable. I neglected to explicitly say that this is something that's got to be non-negative for all values of x. And the reason is that if the area under this thing is going to be the probability that the random variable falls in any interval, no matter how small, then since probabilities have to be non-negative, well, this function itself had better be non-negative. OK? So that's an explicit condition, and then there's the normalization. The question now is, if you're dealing with multiple random variables, how does the story change? So if you've got two random variables, let's say x and y, so two things that can take numerical values, say height and weight of a randomly picked person. We use a very analogous object. It's the joint PDF of the two random variables. So it's some function of two variables, non-negative, so you can imagine it like some probability mass that sits on the plane, normalized to unit total mass. And the amount of mass over any particular piece of area tells you what the probability is that you fall in that region. So the expected value, we talked about how to get expected value in the single dimensional case. In the 2-D case, it's a natural generalization. So the expected value of a function of x and y, just take that function under the integral signs, so you're taking an average with respect to the PDF. OK? So it's a very natural extension. This is for the case of two variables, and in the same way for m variables. Now last time, I talked about a very special case that involved multiple random variables. These were the random variables corresponding to the noise samples. So we sent out a nice clean looking signal. This was our x of n. And then what we received was something that was perturbed. And so we would have liked to get in the noise-free case the same thing, but what we got was this perturbed by a certain random amount. So the wi's-- this was w1, for instance. It's the amount that you add on or subtract from a given value to get the actually received values. OK? OK. So these are the wi's or wn's. So we had multiple random variables because we were taking many samples in the bit slot. And in particular, we looked at taking the average of a bunch of measurements. But now, what you've got is a function of many random variables. So what does it mean to take the expected value of a function of many variables? What does it mean to find the variance of that? Well, we didn't actually get into the details of it. But it turns out, there's special structure here that made those computations very simple. So one key thing was we said that these noise samples were independent from one time instant to another. And that's really the crucial thing.
The other piece was just for-- well, it made sense for our application because of the central limit theorem, but we assumed that these noise samples were Gaussian with variance sigma squared. There's a term I didn't use, by the way, for this kind of noise, but you'll see it in the notes. It's what's referred to as additive white Gaussian noise. The additive part is clear. The white means that it's IID noise. The reasons for that name become clear when you think about frequency domain, so we won't get into the origins of the name. But the key thing here is that these are independent random variables. So we should have actually been talking about the joint density of all these random variables when it came to computing expected values and variances. But it turns out that there are actually some simplifications, so I'm just going to give two statements which are the things I want you to carry away. One is that expectation is always additive. So when you take the expected value of-- I'm doing this for the case of two random variables by the way, but the same thing goes for m. If you've got the expectation of a sum of two functions of these random variables that you're interested in, the result is the sum of the individual expectations. And that's just the consequence of the fact that expectation is defined through integrals and integrals are additive in their arguments, in the integrands. Right? So that's all that's involved there. Now, the particular use we make of it is actually for a sum of this kind where one of the functions is a function of just one of the random variables, the other function is a function of just the other random variable. And so if you apply the result up there, you get the sum of the expectations. And the nice thing now is that each of these is just a 1-D expectation. So we never have to deal with joint PDFs, joint distributions, multiple integrals, and so on. It all stays as simple as the 1-D case. And the reason is that in every instance we talk about, when we have the expectation of a sum of functions of multiple random variables, well, the sum actually involves functions, each of which is a function of only one of those random variables. So it actually becomes very easy to compute. Here's another thing that's interesting with expectations, which is that under independence, you can actually have expectations be multiplicative. In fact, let's see, so the expected value of the product of a function of just one of the random variables and a function of just the other factors into the product of the individual expectations if these two random variables are independent. So that, again, ends up being used. For instance, when you're computing the variance of, let's say, w1 plus w2, so the variance of this is going to involve computing-- this is the 0 mean case. So you'll find that you're computing the expected value of w1 squared and then the expected value of w2 squared. And then you've got twice the expected value of w1 w2. OK? So while this is easy, this is sigma squared, this is sigma squared, what do we do with this term? Well, if it was a general function of two random variables, then you've got to pull in the joint density, and it becomes a big operation. But if these are independent, then this expectation factors into the product of the individual ones. These are 0 mean random variables. And so this goes to 0. OK? So this kind of computation was going on in the results I quoted or claimed in the last lecture, and I just wanted you to have that in mind. OK.
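Those two facts are easy to check numerically. Here is a short Python simulation with two independent, zero-mean Gaussian noise samples; the variance value is made up.

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
w1 = rng.normal(0, sigma, 1_000_000)   # independent zero-mean noise samples
w2 = rng.normal(0, sigma, 1_000_000)

# Additivity: E[w1 + w2] = E[w1] + E[w2], independence not needed.
print(np.mean(w1 + w2), np.mean(w1) + np.mean(w2))   # both near 0

# Independence makes the cross term E[w1 w2] = E[w1] E[w2] = 0,
# so var(w1 + w2) = 2 sigma^2, with no contribution from 2 E[w1 w2].
print(np.mean(w1 * w2))                # near 0
print(np.var(w1 + w2), 2 * sigma**2)   # both near 0.5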
But what I really want to talk about for the rest of the lecture is going back to understanding and modeling the single link. So we'll leave the probabilistic stuff and that for now. OK. So we're back to this picture of bits coming in, being converted to signals. The signals are in discrete time here. They're being adapted then to transmit on an analog channel, which has the noise in it, and then at the other end back to discrete time through an inverse kind of operation and out to bits again. So we're going to look in more detail at what goes on in there. What we did with all our noise analysis last time was really focused on this box where the decision is. Right? You get a noisy sample and you're trying to decide what you have. So the lecture last time was focused on that box, and we're going to look at the other parts of the picture this time. OK. So the digitized symbols that we talk about, here's an example. We saw this last time. So what you're doing is taking the bitstream and deciding that you're going to represent it, for instance, as a voltage of one held for a number of samples to indicate a 1, a voltage of 0 held for a number of samples to indicate a 0, and so on. OK? So this is the signal that you want to get across the channel. And then at the other end, that signal will get interpreted as a string of bits. OK? When you do the optimal detection that we talked about last time. So this link actually has-- it's a very hybrid kind of thing because you've got clocked discrete time stuff happening here. This is a digital-to-analog converter, so you're going from clocked discrete time to continuous time. You typically have a continuous analog channel. And then at this end, again, back to discrete time, so analog to digital. Digital, by the way, we use that word a lot. What we typically mean is something gets sampled, it's a discrete time signal, and there's often the implication that it's quantized to one of a set of levels. It's basically the signal you'll deal with in a processor whereas all of this is the signal that you deal with in the physics, in the analog part. OK. Then you're back to discrete time here. And actually, there are two clocks even there because there's a particular clock that drives all of this signal processing. But then when you come to spitting out the bits, you're only going to spit out one bit per bit slot. So you've got many samples per bit slot, and then when you come all the way out, you're only going to report one number, a 1 or a 0, per bit slot. OK? So there's all of this mixed together in the system. All right. So let's look at the particular case you're going to be seeing in lab. I put this up on the slides last time as well. So we're going to talk about the specific case of a channel that's just an acoustic channel. So we're going to have sound coming out of a loudspeaker, that's your transmitter, sound getting picked up in a microphone, and that's your receiver, and then all of the signal processing. OK? So labs four through six are going to be centered around this. So the challenge then is taking the digitized symbols there and putting them onto a physical channel. So what is it that happens in between? Let's see. The D to A converter, we don't say much about that, but a typical D to A converter is taking a sequence of samples, which are just numbers, and then converting the samples to a continuous waveform, which is on a time axis.
So a discrete time sequence is typically on an integer axis, but in a D to A converter, what typically happens is that you're converting this to a continuous time waveform. And the simplest way to do that is through what's called a zero order hold. So you take the value here, hold it for some interval of time, t seconds, and you take the next value, you hold it for t seconds and so on. And then when you get a change in value, you change to the new value. So you would hold this here, and then come down here, hold it. So at the end of it, what you've done is convert a discrete time sequence into a continuous time waveform that can then be applied to something like the loudspeaker. Right? So you'll have to specify in the D to A converter what your reconstruction interval is. This kind of a D to A converter would be called a zero order hold. Zero order because it just looks at the most recent sample and holds it. A first order hold would look at the last two samples and do a linear projection. So you can imagine more elaborate ways of doing the digital-to-analog conversion. So what I want you to imagine is that when we get to the DAC finally, it's going to be something like this. So all my pictures will be discrete time sequences, and I won't say much about what goes on here. So I want you to imagine that whenever I have a sequence like this, and then I end up putting it on the physical channel, there's been a conversion of this type. OK? That's what your D to A card will do in the computer at this point. So you feed it a bunch of numbers. You give it a sampling rate or a reconstruction rate. And then it does this kind of interpolation. OK. So let's see. Is this a good voltage to put on a loudspeaker? If I wanted to signal a one or a whole series of ones, do I want to put a constant voltage on a loudspeaker? A loudspeaker is not very happy getting a DC voltage on it. Right? So the point is here that you have to think about your transmission medium and what it's happiest responding to. So you've got to adapt your signal to the capabilities of the physical medium. And that's what modulation is all about, or at least that's a key part of it. Another part of modulation, we'll see later, is to allow you to have multiple users share the same channel. But a big part of it is just adapting your signal to a form that is comfortable for the channel. So here's what you might try in the case of the loudspeaker. So what you've got is the acoustic channel. You would like to transmit these two levels, v0 and v1, to represent the 0 and the 1. I'm just generalizing here. I'm just saying that there's some level v0 that represents the 0 and there's some level v1, and I will allow you to pick different possibilities here. So what's typically done is instead of trying to transmit the DC, you transmit a burst of a sinusoid because loudspeakers like sinusoids provided they're at the right frequency. So again, you've got to think about what frequency makes sense. It's this cone trying to move a mass of air, to try to make a mass of air oscillate. There are particular frequencies that are good for that, so you have to think about that. So you might, for instance, say that you want sinusoidal bursts at two kilohertz, and you're going to modulate the amplitude. So you'll send a burst of v0 cosine 2 pi fc t to represent the 0, and you'll send a burst of v1 cosine 2 pi fc t to represent the 1. So if it's simple on-off keying that we've been talking about, you'll have v0 equals 0.
So with simple on-off keying, which is what we've been talking about, you'll have v0 equal to 0 -- in other words, you'll signal a 0 by sending nothing out on the loudspeaker. And then when you want to signal a 1, you'll have a cosine of amplitude capital V. In the other case here, what you do is you have minus V cosine going out to signal a 0 and plus V cosine going out to signal a 1. So that's basically just a 180 degree change of phase. Every time you want to shift from a 1 to a 0, or a 0 to a 1, you change the phase by 180 degrees. That changes the sign. Right? Because it's fixed amplitude, and you step the phase each time you want to change. So this would be a natural way to do it. Why two kilohertz? Well, we know that this is the kind of frequency that a loudspeaker likes. And the more general principle is, whenever you're trying to radiate energy, the size of the antenna element that you use, for efficient transfer of power, has to be comparable with the wavelength of the signal that you have. So for instance, if you're talking about sound at two kilohertz: the speed of sound in air at room temperature is something on the order of 340 meters per second. If you do the computation of the wavelength -- and I always do it with the units, so I may get it wrong here -- it's 340 meters per second, and for the wavelength, I know that I want the answer to come out in meters. I've got two kilohertz, so I've got to divide by 2000 per second. Right? That's the units of frequency, and that's going to give me something in meters. And if I want it to come out in centimeters, then it's 340 divided by 20, so that's 17 centimeters. And, well, 17 is in the ballpark for the dimensions of a speaker. Actually, it depends on the details of how this is done, but you might be satisfied with a quarter wavelength for the transmission. A quarter of 17 centimeters is very well within the range of what a speaker on a laptop might be, depending on the size of your laptop, I guess. OK. So all of this goes on in trying to figure out how to modulate the signal onto a channel. So these are instances -- actually, very simple instances -- of what's called amplitude modulation. In the very first lecture, when we were trying to distinguish analog communication from digital communication, I mentioned that the typical analog communication scheme might be AM, amplitude modulation, where you take a carrier and you modulate its amplitude. And the amplitude is what carries the information. So this would be something of the type x(t) cos(2 pi f_c t). All right? So the carrier is a pure cosine, and you have an amplitude that's slowly varying. And it's the amplitude that carries the information for the analog communication. So at the receiver, what would be done is to figure out some way of extracting the envelope here. All right? That's classic AM. Ours is a very simple case, where our x(t) is either 0 or V in the case of on-off keying, or it's minus V or plus V in the case of bipolar keying. But it's still an AM kind of modulation. OK. So what I have up here is actually just to remind you that this also happened in the neighborhood here. This is Fessenden, who on Christmas Eve in 1906 is credited with making the first wireless voice transmission, as opposed to the Morse code transmissions which had been around for a while. And what was his oscillator? Well, what was his antenna? It was this 420-foot thing -- that's, I think, about 120 meters.
So if you actually put in the speed of light, you'll see what kind of antenna size you need -- sorry, what kind of frequency you need to excite this with for the wavelength to be comparable to the size of the antenna. So let's see, 3 times 10 to the 8 meters per second is the speed of light, and I've got an antenna that's 120 meters, so call it about 10 to the 2. So it's about 3 times 10 to the 6 hertz that I need to be exciting this at. Well, he wasn't able to get anywhere near that. He actually had this big electrical machine that could generate a sinusoid of about 50 kilohertz, for which he'd have needed a much taller antenna to have efficient transmission, but it was enough for the signal to be picked up a few kilometers away. And he claimed that it was heard all the way down the coast to Virginia and so on, but there's some controversy about that. Anyway, he's credited with developing a lot of the basic technology for AM and for developing these machines and all of that. I like the name of the cocktail named in his honor by the city of Marshfield. Here's a picture of the antenna from an old postcard. Looks very Cape Cod and Marshfield-y. He had a companion system built in Scotland, but a careless workman at one point disconnected a particular cable that was tying it to the ground, and the whole tower collapsed. So his transatlantic experiments were set back for a while. OK. So how is this done? Well, for our setting, it's actually quite easy. We've got our digitized symbols coming out here. This is the x of n -- something like this if we're doing bipolar signaling. And we're going to multiply it by a cosine. So here's our cosine carrier. We'll use capital omega to denote frequency in these discrete time signals. And I'm using angular frequency, so this is 2 pi times whatever other frequency you're used to thinking in terms of, but this is typical for discrete time signals. And so this is what it looks like. This is for the specific case of the x of n I showed earlier. So let me flip back to show you that. This one. OK? So we're taking this waveform and multiplying it by cosine. So what you're going to have is a burst of cosine and then 0, then a longer burst of cosine and then 0, and then a burst of cosine and then 0, and then a burst of cosine again. So that's what we're seeing here. OK. A short burst of cosine -- this is 16 samples long -- and then 32 samples of 0 and then 48 samples of cosine and so on. All right? So your loudspeaker is emitting power and then turning off and emitting power and turning off, and this is what gets picked up by the microphone. So the microphone has to figure out -- this is the particular case of an on-off signaling scheme -- the microphone has to figure out what's being sent. OK. So any particular thoughts on how you might recover things at the receiving end? What I'm not showing you is the D-to-A converter, which is going to take this thing, interpolate these points, put it out on a real time axis, and put it out on the channel. OK? I'm imagining all of that kind of stuff going on, but we're just going to look from discrete signal to discrete signal, so I'm suppressing all the stuff in between. So at the other end, you somehow -- let me assume there was no distortion on the channel, and you figured out a way to get exactly this after sampling at the receiving end. If you got this signal, what might you do to it to recover those 0s and 1s? Any particular ideas? If you had to write an algorithm to do that? So I'm saying, imagine no distortion on your channel.
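Before we get to answers, here is the transmit side just described, as a sketch; the 16-sample carrier period matches the example above, and the names are mine:

```python
import numpy as np

def modulate(x, omega=2 * np.pi / 16):
    """Multiply the baseband sequence x[n] by the carrier cos(Omega * n).
    With on-off keying (x[n] is 0 or 1), this gives bursts of cosine
    separated by stretches of silence."""
    n = np.arange(len(x))
    return np.asarray(x, dtype=float) * np.cos(omega * n)

# E.g., with 16 samples per bit and the bitstream 1, 0, 1, you'd get
# 16 samples of cosine, 16 samples of 0, and 16 samples of cosine.
```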
You hear a sound on your microphone, you take samples of it, you get a signal like this, and now you've got to decode. Anything? It ought to be pretty simple. Right? Yeah. STUDENT: Take the absolute value [INAUDIBLE]. PROFESSOR: OK, yeah, I like that. So let's take the absolute value. So you might get something like this. And then there's a gap, and then you get something like this, and so on. Right? I'm drawing these as though they're continuous, but actually, what you're getting is a bunch of samples. And these are supposed to be sections of sinusoid. This is what we call a rectified sinusoid. Electrical engineers say rectified; it's just taking the absolute value. Right? Once you have this, you're probably in better shape to try and figure out where there is signal and where there is not. So what could you do? What might you try doing? Felix? You want to continue? What's the next step? STUDENT: So you can take the moving maximum [INAUDIBLE]? PROFESSOR: OK. So you're saying, let's have a sliding window of some kind that looks to see how much of the signal is in there. Is there any window size that will actually give you a constant signal if you're in the body of this? I mean, if you took a window that was equal to a period here -- this is periodic, right? While the sinusoid is ringing, it's periodic. So if you took a window of this size and then slid it along, at least in the body of this, you're going to get a constant, because you're picking up the average value of the rectified sinusoid. And then near the ends, you're going to get some effects, whatever they are. But that should be enough to give you a good stretch of signal. And again, you're not getting a continuous green thing, you're actually getting samples. Right? But that should be enough to help you figure out where you have zeros and where you have ones. So very simple. What if we have a bipolar scheme? Suppose we have a signal that can be plus or minus. And so what we have is not this, but we have a phase change every time we go from a 1 to a 0. Then, if you take the absolute value, you've lost all the information. Right? So we've got to figure out something else. OK. So the more general way to do this, for instance for the case where you have the bipolar transmission, is -- well, it turns out that I had the same idea as you did, which was take the absolute value and then a local average over a half period. It's a very natural thing to think of as a way to extract that. That works for the on-off signaling, but for the bipolar, it's a little trickier. So here's what a typical general demodulation scheme is for amplitude modulated signals. Here's the transmitted signal. It's been received. I've converted it from analog to digital and so on. I'm going to do the same thing again. I'm going to multiply it locally by a cosine at the same frequency. So I have a local oscillator at my receiver that's got the same frequency as the carrier frequency that was used for transmission. OK? When you tune your radio on AM, what you're doing is actually adjusting the local oscillator frequency to match that of the station frequency. And the station frequencies, by the way, should be obvious by now: when you have, what is it, 820 AM or whatever, what they're announcing is the frequency of their carrier. That's how they're known. They're known by the frequency of the carrier. OK. So here's what the result is. You've got the signal that you transmitted multiplied by the cosine. What's the signal that you transmitted?
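(As a sketch of that rectify-and-average idea for the on-off case, before we work out the general scheme: a window of half the carrier period -- one full period of the rectified sinusoid, 8 samples for the 16-sample carrier above -- is what makes the body of each burst come out roughly constant. The names are mine.)

```python
import numpy as np

def envelope_detect(y, carrier_period=16):
    """On-off envelope detector: rectify, then average over one period
    of the rectified sinusoid (half the carrier period). This loses
    sign information, so it won't work for bipolar signaling."""
    L = carrier_period // 2
    window = np.ones(L) / L
    return np.convolve(np.abs(y), window, mode="same")
```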
Well, the signal you transmitted was the signal you wanted to get across, but multiplied by the carrier at the transmitting end. OK. So you've got cosine squared there. We have a standard identity for cosine squared: cos^2(theta) = (1/2)(1 + cos(2 theta)) -- half of 1 plus cosine of twice the angle. Right? Yeah? STUDENT: What if you've got a phase shift? PROFESSOR: OK, yeah. Good question. The question is, what if you have a phase shift? Because this assumes not just that you have the right frequency, but that you have exactly the right phase. And it turns out that that's something that has to be dealt with. So there are mechanisms for doing that. Basically, in one way of tackling that, what you end up doing is you multiply by a cosine and you multiply by a sine. And by looking at the outputs of both of those, you can actually figure out what the right phase shift is. So that's a good question. It's not just phase shift -- there are also time delays in propagation and so on, so it's a real issue. But let's just deal with the simplest case for now. OK. So what you have after the multiplication is something that actually has the signal you're interested in -- that first part, just scaled by a factor of a half -- and then it's got some stuff that you don't want. It's got a double frequency component, which you have to try and get rid of. But the nice thing here is, if you were able to get rid of the double frequency component, then you have x of n there whether it's plus or minus. So the sign is not lost. It's not the absolute value of x anymore that we're recovering, it's the actual x. So your x can go positive or negative, and you'll pull it out. OK? So this is better than just taking the absolute value and doing a local filtering. OK. So here's what that looks like for the particular example we have. So you can see the double frequency cosine over here, and then the average value that that cosine is riding on is going between 1 and 0 in this particular case. OK? So this was a 0-1 waveform that we transmitted, but we're demodulating it using a scheme that could have actually handled a signal that went negative as well. This example just doesn't have a signal that goes negative. It was the one that I showed you earlier. But you can see that the average value here is picking out exactly what you want. So your challenge now is to get rid of the double frequency piece. So does someone want to suggest to me how you might do that computationally? Yeah. STUDENT: Question. PROFESSOR: Yeah. STUDENT: If you wanted to extend [INAUDIBLE] by extra sign, why don't we want to multiply by it again? PROFESSOR: Oh, I see what you're saying. What you're saying is, we could have done this more simply -- that's a good question. We could just divide by the cosine here and get what we want, because what went down was the transmitted signal times the cosine. I never thought of that. Could there be a problem with it? STUDENT: Maybe if cosine has a value of 0. PROFESSOR: Yeah. So you see, the point is that the cosine has multiple zero crossings. And in the discrete time case, of course, it depends on what that frequency is -- you might not go exactly through 0. But then you're going to be horrendously sensitive to noise and other things in the system. So that's a good thought. It's like in the Viterbi case, by the way: if you're not thinking of noise, then there are very simple ways to combine the parity streams to recover the input. But as soon as there's some noise, all of this falls apart. So the scheme is robust to noise, up to a point.
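Here's the multiply-and-filter chain as a sketch; the averaging step uses the fact, worked out next, that summing over one full period of the double-frequency component (8 samples for a 16-sample carrier) kills it:

```python
import numpy as np

def demodulate(y, omega=2 * np.pi / 16):
    """Coherent demodulation: multiply by a local oscillator at the
    carrier frequency (and, here, the right phase). Since
    cos^2(theta) = (1 + cos(2*theta)) / 2, the product is x[n]/2
    plus a double-frequency term."""
    n = np.arange(len(y))
    return y * np.cos(omega * n)

def remove_double_frequency(z, period=8):
    """Average over one full period of the double-frequency component,
    which sums that component to zero and leaves roughly x[n]/2."""
    window = np.ones(period) / period
    return np.convolve(z, window, mode="same")
```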
Only up to a point, of course -- you know, if you've listened to AM radio, it can get annoyingly noisy -- but up to a point, it's fine. OK. So we were here, and I was asking, how might you get rid of that double frequency component? Any ideas? Someone who hasn't spoken today, maybe? I want to do some filtering operation on this. I want to run some algorithm on this that's going to eliminate the double frequency piece and just get me the nice waveform back. Yeah. STUDENT: You can do the same thing. PROFESSOR: I can do the same thing. OK. So what interval would I pick then? STUDENT: It would be not the double frequency, the single frequency. PROFESSOR: OK. I could get away with, let's see -- I could get away with the period of the double frequency. Right. If the window was equal to the period of the double frequency component, then if I take the average over that, I've got a full cycle of the double frequency component, and it will average off to 0. Right? So that's what I need to do. And that's the simplest way to do it. So the filter here, the simplest one, just puts out a signal that sums L plus 1 of these values, where L plus 1 is 8. That's exactly the period of the double frequency component. Remember, we had 16 samples for the original carrier, so the double frequency component has a period of 8. And so the 2 omega piece gets eliminated, and here's what we get. OK. So there's some transition at the ends, because when you get to the ends of something like this and you're doing the averaging, well, now you've got a little bit of the previous bit's worth and a little bit of the current bit's worth, so there's going to be a transition. But it'll still leave you with plenty of room in which to make your decision as to what you have. OK. So now what I want to do for the rest of the lecture -- and we're going to continue well into the next few lectures -- is say we've understood, at least at some level, how you might get across the analog channel. So let's now just focus on input to output here. So we've got a discrete time signal here, and I get a discrete time signal coming out there. And I can think of that as my channel input and output. I can suppress all this other stuff. I know that all of that goes on, but in terms of designing what I want to do with the signals, I can just look at it end-to-end. So we're going to talk about models for end-to-end behavior, from the discrete time sequence that goes in to the discrete time sequence that you reconstruct at the other end. OK? So abstractly, what we have is some system, S. We've got an input sequence, an output sequence. And this is typically how these things are drawn. I just want to caution you on this. When you see a diagram like this, you want to think of it as a snapshot of the system at a time n. You don't want to think of it as saying that the value of y at the output at some particular time n is determined by the value of x at that same time n. OK? The real story is that, in general for such systems, the value of the output at any particular time n is determined by all the values of the input. If it's a causal system, then it only depends on the present and past inputs. But in some settings, you can think about non-causal kinds of processing. So in general, when you see a picture like this, think of it as a snapshot of the system at time n. But it's not telling you that x at that one time gets mapped to y at that one time. That's not what's happening.
I'm being a little fussy about this because, often, it's glossed over, and then it leads to confusion. More generally, if I want you to be thinking of the signal as a whole, I'll use this notation: I'll just put a dot there to say it's the entire waveform that I'm referring to, because I don't want you to fixate on a particular time instant. OK. So that's our system. There's a little more fussing about notation here, but I don't think I want to bother with that now. Please look at it when you review the slides. Let's go to talking about some particular signals -- and these may be ones you've seen in 6.01 -- that are convenient signals to talk about. We'll do a lot with unit step functions. So these are signals. This is a signal that is 0 for all negative time. And then at time 0, it goes up to the value 1 and then stays at 1. So that's the unit step. And our standard notation for it is u of n. So when you see u of n, that's what you want to imagine. And u of n minus 3, then -- well, this will have the same value at n equals 3 that this had at n equals 0. So the point of transition for this waveform must be at n equals 3. OK? So it's the same signal, just delayed by 3. So u of n minus 3 just has its step three instants later. Here's another very special signal that we use a lot. It's what's called the unit sample function, or unit sample signal. So it's an entire signal -- it's not just one value, it's a signal, denoted by the symbol delta of n. It's 0 for all positive and negative time, but at time 0, it has the value 1. So that's the unit sample. And so if you had, for instance, delta of n plus 5, it's the shifted version of that function. And what happened at n equals 0 here will now happen at n equals minus 5. In other words, the spike up there is at n equals minus 5. So we're going to need to get comfortable with unit step functions and their shifted versions, and the unit sample functions and their shifted versions. OK. And there's a relation between the two as well. So you should see fairly clearly that you can actually write the unit sample function as a difference of a unit step and a delayed unit step: delta of n equals u of n minus u of n minus 1. OK? Now, we can do standard algebraic operations on signals. So I can take a unit step function and do things like u of n plus 3 times u of n minus 7 -- that is, u[n] + 3u[n-7]. And what this means is: draw the signal u of n; draw the signal 3 times u of n minus 7, which is just u of n minus 7 with each value scaled by 3; and then just add them instant by instant. So it's the most natural way of adding signals. It's exactly what you would do if you were adding or multiplying functions of continuous time from your calculus course. It's the same idea. So we can do all of these algebraic operations. Now, the response of a system to the unit step or the unit sample is interesting. It's not interesting for all sorts of systems, but it's interesting for the class of systems that we'll be focusing on. So let's talk about that. The unit sample response of a system is just the output signal that you get when the input signal is the unit sample. OK? This is the traditional symbol for it, h of n. So when you see h of n, you're typically thinking of the unit sample response. Similarly, for the unit step response, put in a unit step; the signal that you have at the output is the unit step response, and we'll use the symbol s of n for that. So the reason that these are useful in many contexts is that you can take a general signal -- for instance, the one at the top there -- and represent it as a weighted sum of scaled and shifted unit sample functions. OK?
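Here are these basic signals in code, for reference; u and delta are defined pointwise, and the step-difference identity falls right out:

```python
import numpy as np

def u(n):
    """Unit step: 0 for n < 0, 1 for n >= 0 (works elementwise)."""
    return np.where(np.asarray(n) >= 0, 1, 0)

def delta(n):
    """Unit sample: 1 at n == 0, 0 everywhere else."""
    return np.where(np.asarray(n) == 0, 1, 0)

n = np.arange(-5, 6)
assert np.array_equal(delta(n), u(n) - u(n - 1))  # delta[n] = u[n] - u[n-1]
# Shifts work the obvious way: u(n - 3) steps up at n = 3, and
# delta(n + 5) has its spike at n = -5.
```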
So this is just saying I can think of this function as being a shifted unit sample scaled by something, plus another one scaled by something else, plus another one scaled by something else, and so on. Here is the analytical expression that goes with that: x[n] is the sum over k of x[k] times delta[n-k]. So any signal can be thought of as being made up of a bunch of unit samples, appropriately scaled and appropriately delayed. The same thing with the unit steps. And these are waveforms, as you've seen -- this is the kind of waveform we work with a lot in the context of communication. So we've got these sorts of rectangular waveforms. And it's very useful to think of them as being combinations of unit steps. So the waveform at the top can be generated by having a unit step at this time climbing up. And then I've got to cancel its effect at this next transition, so I put a negative-going unit step. And I want to bring it back up again at that time, so I put another unit step, and so on. So you can synthesize a signal of this type as a linear combination of unit steps, scaled and delayed. Now, this is actually important when you get to particular classes of systems for which you can exploit these properties. So let me tell you what linearity is and what time invariance is, because the rest of our talk about systems is going to be focused on linear and time invariant systems. So let's start with what a time invariant system is. It's just a system whose response to a given input doesn't depend on the day of the week that you do the experiment. If you come back tomorrow and do the same experiment, you'll get the same result, except it's happening tomorrow instead of today. Right? So it's a system where, if you delayed the input by some capital N, the response is the response you had previously, just delayed by the same amount. So a time invariant system is one where the response doesn't depend on absolute position on the time axis. If you shift the input by some amount, you get the same response but shifted by that same amount. So it's a very easy idea. OK. So for instance, if you had a time invariant system and you put in a delayed unit sample -- a unit sample that has the value 1 at the point capital N -- your response will be the unit sample response, but correspondingly delayed. Right? So time invariance of a system allows you to do that, and that's very convenient. Here's the other property that's crucial, which is linearity. And we've talked about linearity a lot along the way. Here is a definition in this context. OK. So we say the system S is linear if you can do the following. Take the response y1 to an experiment with an arbitrary input x1. OK? So you put in an arbitrary input x1, you get the response y1. Put in an arbitrary input x2 in another experiment, and get out the response y2. If it's true that any weighted linear combination of those inputs from the previous two experiments gives rise to a response that's the same weighted combination of the original responses, then you have a linear system. OK? So if superposition of inputs according to this formula -- a weighted sum -- leads to an output that's the same weighted combination, and this is true for arbitrary inputs and arbitrary scale factors, then what you have is a linear system. So one important conclusion from that, by the way, is that if you have a linear system and you put in the all-zero input, the response must also be the all-zero response. And I'll leave you to think about why that might be true. OK.
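Here's a quick numerical check of that definition on the averaging filter from the demodulator sketch, which is linear (and time invariant, too); the random test inputs and scale factors are arbitrary:

```python
import numpy as np

def moving_average(x, L=8):
    """The L-sample averaging filter from earlier -- an LTI system."""
    return np.convolve(x, np.ones(L) / L)[:len(x)]

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=50), rng.normal(size=50)
a, b = 2.0, -3.0

# Superposition: the response to a*x1 + b*x2 is a*S(x1) + b*S(x2).
lhs = moving_average(a * x1 + b * x2)
rhs = a * moving_average(x1) + b * moving_average(x2)
assert np.allclose(lhs, rhs)

# And the all-zero input gives the all-zero output (take a = b = 0).
assert np.allclose(moving_average(np.zeros(50)), 0)
```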
So the systems that we are going to be focusing on will be linear and time invariant. OK? LTI. And so we're going to be thinking of end-to-end models of the channel -- our digitized sequence x of n, all the way through the processing that happened in between. OK, so what did we have? We had modulation. We had D-to-A conversion. We had the channel. We had A-to-D. We had demodulation. And there was a filtering operation as well, as part of the demodulation. So all of that, end-to-end, we're going to try representing as a linear time invariant model. So it'll be an approximation. Real channels are more complicated. But it turns out that LTI is a very good place to start, and not just for the communication setting -- for a whole variety of settings. So it turns out that if you're just talking about small deviations from some nominal operating point, a linear model is not bad. And the reason is, well, it goes back to Taylor series kinds of thinking. You can have a very non-linear function, but if you're looking in the neighborhood of some operating point -- stick it up there. So here's some operating point, and you're only looking at small perturbations around there. The linear approximation is not a bad one. OK? So it's really essentially that idea, that first-order Taylor series kinds of approximations are good. And so linearity works. Time invariance works because many systems are inherently time invariant. Now, that's not always true for communication channels. If you've got a mobile device, for instance -- well, if you're mobile with your mobile device -- the channel is changing all the time. So you need to reckon with time varying channels, or what are called fading channels. But for many situations, a time invariant channel is reasonable. So time invariant and linear is a good approximation. Now, as soon as you invoke linearity and time invariance, such a rich structure opens up that there's a lot you can do by way of analysis and developing computational tools and so on. And that gives you a very good handle on doing design. So if you're trying to design something that's, ideally, linear and time invariant, you have a large array of tools at your disposal. So you'll find that in engineered systems, people are trying to design modules that are thought about as LTI systems, and then interconnecting them, maybe, in non-linear or time varying ways. OK. We'll pick up on this in recitation tomorrow and in the next lecture, which will be Wednesday of next week. And make sure you're aware of what the portions are for the quiz, and where you have to go for the quiz rooms and all of that, on Thursday evening next week.
PROFESSOR: So my goal for today is twofold. The first is -- we've looked at the history of the internet in the '60s and the '70s, and today, my goal is to tell you about what happened in the '80s and the '90s, as well as the last decade. And we'll do that in maybe 30, 35 minutes. But I want to tell you about two interesting problems that got solved, both related to topics we've studied. The first is how the internet dealt with a serious problem called congestion collapse, which happened in the mid-1980s, when TCP -- the transmission protocol at the time, and still the dominant protocol in the world -- didn't have many of the new ideas that it now has. In fact, the implementation of TCP in the 1980s was almost exactly the implementation of your sliding window protocol from the second task of the last lab. And in fact -- I think Tyler posted it on Piazza -- there was a particular way in which he had dealt with the windowing scheme that happened to do better than the lab solution. And I remarked there that the reason the lab solutions are the way they are is that they're a little bit more general than the particular topology. But the topology in the lab was specifically picked so you didn't have to worry about congestion happening. Today, I'll tell you what happens when congestion happens, and the solutions that were adopted. And it'll be a little bit of a preview of 6.033. You'll study this topic at some length in 6.033. The other problem I want to tell you about today through these remarks is how easy it is to hijack routes on the internet. And I'll go through some examples of this happening in reality. I mean, it is literally something you can do in your dorm. You just have to pretend you're an ISP, or set yourself up as a small internet service provider. And you can actually wreak a fair bit of damage on the rest of the world if you're so inclined, and then pretend that it was just a, you know, error. So those are the two technical ideas. None of this stuff is really on the quiz, though it's probably helpful to think about things like route hijacking, because they apply to some of the concepts we've studied. But a lot of this is just out of your own interest. And I'm hoping to pique your curiosity so that later courses in the department will be interesting. OK, so in the 1980s, the people designing the protocols of the internet started to get organized. And as you recall, Vint Cerf and Bob Kahn were the leaders of the effort. There was a big community of people contributing to what became the internet. And back in those days, it was still the ARPANET. ARPA had funded this entire project. And it was starting to be successful. In the 1980s, rapid growth started. People said the internet is booming, that it's exploding at 80% or 90% a year. It's been growing at about 80% to 90% a year since about 1983 or 1984 -- this explosive growth has been happening for several decades now. Dave Clark, who was a senior research scientist at MIT and on the faculty in our department, was designated as the internet's chief architect.
And one of the things he did was to get the community organized and formalize the creation of internet standards. You know, you have all these different companies and organizations and universities coming together -- how do you standardize protocols? The approach they came up with was called the Internet Engineering Task Force, or IETF. And they would write these proposals. There had been this trend of writing proposals for people to comment on, called RFCs, or Requests For Comments. And now, there are several thousand requests for comments. Nowadays, requests for comments come after the comments have been made: usually, you go through an internet draft stage, and by the time it's a request for comments, it pretty much means that the comments have already been done. But historically, they were requests for comment, so they called them requests for comment. So if you ever go to a company, lots of things are specified in internet requests for comments, and often, people are asked to implement pieces of various standards. And you typically look at the document, and you try to see if someone's written code around it. And then you adapt it, or write it from scratch to the specification. So it specifies the protocols. And the interesting part of the standards process was very much in keeping with the general internet ethos, which was to try to make everything open. There are many standards bodies in the world. The IEEE has various committees. The International Telecommunications Union does the various telephone standards. There are many, many standards. And often, they're based on voting. And voting means that people horse trade. You know, you have a favorite feature, I have a favorite feature. They're both pretty crappy features, but, you know, I'll vote for your feature if you vote for mine. And people end up horse trading. And in the IETF, in the good old days, this kind of stuff didn't happen. Now, it's changed. But in the good old days, there was this quote: we reject kings, presidents, and voting; we believe in rough consensus and running code. The idea was: you show me what it does. Allow people to experiment with it. And only after we see some prototypes and some actual experiments in the lab will we even consider making it into a standard. And this was the good old days of the internet, in which things ran on rough consensus and running code. So it wasn't, you know what, let's just vote, and we'll get all our lousy features in just because we care enough about them personally. A big break happened in 1982, when the US Department of Defense decided that proprietary technology was not the way to go with networking, and standardized on TCP/IP. And back in those days, the Defense Department was a huge consumer of IT. It still is, but back then, it was just hugely dominant. And they could really influence how the world went. In 1983, MIT created Project Athena, the first really big, large-scale campus computing project. We still have Athena machines. And it showed how to build campus-area networks and campus-area technologies. They built distributed file systems, they built systems like Kerberos. And in fact, they were one of the first groups, one of the first networks, to experience network congestion problems, because everybody started using the network to do their work, rather than have their own computers at their desk or use a terminal to log in to some remote mainframe, which was the way things used to be done.
I mentioned last time that in 1984, we created the domain name system. This is the system that goes between domain names like mit.edu and an IP address. And before that, it was an FTP -- how many people have heard of FTP? A few of you, OK -- File Transfer Protocol. Nobody really uses that these days. But you used to download this file -- every night, computers would download it from one machine located, I believe, in California. And it didn't really scale, you know? And people decided that you needed to get organized and build a domain name system. In 1985, the Defense Department was starting to get out, a little bit, of maintaining the entire communication network. And the National Science Foundation started taking over the running of the backbone connecting all the non-military networks in the United States. And this led to the NSFNET. I'll have more to say about NSFNET in a little bit. There were two big growth problems that were experienced. The first was congestion, and the second had to do with addresses starting to run out, so we had to deal with problems in the routing system. So I want to talk about congestion. In 1986, the internet experienced the first of a series of congestion collapses. A congestion collapse is a situation where you end up with a picture that looks like this. Suppose you were to draw the offered load on a network, or any system for that matter -- which is just how many people are clamoring to get access to the network and send their data on it -- against the throughput, or the utilization, that you get from the network. You typically would see a curve that looks like this. As the offered load grows, you might see a somewhat linear increase in the throughput, with slope 1, because the network isn't congested and no packets are being dropped. You push some data in, you get the data out. The throughput tracks the offered load. And at some point, you reach the capacity of the link, or of the network in general. And you might end up with a flat curve like that. And that's fine. Of course, as the offered load keeps increasing and the throughput saturates at the capacity -- what does that mean? Either the delays are growing to infinity, or packets are being dropped, right? And what you would really like, intuitively, is for the sources to realize that packets are being dropped or not being delivered in time, and maybe slow down. Something has to happen. But congestion collapse is a worse phenomenon. What happens is, beyond a point, your throughput [INAUDIBLE] drops, sometimes precipitously, and it might go down to zero. So people were running network applications -- you know, FTP and remote logins, email and so forth. And what they were finding was that the ratio between what the capacity of the network was and what you were actually getting could be a factor of 100 to 1,000. So this is a real collapse. I mean, you wouldn't be able to get anything through. So people were talking about going from, in those days, tens of kilobits per second down to bits per second. In fact, there was a joke about a path in Berkeley between the University of California and Lawrence Berkeley Labs, which are probably 400 meters from each other. And the network rate was such that you could actually run up the hill with a tape drive and get 100 times higher throughput than what you were getting through the network.
And you know, this was not even running up very fast. So this was a serious problem. This problem was dealt with by multiple people who were working on it. And it led to a set of algorithms where the idea was that -- because all these switches and routers were already deployed -- people were interested in end-to-end solutions to the problem. By end-to-end, I mean solutions you could deploy at the senders and the receivers in the network, without worrying about what the network was doing in between. The actual algorithm that we all run today had its roots in an algorithm developed by Van Jacobson at Lawrence Berkeley Lab. And there's a lot of work that has been done since then by many other people in the community as well. And in parallel, there were people inside Digital Equipment Corporation, over here in Massachusetts, working on similar problems. And they came up with ideas. And both of these were basically the same idea, more or less, except in the details. And they also resembled what we've seen with MAC protocols. The idea is that if the network is working reasonably well, we're going to try to be greedy and start to send faster and faster. At some point, we're going to find that the network doesn't work so well. And we can determine that in one of a few different ways. One way to determine that is that packets start getting lost. And we will assume that packets are lost because queues overflow and congestion happens. We might also, alternatively, notice that as queues in the network grow, packets get delayed more and more as the network starts to get congested. And if we find that the roundtrip times are starting to increase, maybe we determine that congestion has happened. Now, there have been 30 years of literature. The problem has not yet been completely solved. And in fact, we now have an active research project going on in my group about how to deal with these problems in the context of video and video conferencing in cellular and wireless networks, the ones you run on your phone. So it's still an open problem, with lots of interesting research. But the basic idea is that you have to adapt to what the network is doing. And the way you adapt is by watching things in the network. You watch whether packets are being lost. You watch whether delays are growing. You might watch what the receiver is getting -- you know, how fast the receiver is getting data from the sender across the network. And the intuition is the following. Let's suppose that we were to pick the correct window size. This goes back to the sliding window and the window size that you use. We said that the right value of the window size is roughly the bandwidth-delay product, where the bandwidth-delay product is the bottleneck link rate multiplied by the minimum roundtrip time. But the problem is, the bottleneck link rate is not fixed. It is in the lab we studied, but in reality, you have many connections sharing the network, many applications sharing the network. And people come and go, so the rate keeps changing. At one moment, you might be getting 100 kilobits a second. The next moment, you might be getting a megabit per second. And on wireless networks, it's even worse. Quite literally, if I were to start an application on my phone now, connecting to Verizon or AT&T, and if I stepped out of this room, it's quite likely that the actual [INAUDIBLE] experienced by my phone might change by a factor of four or a factor of eight within two seconds.
And that has to do, of course, with the fact that the signal-to-noise ratio -- I mean, we're surrounded by thick walls and metal. And the moment I go out, it's going to be different. So how do you deal with this problem? There's one basic idea that's used, a fundamental and very important idea, and it's called conservation of packets. It's the same idea that you saw when you built your sliding window protocol. It says that when you put a packet into the network, and the packet reaches the other end, and you get an acknowledgment from the other end, it means that one packet has left the pipe. If you view this as you see in that picture up there, packets are like water entering a pipe. They leave the pipe, and then an acknowledgment comes back. If you have managed to somehow, by some magic, pick the appropriate window size, then conservation of packets is a good principle for you to apply. Because what it says is that the only time you're allowed to put one more packet into the network is when you're sure that a packet has left the network. And the way you know that a packet has left the network is that you receive an acknowledgment for that packet. Now of course, this assumes that you know the right window size. Conservation of packets has another really nice advantage. Let's say that you think you have the right window size, but in fact, more traffic comes in, and the bandwidth, the rate at which you can send data, goes down. What's going to happen is that, because other traffic came in, the roundtrip times are going to grow, which means that acknowledgments to your packets are going to come back a little slower, which means that you have a natural slowdown. Because acknowledgments come slower, you naturally slow down and send packets a little slower. And then, of course, if the congestion persists, packets are going to get lost. And when packets get lost, the trick is, you have to reduce your window size. So let's say you're running at a window size of 50 packets in your lab when you implement this. If you find that packets are getting lost, you should drop your window size. And one way to drop the window size, which TCP uses, is to reduce it by one half. And then, every time you find that acknowledgments are coming back, you get a little greedy and you try to increase the window size. And there are many ways to increase the window size. Now, all of this requires a way to start. When you start the connection, when you start an application, what do you do? And I think you pointed out an idea the last time I talked about this problem, which is that in the beginning, you let your window size be one packet, OK? So you start your window size at one packet. So it's like stop and wait. So at the beginning of a connection, if I were to draw time here against the window size, you start at one packet, and you just ship that packet out. It takes an entire roundtrip to get to the other side, and you get an acknowledgment back. At that point, with regular stop and wait, you keep the same window size of 1, and you send one more packet. You're not being greedy enough. So one thing you can do is, when you get one acknowledgment, increase the window size by 1. Every time you get an acknowledgment, you increase the window size by 1. So the rule is, on [INAUDIBLE], you take w and you go to w plus 1. What does that do?
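In code, the rule just stated -- together with the loss reaction worked out in a moment (halve on loss, then grow roughly linearly) -- might look like this sketch. It's an illustration of the idea, not a faithful TCP implementation, and the event trace at the bottom is made up:

```python
def update_window(w, event, slow_start):
    """One step of the window rule: grow on each ACK (by 1 during slow
    start, so the window doubles per RTT; by about 1 per RTT afterwards),
    and halve on a loss."""
    if event == "loss":
        return max(1.0, w / 2), False  # multiplicative decrease; leave slow start
    if slow_start:
        return w + 1, True             # w -> w + 1 on every ACK
    return w + 1 / w, False            # roughly +1 per roundtrip's worth of ACKs

w, ss = 1.0, True
for event in ["ack", "ack", "ack", "loss", "ack", "ack"]:  # hypothetical trace
    w, ss = update_window(w, event, ss)
```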
Well, after one roundtrip -- if I draw this in multiples of the roundtrip time, this is one RTT, this is two RTTs, this is three RTTs, and so forth. So at the beginning, at the zeroth RTT, your window size is 1. You get an acknowledgment, you make your window size 2 -- right, 1 plus 1. So at this point, your window size is 2. What is the window size after two RTTs? It's 4, because -- why is it 4? When you send out two packets -- yes? AUDIENCE: You get two [INAUDIBLE]. PROFESSOR: You get two [INAUDIBLE] back. So for the first one, you went from 2 to 3. Then, you went from 3 to 4. So your window size grows to 4. And then out here, if there are no losses, and the window size hasn't yet reached the bandwidth-delay product of the connection, you're increasing exponentially. So this is at 8, and so forth. You're growing fairly rapidly. And at some point, you're going to grow too fast. Your window size is going to exceed the bandwidth-delay product plus the queue size of the bottleneck, causing a packet to be lost. And at that point, you can do a bunch of things. But one thing you can do is to drop the window size by a factor of 2. So whenever there's a packet loss, you drop by a factor of 2. And you could continue to try to grow exponentially, but that would be stupid, because you would grow exponentially, and if the network conditions haven't changed, you're going to drop again. So at this point, you could do something else. And what TCP does is to start to grow linearly rather than exponentially. Once you experience congestion, you start to grow linearly. And then maybe you experience congestion here, you drop by one half and grow linearly. And you have this sort of sawtooth behavior. And for most web connections that involve downloading a small amount of data, you end up over here. For video and everything else, you go all the way. And this is one strategy. There are many, many others. And like I said, in various kinds of networks, like wireless networks, it turns out this approach is not really that good. And there are open questions around how you should actually design the system. Does everyone understand the basic idea? This is an example of adaptive congestion control. You'll look at this in 6.033 and in 6.829 in more detail, if you take those classes. Any questions? OK. All right, I'm now moving over to the 1990s. Nothing else interesting happened in the '80s. But in the 1990s, more things happened. ARPANET essentially ended, as far as universities and everybody else were concerned. And in fact, it transitioned into -- you know, there were separate military networks. And in 1991, Tim Berners-Lee, who is also now here at MIT, a professor, invented a little thing called the WorldWideWeb, written as one word. WorldWideWeb was the name of the program. And I found this thing, which was very interesting. So he wrote a proposal in 1989, I think, at CERN, where he was working, to his boss. And it was called "Information Management -- A Proposal." And there were all these things, you know, about how what became the web should work, with links and so forth. And his boss at the time wrote "vague but interesting" on top as his feedback on the proposal. Presumably, the interesting part trumped the vague part and allowed him to actually proceed on this project, which became the World Wide Web. Now, obviously, it's been tremendously successful.
Now, by the mid-1990s, with the NSFNET, which was the backbone connecting various US organizations, the government decided to essentially get out of the internet service provider business -- or rather, the government decided not to fund that activity anymore. And many of you have probably heard the joke about Al Gore inventing the internet, or not inventing the internet, and people saying he didn't invent the internet. Well, there's a little bit of truth in this. Al Gore was very instrumental in the government getting out of the internet business, and was involved in committees that set up regulations that led to commercial internet service providers actually forming, and commercial ISPs starting to take off, which was a really, really big change for the internet. Because no longer was it the case that there's this one organization and one backbone network that connects MIT and Harvard and everybody else together, and all the companies together. You had many, many people who could offer internet service, and in fact compete with each other. The idea was that internet service providers, different network operators, have to cooperate with each other, because we are interested in connecting everybody on the internet together. But they also compete with each other. And the reason they compete is that they compete for customers. If I'm a customer of Verizon, I'm not a customer of Comcast, for example. And yet, Verizon and Comcast and other ISPs have to actually cooperate to get packets through. So how do you do this? It turns out that this is a tougher problem than you might think. And people invented this protocol called BGP, or the Border Gateway Protocol, which uses an idea that we've seen: it uses path vector routing to solve this problem. And I'll talk a little bit about that as well. The other thing that happened in the internet was that IP addresses started to get depleted. And we saw why the last time -- everybody wanted those Class B addresses. And now, quite literally, there are no more IP version 4 addresses. And so there was a lot of work done on moving to other versions of IP. But the part that is interesting here is this idea of classless addressing. So the idea was, rather than have organizations that have to have either 2 to the 24 addresses, or 2 to the 16 addresses, or 2 to the 8 addresses, let's allow organizations to have any number of addresses. So I want to tell you a little bit about what an IP address means, because everyone has seen an IP address. I want to explain what it means, and how it really actually works. So if you look at an IP address, 18.31.0.82, which is one of my machines, that dotted decimal notation with human-readable numbers used to make sense in the old days. It used to be that this is a Class A address from MIT -- it's 18 dot whatever, and MIT owned all of that. Now, that also happens to be true, but as far as the network infrastructure and the switches are concerned, this thing is nothing more than a 32-bit number that looks like that, OK? Now, when a packet shows up at the switch with that number as its destination address, really what happens is that the switch, the router, does not have an entry for every one of those destinations in the world. If it did, it would just be too much information. So what it has is information corresponding to a certain prefix. Now, that prefix could be of arbitrary length.
It could have an entry with just the first 8 bits, which would signify that all packets that show up matching that 8-bit prefix would have to be forwarded according to the link that was set for those first 8 bits. Or it could have an entry in the routing table for 16 bits. Or it could have an entry in the routing table for 19 bits, or whatever. And that depends on how the routing system works -- how we advertise the routes, and what it contains. So there's an important lesson here. When a switch advertises a route for a destination on the internet, the destination is not the IP address of an endpoint. What that destination is, is a prefix that represents a range of IP addresses, all of which are forwarded the same way by the switch. So one way of writing this in notation that we can understand more conveniently as human beings is this idea of writing it as 18 slash 8. What that means is: this notation stands for all IP addresses which have the first eight bits in common -- which will be 00010010, the bits that stand for the human-readable number 18. And it contains all 2 to the 24 addresses corresponding to that prefix. So as another example, if I have an entry in my routing table with a slash 17, it stands for 2 to the 15 consecutive IP addresses, all of which share the first 17 bits in common, OK? Does this make sense? So this is what an IP address means. And as far as a switch is concerned, a routing table entry is not a destination IP address. It's something in this form: it contains a prefix, and it contains a [INAUDIBLE]. So in human notation, 18.31 slash 17 would be 2 to the 15 addresses, which share the first 17 bits in common. So what this means is that with this notation, inside the forwarding table, you can have an entry for one IP address, or two, or four, or eight, or 16, all the way up to whatever the maximum is, right? So it allows us to build networks of different sizes, and let each network's identifier be known to the rest of the internet. Now, in principle, you could put every host in the routing table. And that would mean that you have entries, each of which looks like a slash 32. A slash 32 would mean an individual IP address. But that wouldn't scale. And so we want to allow people the flexibility of having very different ranges sitting inside the routing system. And there's one more rule. And this rule is important -- I want to tell you this rule because I'm going to tell you how YouTube was hijacked by an ISP in Pakistan, and it relies on your understanding this particular forwarding rule. And then I'll tell you about how an ISP in China hijacked 15% of the internet for a few hours. That didn't require this rule, but it's the second of two examples I want to tell you about. But let me explain the rule. The forwarding at a switch uses an idea called the longest prefix match. So what that means is that if you have entries in your forwarding table -- let's take those two examples. Let's say I have a forwarding table, a routing table, at a particular switch. And the first entry says 18 slash 8. What this means is it's the 2 to the 24 addresses that share the first eight bits, whatever corresponds to 18.0.0 -- whatever it is, right? And then let's say I have another entry, which is 18.31 slash 17. And what that would be, of course, is 2 to the 15 addresses.
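Here's that setup as a sketch -- prefixes as 32-bit integers, plus the longest-match lookup that gets worked through next. The table and link numbers are just the ones from this example:

```python
def ip_to_int(dotted):
    """Convert dotted decimal, e.g. '18.31.6.5', to a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def matches(addr, prefix, length):
    """True if the top `length` bits of addr equal those of prefix."""
    shift = 32 - length
    return (addr >> shift) == (prefix >> shift)

def longest_prefix_match(addr, table):
    """table holds (prefix, length, link) entries; among the entries
    that match, pick the one with the greatest length."""
    best = max((e for e in table if matches(addr, e[0], e[1])),
               key=lambda e: e[1], default=None)
    return best[2] if best is not None else None

table = [(ip_to_int("18.0.0.0"), 8, 0),    # 18/8      -> link 0
         (ip_to_int("18.31.0.0"), 17, 1)]  # 18.31/17  -> link 1

longest_prefix_match(ip_to_int("18.31.6.5"), table)  # matches both; /17 wins -> 1
```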
And the prefix would be as shown in that picture there -- whatever the first 17 bits are. Now, if the switch received a packet, and that packet's IP address matched multiple entries -- you know, you might have other entries sitting here. Let's say that I have 128.32 slash something. And I might have various other entries sitting in my forwarding table. When a packet arrives, it may, in general, match multiple entries here, right? Because the packet has a certain bit string in its destination address, and that destination address might match one or more of these entries. What an IP router does when it gets such a packet is to find the entry with the longest matching prefix. In other words, the routing table entry that corresponds to the longest prefix match between the destination address of the packet and the entry in the forwarding table is what you use to send the packet on. So if you got a packet that was 18.31.6.5, and it happened to match this entry, it would go on one link -- let's say this was link 1. And if you got some other address that didn't match this, but matched that, it might use a different link -- let's call it link 0. So the output link that you use depends on the longest prefix match. And MIT and many organizations use this extremely well. So I was remarking to some of you at the end of last time that -- you know, it's funny, MIT has multiple internet service providers. And it turns out that if I use this network here and I go to linux.org, which is a place you could download Linux code, it so happens that MIT uses Level 3, which is, I think, the world's biggest, or the US's biggest, internet service provider, to get those packets. Whereas if I just go up to Stata and connect to the Stata wireless and go to linux.org, packets from linux.org come back to me through a different ISP, Cogent. So MIT has decided that it wants to load balance its traffic. It advertises the prefix corresponding to the network in this room -- whatever the Wi-Fi network in this room is -- through one of the ISPs. And it advertises the other prefix, in Stata, through the other ISP. And it does this presumably to load balance traffic. And it also has the idea that if one of those links were to fail, it would switch the traffic through the other link. Organizations do this because they would like to provide good service to the people inside their organization. So the longest prefix match is very crucial to how this stuff really works. So keep that in mind. I'm going to come back to this. On to the next slide. In the rest of the 1990s, a few interesting things happened. One of them was that work started on this new proposal for IP called IPv6, which said, let's not use 32-bit addresses; let's go to bigger, 128-bit addresses and try to solve the address depletion problem. IPv6 has taken a really, really long time to get deployed, for reasons I won't go into here. But it seems to be happening now. Then again, people keep saying that it seems to be happening now -- I said that three years ago when I did the wrap-up in this class. So at some point in the future, that statement will actually be true. Now, you know, everybody knows about Google reinventing how search is done and starting to dominate. Another thing that happened in 1998 was that content distribution networks started getting created.
These content distribution networks are networks that you deploy as overlay networks atop the internet to serve content better, in a more reliable way. Now, in the 2000s, the internet matured. And I have the top five things that I think happened in the networking industry and the networking world in the 2000s. So the dot-com bust happened, and then 9/11 happened. The first thing that happened was the rise of peer-to-peer networks. I'm sure many of you have used these-- Gnutella and Freenet; BitTorrent is the latest one. There was a lot of research done, including here at MIT, on how you build these peer-to-peer networks, the idea being, you don't have a central point of failure. And you can use this to distribute files extremely efficiently. I mean, BitTorrent is still highly dominant. And distributed hash tables like Chord, which was developed here, and other schemes are used now by systems like Skype, and they are used inside data centers like Amazon's. If you go to Amazon and buy stuff, it uses a system called Dynamo, which is a key-value store that basically builds a distributed hash table. So it's had a lot of impact, both in data centers and in systems like Skype. The second thing that happened was that in the early to mid-2000s, security became a huge deal. People started attacking the internet, which came as somewhat of a surprise to people who grew up in the good old days of the internet where, as I mentioned last time, computers had root passwords that were empty, because everybody could be trusted. And then they found that as people started making money on the internet, people started trying to attack the internet as well. Denial of service attacks started, where people would launch attacks on websites. And they would often use it to extort money. This would be like, if you don't pay me, I'm going to continue to pummel your website so that you can't sell flowers, or whatever it is you were doing on the web. People found vulnerabilities in software, and there were many worms that spread, often pretty quickly. SQL Slammer is a particularly interesting one of these. We studied this stuff in 6.829 and in 6.033 in some detail. But this was remarkable because in 30 minutes, it clogged the world's networks. I mean, here's a picture of a screenshot. This was at 5:30 in the morning, I guess UTC, Greenwich Time. And you know, nothing's going on. And then half an hour later, the blue splotches show the networks that were clogged. And almost every computer that was vulnerable to this got infected-- there weren't that many computers, relative to the world's computers, that were vulnerable to this. But all these networks got hammered, and in fact, traffic came to a halt. And this worm showed the power of spreading: if machines trust each other, either implicitly or explicitly, it's very easy to find a vulnerability in one and then spread very, very quickly. So a lot of work was done on how to handle worm attacks. A lot of this has to do with putting things inside of networks, running at high speeds, to identify patterns in the payload-- in the data that's being sent in packets-- to quickly decide that a packet corresponds to a worm, and then throw those packets away before they hit the actual endpoints. Right now, we don't see too many worms spreading. The ones that spread now are slow-spreading worms that are often spread by human contact. It's like people clicking on links they shouldn't click on. That runs the program, finds a vulnerability on their machine, and then it resides on their machine.
And often, these are then used to create these big botnets that are used to launch denial-of-service attacks, or are often used to send spam and do other things like that. So they're still going on, but you don't hear about them in the newspaper. Spam became a huge problem. And it continues to be somewhat of a problem, though these days, the distinction between spam and internet marketing is kind of coming down-- the gap's closing. But by and large, spam now is, while I wouldn't say it's a solved problem, it's generally combated by big organizations that have enough data, enough email coming in that they can identify spam and then filter those away. Route hijacking is the other problem. That remains a huge vulnerability. So I want to tell you about two examples of route hijacking. And I don't think there's easy-- the technical side of this problem, we understand how to solve. But how to deploy good solutions is still unclear. So the first example is from-- this problem has been going on. Every three years, you see a big route hijacking problem. The first one was from 2008-- I think it was 2008, where YouTube was unavailable to people for a few hours everywhere in the world. Now, you could argue whether watching cats dance or whatever is not important. But the fact is that YouTube has a lot of money. Google has a lot of money, and even they were vulnerable to this trouble. The second example was a little different. China Telecom managed to get about 15%, roughly speaking, of the internet traffic to go through them. Now, the interesting thing about that attack is that it wasn't clear it was an attack. I should just say that error, or failure, was that people didn't even notice. Because unlike YouTube, where you go, you couldn't get your data, and then people noticed it, and then they were scrambling to solve the problem, with the Chinese attack, or the Chinese vulnerability, what happened was that China Telecom managed to divert the traffic that wasn't supposed to go through them-- to them. And then they forwarded the traffic on to the rest of their destinations, the actual destinations. So you would find things like, instead of my latency being 100 milliseconds, it might be 500 milliseconds, which is-- you may not even notice. Or sometimes, you notice it, and you go, ah, yeah, that's just the internet being the internet, you know? Sometimes, that happens. But the fact is that they were able to-- an ISP was able to essentially divert a large fraction of the world's traffic. And both these attacks fundamentally stem from the following problem, which is that at the end of the day, despite all of this investment into the internet, and the importance of the internet, and the amount of money in it, internet routing works because of essentially an honor code. It's like, ISPs at some level, internet service providers and organizations trust each other. And there's this transitive trust, which is I might trust you, and you might trust her. And implicitly, the way routing works is that ends up in me trusting her. Because I trust everything you tell me, and you happen to trust everything she tells you, which means that if she were to make a mistake, and you were to believe it, then in essence, everybody else in the world is vulnerable to this problem. So let me explain what happened in the case of YouTube, because it's reflective of how things really work. So here's this little ISP called Pakistan Telecom. Tiny, tiny ISP, you know? 
Hardly anyone uses it outside-- I mean, everyone in Pakistan probably uses it. But in the grand scale of things, it's completely tiny. So they end up connecting to a bunch of people outside in different parts of the world. And one of the people they ended up connecting to was another ISP out in Hong Kong called PCCW. And these guys connect to the rest of the internet. And presumably, Pakistan Telecom connects to other people-- I don't know. Now, here's what happened. Now, where does YouTube fit into all this? You know, YouTube is sitting somewhere over here. And presumably, it's not directly-- I mean, these guys have nothing to do with each other. These guys are connected to some other big ISPs and small ISPs. And eventually, there's a set of internet service providers that somehow constitute the internet. And each of these is independent. Each of these is known as an autonomous system, or AS. And each of these autonomous systems has a number, a 16-bit number. MIT is an autonomous system. And because MIT was very early in the internet, MIT's autonomous system number is 3. But right now, there are many, many ISPs. You know, right now, there are tens of thousands of autonomous systems-- I don't know, 35,000, 40,000, 45,000, something like that. We're number 3! So anyway-- AUDIENCE: Who's number one? PROFESSOR: Who's number one? BBN. Yeah, BBN. I don't know what AS 2 is. BBN, of course, is number one. But actually, that number is owned by whoever acquired BBN, and whoever acquired them in turn. Now, here's an interesting thing that's happening with these autonomous system numbers-- I remember I was a very young graduate student when people were talking about these autonomous system numbers. And I remember these mailing list discussions. I wasn't working on this problem then; I worked on it much later. But people were saying, yeah, 16 bits is plenty for an autonomous system identifier. Because remember, NSFNET was one. There was one internet service provider. And in the early to mid '90s, people were talking about ISPs, and they said, oh, 16 bits is plenty. And I remember there were some people, actually graduate students, who were saying, maybe we should make it 32 bits? Because people remembered how the internet started-- you remember those old things on the internet where people said, 8 bits for a network identifier are plenty. And of course, they got screwed. So anyway, what happened, of course, is the older guys were saying 16 bits is plenty because we don't want too much overhead on packets. And they put in 16 bits. And now, we're at 45,000 or 50,000. And guess what, there's a proposal now on how the heck you extend this to more than 2 to the 16 autonomous systems, because the internet keeps growing. So you know, if ever you're given an opportunity to design the number of bits for something-- and you will always have to do something like this-- just pick something much, much bigger than you imagine, and then double it. Because you'll never get it right. So anyway, each of these autonomous systems has an identifier, OK? And each of these guys, when they make an announcement in the routing system, the way it works is that you give your identifier, and then you tell people all of the IP prefixes that you own, OK? So when I create my distance vector, or in this case a path vector advertisement, if I am autonomous system 3, I have a set of IP addresses. So MIT might have 18 dot whatever slash 8.
MIT has 128 dot something slash-- let's say 19. MIT has a whole slew of these IP addresses that they've acquired. And what they're going to say is, I'm autonomous system 3, and I know how to get to these guys because I own these guys. This is the origin announcement. And then other people-- you know, this is AS 3; MIT sends it to its ISPs, and they send it to their ISPs. And every time an autonomous system receives multiple advertisements along different paths for the same prefix, they pick one. They select among them. They have some rules to select among them. And these rules have to do with the length of the path between autonomous systems. They have to do with how much you're paying. So for example, MIT might be getting a better deal from Cogent than from Level 3. And so it would want more of its traffic to come through Cogent. And therefore, it would decide that it would only advertise certain of its addresses on certain paths. So there's lots of policy. It's very, very complicated. But yet, you know, by some miracle, the whole thing works. So anyway, these things go through from autonomous system to autonomous system. So what's this path vector, right? Remember, I told you about the path vector. The path vector is a sequence of autonomous systems along the path-- so it could be 3, 17, 26, and so forth. So I have an animation of this thing. So I want to show that to you, because it's totally interesting. So there's this website called BGPlay, if I can find it. So there's a way to get MIT's route advertisements over the past month. And you can kind of see how these routes change. So anyway, what happened to YouTube? What happened to YouTube was that the government of Pakistan decided to tell Pakistan Telecom not to allow their users to go to YouTube. So what they did was, rather than simply drop those requests, they wanted Pakistan Telecom, when somebody clicked on a YouTube link, to send the user to a website that Pakistan Telecom would run that basically said, you're not allowed to use YouTube, but would you like to see something else? OK, so the way they did that was, they decided-- YouTube has some addresses. Let me call YouTube's address Y something, something. Let me just call it Y, all right? Y is a set of IP addresses that correspond to YouTube. It's not one; they have many machines. So what these guys did was something they thought was very clever. They have a whole network of users there. They decided they would advertise a route for destination Y inside their network. But rather than use the actual route for Y, they would actually send that traffic to their own machine. So remember, they changed the routing now. They're hijacking the route. They're telling their users that to go to YouTube, you should not use the actual route, but instead go to this other place inside my network, where I can show you this other website-- maybe pretend it's YouTube, or whatever. Now, so far, everything is fine, right? I mean, people do this all the time. When you go to a hotel or any internet kiosk, you take your laptop, and you go to cnn.com. And the next thing you see is, would you like to sign in? How does that work? Well, that works because they essentially hijack the route. They make it look like you're going to CNN, but in fact, you're going somewhere else, right? After all, your computer asked to go to CNN, and the IP address used was presumably CNN's. But then they made a mistake.
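As a toy sketch of the path-vector selection just described (real BGP has many more policy steps; AS 3 is MIT, as in the lecture, but the other AS numbers and the prefix ownership here are purely illustrative):

    MY_AS = 65000  # a hypothetical AS number for "us"

    # For each prefix, the AS paths heard from different neighbors.
    adverts = {
        "18.0.0.0/8": [[7018, 3356, 3], [174, 3], [174, 65000, 3]],
    }

    def select_path(prefix):
        # Ignore paths containing our own AS number (loop prevention),
        # then prefer the shortest AS path as a simple stand-in for policy.
        loop_free = [p for p in adverts[prefix] if MY_AS not in p]
        return min(loop_free, key=len)

    print(select_path("18.0.0.0/8"))  # [174, 3]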
Now, here's where the mistake came in. Some guy at Pakistan Telecom-- probably he was too tired-- made a configuration mistake. He advertised this new route to Y that he had created out to PCCW. Now, PCCW was to some degree at fault. Because PCCW should have known what actual IP addresses Pakistan Telecom owns, and only honored route advertisements for those addresses, right? PCCW needs to know how to send packets to nodes inside of Pakistan Telecom. But clearly, this guy has nothing useful to say about how to send packets to YouTube. But yet, this guy honored this message. So there were two mistakes here. Actually, there were many mistakes. But the two big ones were-- there was a mistake made here, sending something out, and there was a mistake made here, honoring this advertisement. By this time, you have transitive trust, because PCCW passed it on. And what they also did was, they made this a more specific prefix. So YouTube was advertising-- I believe it was a slash 21, which meant many, many bits in the prefix: 21 bits in common. But these guys, to guarantee that all the traffic would come in, advertised a slash 24. So it's more specific. So what happened was PCCW believed that, and they advertised that to their ISPs, and then to their ISPs, and so forth. Now, the reason why all of the internet's YouTube traffic really got sent toward this poor guy in Pakistan was that everybody is doing this longest prefix match. And it's true that there's a legitimate route to YouTube in those routers. But they're ignoring it, because they find a more specific route that somebody had advertised. So if you want to get traffic sent to Google, figure out a way to convince some bigger ISP to take your route to a more specific prefix that you know is owned by Google, and just advertise it out, OK? You're probably going to get a lot of traffic. Now, you may not want all the traffic, but you'll get it. So you see how this stuff spread, right? How do you solve this problem? All right, this is going on, and people are not able to see their cats. And they're scrambling. I mean, literally the whole internet wasn't able to get to YouTube, which admittedly is not the biggest problem in the world. But still, if you're YouTube, this is a big problem. So how do you solve this problem? What do you actually do? I mean, the whole world now has this bad entry in the routing table. Now, there's this long-term solution, which is of course, let's figure out a way to authenticate the advertisements, and use some public key, and this and that. And you'll study this in 6.829 and other courses. But today, we don't have that. And this problem existed, so how did we ever come out of it? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, actually-- you know, it's interesting-- you certainly thought about this in 30 seconds, but it took YouTube a while to figure out what was really going on. Because one of the problems is, you don't want to do something like that without knowing for sure. But eventually, they did exactly this. They figured out the prefixes that were being advertised. And that took a while, because you don't know what's in somebody else's routing table. I have to pick up the phone, or email. And email may not work, because it's coming back to YouTube. But whatever, there's a way to figure this out-- phone, and maybe Gmail or something.
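The mechanics are easy to reproduce with the lookup sketch from earlier. The prefixes below are made-up stand-ins; the real YouTube prefixes aren't needed to see the effect:

    import ipaddress

    legit = ipaddress.ip_network("10.8.0.0/21")  # stand-in for YouTube's legitimate /21
    bogus = ipaddress.ip_network("10.8.3.0/24")  # the hijacker's more specific route

    addr = ipaddress.ip_address("10.8.3.7")
    print(addr in legit, addr in bogus)  # True True
    # Both entries match, and 24 > 21, so longest prefix match sends the
    # packet toward the bogus /24 wherever that advertisement has propagated.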
And then they figured out what it was. And then they inserted slash 25 that were more specific. And then once the problem kind of resolved itself, they got out of it. So let me jump on and move on. Give me two more minutes, and I'll finish up. A similar problem happened with China Telecom. They didn't advertise a more specific route, so they only got about 15% of the internet. And they were nice enough to forward it around to the actual destinations. But this was a problem. I'm going to skip through the decade ahead because you know what, you'll find out soon enough. All right, what I do want to do is to summarize 6.02 in one slide. This course was about how to design digital communication networks, right? I'm sure you all kind of know that. We did this with three layers of abstraction, very simple story-- bits, signals, and packets. And I think as far as these kinds of courses go across the world, it's a pretty unique storyline. There aren't very many courses that we know of which have this vertical study across all the layers. And there are some schools that we know of that are starting to adopt this idea. And we feel like this is a pretty unique way in which to teach this, because it demystifies all of the layers. So we didn't cover anything with-- we didn't tell you 15 ways to solve any given problem. But we told you one good way to solve each of the problems that you will see. So we went from point-to-point links to multihop communication networks. And across these different topics-- and you can see that we studied these different topics. The two big themes that I want to leave you with are reliability and sharing. Because that's really what makes our communication systems work. How do you make it reliable? We don't have a perfectly reliable communication medium at any layer. So how you make things reliable is important. We studied this over and over again. And how do you share? Those are the two big topics.
MIT 6.02 Introduction to EECS II: Digital Communication Systems, Fall 2012
Lecture 3: Errors, channel codes
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. GEORGE VERGHESE: I wanted to give you an overview of what the system is that we're talking about, the communication network. We have a source that's trying to communicate to a receiver. We've talked about converting-- well, we'll talk some more, actually-- about converting the source information to binary digits. And then where we've spent a lot of time is talking about source coding, which is trying to extract the redundancy in the message that you want to send, so that basically every binary digit you then put on the channel carries as much information as possible. So now I'm going to really stop making a distinction between bits and binary digits. I'll just say bits when I might mean binary digits. But once you're into the system here and you've done your source coding, a binary digit carries a bit of information, in general, if you've done a good job of extracting the redundancy. So we're talking about a bit stream here that you're trying to get across to the other end. At the other end, the bitstream is received. There's the decoding step, which is what we've seen with Huffman or LZW, the decoding end. And then you do whatever it is you're going to do in the application. So we've really said all we're going to say about the source coding and decoding, and the rest of what we're going to do is focus on what happens inside here. Now, what happens inside there at some stage involves a physical communication link. So you might be talking bitstreams at either end, but somehow you've got to deal with the fact that most of the channels of interest are physical channels; they work with continuous valued, continuous time quantities. For instance, the voltage on a cable might be used to transmit information, light on a fiber, electromagnetic waves through space, acoustic waves in air or water-- in fact, acoustic waves in air is something you see a lot of when you come to the later labs-- indentations on vinyl or plastic (let's see, that's records or CDs), magnetization. And that actually brings up a point. We don't often think of storage as being communication, but storage is really communication with potentially a very long time delay in the channel. You put something on the storage medium, and then weeks or months or years or centuries later, you're trying to extract that information. So we can still think of all of this as a communication channel, and indeed, coding ideas are essential to making CDs work-- to having a CD resistant to scratches and thumbprints and all the other indignities that they're subject to. So all of these physical modalities are used to carry information. Here's one you may not have thought about: mud pulse telemetry. So when you're drilling for oil, you'd like to get information from the drill bit at the bottom. And normal electronics doesn't work too well, because the temperatures are fiercely hot down there. You need that information to help you steer the drill bit, to get information about what sort of rock you're going through, and all of that. So they actually, seriously, do use pressure pulses in the slurry that's cooling the drill bit to try and convey information back to the top.
So they'll modulate the pressure down at the end of the drill bit, and hope that you can detect it at the top. One phrase over there that stands out is that they talk about digital information. So even in the context of communicating through mud, they're thinking about how to actually have bits that they communicate on this analog channel. So this is very much in the flavor of what we're trying to do. We're trying to communicate digital information-- that is, sequences of numbers or sequences of signs or symbols-- but we're trying to do it over a physical channel that takes continuous valued, continuous time waveforms. So that's really some of what we'll be talking about. So the kind of link that we have starts off with bits. But in the middle, it has to deal with the physical link on which you have signals. We'll refer to these as continuous time waveforms. They're not always continuous time-- you could sample them and get discrete time waveforms, as well. But the quantities you see in the physical medium we'll refer to as signals. You need some way to map the bits to the signals. You've got a bit sequence; you need to convert it to a continuous time waveform in some fashion. And then at the other end, to recover the bitstream. And you might do that by some sampling and processing, and then a translation process back. The little lightning there is meant to suggest that you're subject to all sorts of noise and disturbances when you're on that physical link. So that's something that's critical to the design of the system. You have to design your overall system so that you're robust to perturbations in that middle section. So the particular application is dealing with its specifics and with producing a bitstream that's had the redundancy squeezed out of it. At this point, it doesn't matter to me, for communication across the channel, what that bitstream is or where it came from. I'm just trying to do a good job of delivering it to the other end, where the user can extract it. Now here's the funny thing. We've just done a lot of work to extract redundancy from the messages here. We're going to put redundancy back in. Because the way you guard against disturbances in the channel is by introducing redundancy. You need to give yourself a little room to recognize that something bad has happened to your signal, or something bad has happened to the data you're sending across, and then to recover from it. So we will actually be talking about how to reintroduce redundancy, but this is introducing it now in a bitstream where the binary digits are essentially equally likely. You pulled out all the application-specific knowledge, and used it to do the source coding. Now you've just got a stream of zeros and ones, each one carrying a bit of information, presumably. And now you want to protect it for transport across the channel, and you're going to do that by putting in additional bits. So you're going to introduce bits in a structured way to provide some redundancy. So we'll be talking about that in more detail. So this is the single link picture. When you've got a network, it's a little different. So I haven't shown all the other links here, but you should imagine a network with many possible paths from a source to a destination. Some of these links might come down, and so you'll want to find an alternative path. Some of these links might be busy, and you want to find an alternative path. So there's a whole network of links in here.
In this setting, it turns out that the good way to do this is to break up your bitstream into what are called packets. These are maybe 1,000 bits or 4,000 bits, or whatever your protocol is. The bitstream is broken up into chunks of bits, which are then treated as packets for transport along the network. So the point is that one packet to a given user might travel one particular route, but another packet might travel another route. And then these get reassembled at the destination. So you think in terms of packets when you think in terms of routing on the network. This idea of packet communication, by the way-- there's a name associated with that, which is Kleinrock, again a PhD student at MIT in the same golden years that I keep referring back to. But it's a very broad area. So you've got the packets. Those arrive at the links. There are actually switches here that decide which of the links emanating from the switch you want to send the packet on. So, again, imagine all the links-- I haven't drawn them in. So the packet gets treated as a unit for shipping on the links. But once you've committed to a particular link, it's like transmitting on a single link again. So you've got to go through your packets-to-bits-to-signals-to-bits-to-packets transformation. It's not that there's a particular transformation in going from packets to bits; you're just viewing it differently. You're not treating it as a packet, you're looking in on each bit. You're looking to code each bit onto an analog signal, to get it across the physical medium. So that's really the key to this. What we end up doing is coding, or mapping-- or the word modulating is used, and we'll see more of that. We modulate the desired sequence onto a continuous time waveform. So what you might imagine is, you could have a sequence 01101, and what you're going to do with your mapping is try and generate a continuous time waveform which in some fashion codes that sequence. And it could be very simple. It could be a voltage level held at one value for some interval of time to represent the 0, then held at another level for some time to represent the second symbol, which is a 1. And then you come back down for the 0, and back up for the 1 again. So it could be as simple as this. OK, so you can think of it as some voltage. We use the word voltage a lot, but we just mean an analog signal. We'll use the word voltage-- we're thinking of voltage on a cable-- but it could be any analog signal. So this is really the digital signaling end of it. So you take the bit sequence, now coded onto a continuous time waveform, or modulated onto a continuous time waveform. We'll see richer uses of that word as we progress. The particular scheme I've shown here is what you'd call a bi-level signaling scheme, for the two voltage levels that you use. We refer to that also as bipolar signaling, although sometimes that's restricted to the case where the two voltage levels are opposite in sign. So you can imagine a signaling scheme that uses this for 0, this for 1. So this could be a bipolar scheme. And then this continuous time waveform gets put on the physical channel, presumably gets distorted, gets some noise added to it. So at the other end, you get some approximation to this. And then you've got to reconstruct the bit sequence. You might do that by sampling this and processing the samples, and then taking some measure of the waveform. We'll see more of that later; you'll do that in lab 2 and in later labs, as well.
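As a sketch in code, the bi-level signaling just described is only a few lines; the voltage levels and the number of samples per bit here are arbitrary choices, not the lab's actual values:

    def bits_to_samples(bits, samples_per_bit=8, v0=0.0, v1=1.0):
        # Hold one of two voltage levels for a fixed time slot per bit.
        out = []
        for b in bits:
            out.extend([v1 if b else v0] * samples_per_bit)
        return out

    print(bits_to_samples([0, 1, 1, 0, 1])[:16])  # the first two bit slots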
So in some fashion, you recover from this your estimate of the bit sequence. But you can imagine, once a waveform like this goes through a physical channel with distortion and noise, and then re-sampling and processing and so on, you are likely to get errors back at the receiving end. Ideally, you would get exactly this waveform at the receiving end, you'd have no trouble distinguishing between the samples of the two levels, and you could reconstruct the bit sequence. Now, we are going to, in the middle section of this course, say a lot more about the signals aspect of it. But for us now, it's actually helpful to stick to thinking in terms of bits. So we've got bits in, bits out. Somewhere in here, we've got signals on the physical channel-- signals, noise, the physical channel. And then we've got whatever it is that does the transformation-- so this is some kind of a transformer, let's say, from bits to signals and from signals to bits. But let's look at an abstraction that's end to end here, bits to bits. OK, what's coming in is a bitstream, and what you're receiving is a bitstream. There's an idealization of this channel that's used a lot, and that's referred to as the binary symmetric channel. And what that says is, you've got a 1 coming in; that's most likely going to be received as a 1, but there's some chance it's going to be received as a 0. And let's put probabilities here. I think the notes use epsilon. I seem to have used p, little p, on my slide, so let me stick to little p. So a 1 coming in is transformed in error to a 0 with some small probability-- presumably small-- and with 1 minus p it's intact. And then the same thing on the other side: a 0 comes in, and with some probability it comes out as a 0, but with some probability it actually gets flipped to a 1. Now, we use the word symmetric here to say that we're assuming identical probabilities of going 1 to 0 and 0 to 1. You can easily imagine an unsymmetrical channel where you have different probabilities in the two directions, but we'll stick to the symmetric case. Binary, of course, because we're dealing with binary sequences at the two ends. And what we're imagining is that this is a memoryless channel. In other words, I can look at this transmission by transmission. So a bit comes in, and this is how the output is determined. And then the next bit comes in, and the channel knows nothing about what came before. There's no memory in the system; it knows nothing about what decision was made in the previous case. Now, you can imagine more complicated channel models with memory, but this is a good starting point. So that's the binary symmetric channel. So question now: if we wanted to get a bitstream over reliably, any ideas on how we can counteract the effect of this p, this probability of flipping? Yeah? AUDIENCE: You could have like a range [INAUDIBLE].. GEORGE VERGHESE: Well, at this point I'm back to the ones and zeros. There are no signals. The signals are in here. So what you're thinking of maybe is, how do I reliably map bits to signals? And what you're saying is, you can design your signaling here in a way that reduces the p. The p that I'm thinking of here is the end to end error probability. If I designed the inner part better, I might lower the p. But for a given p, is there something that I could be doing to improve my chances of getting the bit across correctly? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: OK, so the suggestion is that we introduce redundancy by just repeating it. So send the 1, repeat the 1. Repeat it many times if you need to.
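In code, the binary symmetric channel is essentially a one-liner (a sketch: each bit is flipped independently with probability p):

    import random

    def bsc(bits, p, rng=random):
        return [b ^ (rng.random() < p) for b in bits]

    print(bsc([1, 0, 1, 1, 0, 0, 1, 0], p=0.1))  # occasionally some bits come back flipped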
And so what would you suggest that the receiver should do if you do a repetition like this? How should the receiver decide? If I send five ones in a row-- yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: I'm sorry, say that again? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: You talked about requesting a-- I'm not hearing well what you-- AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: OK, so you're using the word request. You're talking about the receiver sending something back to the sender. But we're with this channel, and the sender has to make its own decisions about how to get things across, without a possibility of feedback. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: OK, so I think I understand now. So if you're repeating this, the chance of more than half of them being wrong is very small. I think that was the idea that you had, as well. So repetition is likely to reduce the chances that you go wrong here if you use a majority rule. Majority rule would be a simple rule. Send five repetitions, and if only two are flipped, well, you just decide in favor of the majority. Because it's more likely that none or one or two are flipped than that three or four or five are flipped. So that's the idea. So this is what's called a replication code, and actually, it can work very well. So what you see on the horizontal axis is the replication factor. You can replicate 5 times, 10 times, 15 times, and here is the probability of error. And it actually goes down. And you can do the computation. This is actually a fairly simple computation: you're basically doing coin tosses and seeing what's the probability that more than half of the coins I flip come up one way. And you're counting that to decide how the majority rule works. I'm sorry-- the epsilon here is supposed to be the p. OK, so is this good? Good enough? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: The more you send, the more you're wasting time on that one bit. So this is the point: you can do the replication and reduce your probability of error, but what's the information you're getting across? You're doing all of this to get that one bit across reliably, but you've got a lot of bits backing up, waiting to get across. So the code rate-- the rate at which information gets across, in terms of bits-- is 1 over n if you're doing n replications. So you're dropping the probability of error, but you're also dropping the transmission rate. So this is really unacceptable. It turns out, though, that we can do a lot better. What I'm going to do now is say a little bit about what Shannon had to say about it. I hope you'll allow me to teach you about something that we're not going to test you on, just so you can learn a little bit about this, and then I'll get back to stuff that we will test you on. OK, is that all right? I know you didn't pay for this, but we'll do it anyway. So here's Shannon, defining something that the thermodynamics people and so on didn't really think to do. They may have done it indirectly, but it didn't arise where they were working with entropy, and all of that. Shannon defined something called mutual information, given by this symbol. X and Y are random variables, random quantities. What we know about H of X is that it's the entropy in X, our uncertainty about X. It's the expected information when you're told something about X.
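Staying with the replication code for a moment, here is a quick simulation of majority decoding over the binary symmetric channel (my sketch; the numbers match the coin-toss computation mentioned above):

    import random

    def majority_decode(block):
        return 1 if sum(block) > len(block) // 2 else 0

    rng = random.Random(1)
    p, n, trials = 0.1, 5, 100_000
    errors = 0
    for _ in range(trials):
        bit = rng.randrange(2)
        received = [b ^ (rng.random() < p) for b in [bit] * n]  # n-fold replication through a BSC
        errors += majority_decode(received) != bit
    print(errors / trials)  # close to 0.0086: the chance of 3 or more flips out of 5

But remember the price: the rate is 1/n, so this run pays a factor-of-5 slowdown for that error probability.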
This symbol denotes the uncertainty in X, given information about Y. So it's the conditional entropy here. And so what this is asking is, how much is our uncertainty about X reduced by having information-- having a measurement-- of Y? That's very relevant to a channel where the input is X and the output is Y. We're saying, we see the output of the channel, and we want to infer what happened at the input, what the input sent. The mutual information between these two random variables surely has to be important. So what's the reduction in uncertainty that results from having a measurement of Y? That's a question of interest not just in communications as such, but in all sorts of inference questions. OK, I'm going to have a slide of equations. They might look scary, but actually they're very simple, given what you already know how to do. First, I have to define for you what I mean by conditional entropy. So I'm saying it's the entropy of X conditioned on having a particular measurement of Y. So suppose you know that Y takes the value of little y sub j. You use the same formula that we've used for entropy, except your probabilities are all now probabilities conditioned on that information. So instead of just p of xi, you have p of xi given yj. So it's the same formula. But if you've been given information, then you have to condition on it. So that's the definition. This is the conditional entropy given a specific value for Y. But if all I tell you is that I'm going to be giving you a value for Y, and I haven't told you the value yet, what's the conditional entropy? Then what you want to do is average over all possible Y's that you might get. So you're going to take this conditional entropy for the given Y, and then take the weighted average, weighting by the probabilities. So that's how you compute this quantity, and it's quite straightforward. It's not very different from what you have. And then if you put in what you know about how joint probabilities and conditional probabilities work-- this was the definition of conditional probability that we had in, I think, the first lecture-- you discover that actually the joint entropy of these two random variables can be factored in two particular ways. And that allows you to deduce that the mutual information is symmetric. In other words, the mutual information between X and Y is the same as the mutual information between Y and X. So there's no difference in that. That might be a little surprising, given that we were thinking of X as the input to the channel and Y as the output of the channel, but it turns out that that's the case. So let's actually compute it for the channel that we know. This is the binary symmetric channel. Let's compute the mutual information between the input and output for the binary symmetric channel. So here is the definition of I of X and Y. We've just shown that it doesn't matter which order you take it in, and it turns out the computation is easier if you flip the order. So I'm going to write this as the uncertainty in Y minus the uncertainty in Y given the measurement of X. So I'm going to compute it in that fashion. What's the uncertainty in Y? Actually, I should probably have said here, let's assume X takes 0 and 1-- I might be wrong in saying that this doesn't depend on the distribution of the input. Let's assume 0 and 1 are equally likely at the input. If the 0 and 1 are equally likely at the input, what's the uncertainty in Y? It's one bit-- Y is equally likely to be a 0 or 1.
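Written out compactly, that slide of equations says (with logs base 2 throughout):

    H(X \mid Y = y_j) = \sum_i p(x_i \mid y_j) \log_2 \frac{1}{p(x_i \mid y_j)},
    \qquad
    H(X \mid Y) = \sum_j p(y_j)\, H(X \mid Y = y_j),

    I(X;Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X).

For the binary symmetric channel with equally likely inputs, H(Y) = 1 and H(Y | X) = H(p) = -p log2 p - (1-p) log2 (1-p), so I(X;Y) = 1 - H(p).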
I had actually written that assumption in, and then I took it out, but I think I'm wrong in saying that. So here we have the 1 for the uncertainty in Y. What about the uncertainty in Y given X? So I give you a particular value for X. Let's say X is equal to 1. So I give you a particular value for X. What's the uncertainty in Y? Well, Y is 0 with probability little p, and it's 1 with probability 1 minus p. And that's really the binary entropy function that we had drawn last time. So you can actually work out all these pieces and discover-- let's see, here is the binary entropy function. Just to remind you, this is for a coin toss. If something can be 1 with probability p, and 0 with probability 1 minus p, or the other way around, the entropy associated with that is H of p. And we have 1 minus H of p for the mutual information between the input and output of the binary symmetric channel. So here's what 1 minus H of p looks like. All right, so what's a low-noise channel? A low-noise channel is one with a very small value of p. And what this says is that the mutual information between the input and output is on the order of one bit. So if you're told what Y is, you've got a very good idea what X is. That makes sense, because it's a low-noise channel. But if you get to a channel that has around a 0.5 probability of flipping the bit, then the mutual information is very small-- the output tells you very little of what you'd like to know about the input. Here's another notion, which is entirely Shannon, which is the idea of channel capacity. And what he's saying now is, in order to characterize the channel, rather than the input or the output, let's ask what the maximum mutual information is over all possible distributions that you might have for X. So I'm not going to specify X being 0 and 1 with equal probability. If you go through that computation for the binary symmetric channel, you find that it's exactly the curve we had before. So the channel capacity for the binary symmetric channel is exactly this curve. So that gives us an idea of the maximum information that you could be transmitting across the channel. Now, that's just the definition, but it turns out to have some very practical implications for how fast and how accurately you can transmit data on a channel. And here's Shannon's result. What he says is that you can theoretically transmit information at an average rate below the channel capacity with arbitrarily low error. So that's the shocking thing: as long as you stay below channel capacity, you can transmit with arbitrarily low error. If you try and get to rates above that, you're going to run into trouble. You can't get that probability of error to vanish. Now, how do you do this? Well, the prescription is: take long strings of that input message stream-- take k bits of that input message stream-- code them into code words of n bits, with n larger than k, and send those through the channel. If n is very large and k is very large, the rate at which you're transmitting is k over n, and you can transmit at a rate k over n that lives below C with as low an error as you want. The way to make the error smaller is to take longer and longer blocks. This was kind of an existence proof. He didn't actually show you specific instructions, necessarily, in that proof for how to introduce the redundancy to make this happen. But it was actually a result that said, you can't be satisfied with the replication code.
You can do a lot better, and how much better you can do is indicated by that channel capacity. OK, let's come back to testable stuff. We're going to actually design ways to introduce redundancy, motivated by this Shannon result, for very practical settings. And a key notion we're going to use is that of the Hamming distance between two strings. So the Hamming distance-- you've seen this in some recitations-- basically, the Hamming distance between two strings is just the number of positions in which the two strings differ from each other. So the Hamming distance here between these two strings, let's say string 1 and string 2, is what? 1, 2, 3. These strings differ in three positions. Another way to think of it is, how many bits do I have to flip in one to get the other one? So how many hops does it take, in some sense, to get from one to the other? All right, so here's how the notion of adding redundancy comes in. Suppose we have a 0 or a 1 to send across. This is our bit for that transmission. What we're going to do is actually code it not as 0 and 1, but as 00 and 11. If we've got just a single bit corrupted, we go to something that's not a code word. We go from 11 to 01 or to 10, or we go from 00 to 01 or 10. We receive something that's not a code word. That allows us to detect that an error was made. So what we've essentially done is, in Hamming distance, we've introduced some distance between the code words that we're using to transmit on the channel. It takes two hops to get from one code word to the other. There's a Hamming distance of two. And, therefore, if you only have a single bit error when you transmit on the channel, you're not going to get all the way to another code word. You won't be at any code word you recognize, and you'll know that you made an error. So this is an example of how you start to introduce redundancy in the stream so that you can detect and perhaps even correct errors. Here's another example. Now, these, by the way, are still looking like replication codes, because to send a 0, we're repeating the 0 three times, and to send a 1, we're repeating the 1 three times. But we're going to do more elaborate versions of this that are not replication codes. But imagine now I've drawn the corners of a cube. Each circle here across an edge is Hamming distance 1 from the adjacent one. Right? So to go from the sequence that I'm using to represent the 0 to the sequence that I use to represent the 1, I've got to do 1, 2, 3 hops-- there's a Hamming distance of 3 there. So if I had only a single bit error, is it possible for me to correct? If I know that my errors are limited to single bit errors, can I correct when I receive an incorrect string? If I start with 000 and I have a single bit error, I can only go to these adjacent vertices. And those are not going to be confused with vertices that are one step adjacent to the 111. So I'll know that I've made an error, and I'll know to correct it back to 000, provided I know that only one error has been made. Now, I might just assume that only one error has been made. And so once in a while, I'll think I'm correcting, but I'll be getting something wrong. And that has to be contended with and calculated, but this is the basic idea. More generally, what we're thinking about is taking k message bits-- so this corresponds to 2 to the k possible messages-- and embedding them in n-bit code words, where n is greater than k.
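In code, the Hamming distance computation is a one-line helper (a sketch):

    def hamming(a, b):
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    print(hamming("00", "11"))    # 2: the two-code-word scheme above, detect-only
    print(hamming("000", "111"))  # 3: the cube example, which can correct one error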
So if you imagine a generalization of this picture, what we've got is a hypercube with 2 to the n possible nodes, corresponding to all the possible combinations here. We're going to assign 2 to the k of those nodes to code words, and the rest will be left free, to just leave some space between the code words. The rate of the code, then, is how many message bits you're getting across on average per transmission. And so the rate is going to be k over n. Because for every n bits that you send, you're getting across k bits of information-- that's k message bits in every n transmitted bits. So here is the general statement in terms of Hamming distance and what you can do with the code. So first of all, I've got a set of code words. What's really important is the minimum Hamming distance between my code words. Because that's the point of greatest vulnerability. That's where I'm most likely to get confused. So if you give me a set of code words, I can look at the Hamming distance between any two of them. I've got to search all these pairs and find out which is the minimum Hamming distance. That's the point of maximum vulnerability, and this is what we're calling d, little d. So the picture I like to think about is, I've got some valid code word here. I've got some other valid code word here. And if I told you that over all the code words in my set, the minimum Hamming distance I find is 3, what that means is I've got to do three hops to get to the other code word. This hop means I've changed 1 bit in the valid code word to get to some other sequence, not a code word. And then 1 bit to get to this one, and 1 more bit to get to this one, and this is now another valid code word. OK, so if I say the minimum Hamming distance is 3, that means that you will find a pair of words with these three hops to get you from one to the other. How many errors can you detect with a code like this? So you send a valid code word across the channel, bits get flipped-- up to how many errors could you detect without being fooled? If I tell you this is less than or equal to e errors, how large can e be to guarantee that you won't get a transmission of one valid code word that ends up as another valid code word? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Sorry? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: I'm talking not about correction but about detection of an error. How many errors could you detect in this kind of a picture? So I can afford to have one error, two errors-- and the third error will bring me to another valid code word, and I won't know that I've made an error. So if you want to look at how many errors you can detect, it's what's given by the upper one there, d minus 1. So if the minimum Hamming distance is d, you can detect up to d minus 1 errors. What about correction? How many errors could you correct here? Can you correct any errors here? Up to one, right? And then you can look at this more generally, and you see the general formula is d minus 1 over 2, the floor of that. So that tells you how many errors you can correct. So the minimum Hamming distance is actually a key thing. Now, how do you build codes which have desired characteristics? For instance, suppose you want to send 4-bit messages, so k equals 4, and you want to have single error correction. So that means you want this kind of a picture here. You need Hamming distance 3, at least. How will you produce a code? All right, so this is not an obvious thing at all. Here's an example of one that satisfies the construction.
You need to actually expand to sending 7 bits, so n is equal to 7. How many messages do we have? We have 16 messages, so that corresponds to a k of 4; 2 to the 4 is 16. So we've got 16 different messages. We could have counted those messages with 4 bits, but we're going to add in redundancy to get 7 bits per message, resulting in these code words. This set of code words has the property that the minimum Hamming distance is 3. So you can correct up to a single error here. But in principle, it takes a search, and it's not necessarily easy to do. But we'll see how to do that efficiently. All right. Let me show you how-- and this is something that you're probably quite familiar with-- by making n equal to k plus 1, you can already do something. So suppose I choose n equals k plus 1, which means I'm taking the message bits and adding 1 bit. We're going to add what's called a parity bit. And you can do this in different ways, but what I'm going to do with this is guarantee that the minimum distance between valid code words is at least 2. So let's see how to do that. And there'll be some computation-- in not just the parity calculations, but other stuff we'll do-- that builds on computations with zeros and ones. The computations are what you've probably seen elsewhere with Boolean algebra. This is what's called computation in Galois field 2, so GF2 is another symbol you'll see. 0 plus 0 is 0, 1 plus 0 or 0 plus 1 is 1, and 1 plus 1 is 0. So this is like an exclusive-or addition, and multiplication works in the usual fashion. So all our computations are with zeros and ones, and you want to keep that in mind as we go through this. So here's what we do for a simple way to add redundancy. We'll take the message and add a single bit to make the total number of ones in the resulting code word even. So this is what's called even parity. You can make the opposite choice of odd parity. So if you now receive a code word with an odd number of ones, you know you've made a mistake. How do I know that the minimum Hamming distance is 2 in this case? I have to be able to produce for you some other valid code word that I can reach with two hops. Any ideas there? So I give you a code word, which is the original message word with a parity bit. Can I make two bit flips in that and get a new code word-- I mean, a valid code word? Because then I'd have Hamming distance 2, right? Can you think of what to do? Yeah? AUDIENCE: If you flip a 1 to 0 or 0 to 1, then n equals [INAUDIBLE]. GEORGE VERGHESE: But that's not yet given me-- so the suggestion was, flip one of the bits. AUDIENCE: So then for the second one, you'll either still have an odd number or [? even. ?] GEORGE VERGHESE: So an easy way to see this is to flip one of the message bits and also flip the parity bit. Or flip two of the message bits: then you've changed two 0's to 1's, or two 1's to 0's, and the parity still comes out even, so you're at another valid code word. So you can have a two-bit error that ends up not being detected, but all single-bit errors will get detected. That, again, correlates with the fact that the minimum Hamming distance is 2. The number of errors you can detect is d minus 1-- that's 1. And the number of errors you can correct-- well, it's d minus 1 divided by 2, the floor of that, so you can't correct any errors. All right, now we're going to be building more elaborate codes than parity or replication. These are going to be called linear block codes. And there are different ways to set this up.
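Before moving on, here is the even-parity scheme just described, as a sketch; it shows a single flip being detected and the two-flip pattern slipping through:

    def add_even_parity(msg):
        return msg + [sum(msg) % 2]  # append one bit so the number of ones is even

    def parity_ok(word):
        return sum(word) % 2 == 0

    cw = add_even_parity([1, 0, 1, 1])
    print(cw, parity_ok(cw))  # [1, 0, 1, 1, 1] True
    cw[0] ^= 1                # one bit flips: detected
    print(parity_ok(cw))      # False
    cw[4] ^= 1                # the parity bit flips too: fooled
    print(parity_ok(cw))      # True -- an undetected two-bit error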
Here's one way to think of it, if you're comfortable with matrix multiplication. And we're going to be using this, actually-- so if you aren't already comfortable with matrix multiplication, maybe you should get comfortable soon. What we have is a vector here, which has our message in it. So we stick our message in there. This is just a bunch of zeros and ones. I've got a matrix here, which I call a generator matrix. And this is a matrix of zeros and ones, as well. How do I generate my code words? I just put in my message, carry out this multiplication, and see what I get for a code word. So for instance, if my message is 1 and all 0's, what's my code word? Well, I take this and I multiply it all the way through the matrix. Because of the special structure here, all of these are zeros, so the rows below the first one don't matter at all. What I get for a code word is 0101101. In other words, I get the first row of G. If I had a 1 and a 1 with 00, I'm going to get the sum of the first two rows of G. And all of these computations are done with modulo-2 arithmetic, so in GF2. So this is one way to think of what a linear code is. Another way to think of it is, every bit in your code word is a linear combination of the bits in the message. It's just that you have more bits here, so you're taking multiple linear combinations of the bits in the message to get the bits in the code word. So this is a highly structured kind of code. And the key fact about this is that the sum of any two code words is also a code word. And we'll leave you to look at that in recitation. So it's true that any code word generated this way plus any other code word generated this way will give you a code word generated this way. Can you deduce from that that the code word of all 0's has to be in any linear code? Why is it true that every linear code has to have the all-zero code word? In this instance-- well, you can see it has to have the all-zero code word. Why is that? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Yeah, so clearly from this picture, if your input is all zeros, you've got to have the zero code word. But can you tell me, from this statement that the sum of any two code words has to be a code word, can you deduce that the all-zero code word has to be in there? AUDIENCE: So that's [INAUDIBLE] subtraction. GEORGE VERGHESE: Subtraction or addition. So suppose I take a code word and add it to itself. What do I get? In GF2, if I take a code word and add it to itself, I get the all-zero code word. So the all-zeros word always has to be in there. If you don't have the all-zeros code word, you know you don't have a linear code. Now, it turns out that for a linear code, it's easy to determine the minimum Hamming distance between words, which we saw was crucial to establishing what the error correction or detection properties were. If you've got a linear code, to determine the minimum distance between words, you only have to look for the minimum distance between the zero code word and all the other code words. So it turns out that in a linear code, the minimum distance that you find between any two code words is the same as the minimum distance you'll find between the zero code word and any other code word. Now, what's the distance between the zero code word and some other code word? It's just the number of ones in that other code word.
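Here is generator-matrix encoding as a sketch. This G is the standard systematic (7,4) Hamming generator, assumed for illustration; it need not be the exact matrix on the lecture slide. The min_distance helper checks the minimum-weight claim directly:

    import numpy as np
    from itertools import product

    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def encode(msg):
        return (np.array(msg) @ G) % 2  # all arithmetic is mod 2, i.e. in GF(2)

    print(encode([1, 0, 0, 0]))  # the first row of G
    print(encode([1, 1, 0, 0]))  # the GF(2) sum of the first two rows

    def min_distance():
        # For a linear code, minimum distance = minimum weight over the
        # non-zero code words, as argued above.
        weights = [int(encode(m).sum()) for m in product([0, 1], repeat=4)]
        return min(w for w in weights if w > 0)

    print(min_distance())  # 3, so this code can correct any single error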
So all you have to do for a linear code to determine the minimum Hamming distance is look at all the non-zero code words and see which one has the minimum number of ones. So let's see, it's not obvious that these are necessarily linear codes, but they turn out to be. In this particular case, here's a code with n equals 3. We've got only two messages being sent, so what's the value of k? Two messages means k equals 1, because 2 to the k is the number of messages. So n is 3, k is 1, and the minimum Hamming distance is 3, which is the smallest weight you find among the non-zero words. Here's another instance. Is it a linear code? Yeah. The minimum weight you see among the non-zero code words is 2, so the Hamming distance is 2. So the way we denote this code is: the value of n is 4-- not because there are four different code words, sorry, but because there are 4 bits in the code words. 2 is the value of k, because 2 to the 2, which is 4, is the number of messages that you have. And the minimum Hamming distance is 2. So with each of those, you can actually compute the associated rate. And just to wind up, these are not linear codes. How do we know they're not linear codes? Well, sum two of them, and you'll discover that in some instances, you don't get another code word in the set. This is the code set that I put up earlier. It turns out to be a linear code. So if I claim that it's a linear code, can you tell me what the minimum Hamming distance is between code words here? 3. You find a code word here of weight 3-- here also another one-- and you don't find any code words of weight less than 3. All right, so this is enough to get you going. We'll quit with this, you'll continue in recitation, and we'll pick it up again in lecture next time. Thank you.
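A quick sketch of that shortcut, under the assumption that the code really is linear: the minimum Hamming distance over all pairs equals the minimum weight over the non-zero code words. The brute-force pairwise search and the min-weight check agree, here on the (3, 1, 3) repetition code from the lecture.

```python
import itertools
import numpy as np

def all_codewords(G):
    k = G.shape[0]
    return [np.array(m) @ G % 2 for m in itertools.product([0, 1], repeat=k)]

def min_dist_pairs(words):          # exhaustive: compare every pair
    return min(int(np.sum(a != b)) for a, b in itertools.combinations(words, 2))

def min_weight(words):              # linear-code shortcut: lightest non-zero word
    return min(int(c.sum()) for c in words if c.any())

G_rep = np.array([[1, 1, 1]])       # (3,1) repetition code: {000, 111}
w = all_codewords(G_rep)
assert min_dist_pairs(w) == min_weight(w) == 3
```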
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
4_Linear_block_codes_parity_relations.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. GEORGE VERGHESE: So we're going to continue talking about coding. We're going to focus on linear block codes, which I introduced briefly last time. But just to step back a bit and remind you, we're talking about this piece of the overall channel. So we've got the source that's done this source coding, compressing all the bits coming out of here, so that one bit, one binary digit, carries a bit of information. And now, we're actually reintroducing redundancy in a controlled way, so that we can protect the message across the physical channel with its noise sources and distortions and so on. Actually, I should be saying binary digits at this point. Because again, at this point, the binary digit doesn't carry a bit of information. We're introducing redundancy, but I'll leave you now to make the distinctions. OK, at this point here outside the source coding, one binary digit is one bit of information. But now, when you start to introduce the redundancy, you've got binary digits that are not necessarily one bit of information per binary digit. In fact, it won't be. And then across the channel at the other end, you do the decoding to try and recover from any errors that the channel might have introduced. And what we said last time is that the key to this is really to introduce some space around the code words that carry your messages. So you might want to expand your set of messages into a longer code word, such that a small number of errors on each code word will not flip you over into another code word. So you'll be able to recognize the neighborhood of the valid code words. That's the basic idea. So you're trying to put some space around things. So if you've got k bits in your original message, you've got 2 to the k messages, right? So k message bits, 2 to the k messages-- and what we're planning to do now is, with this input stream that's coming into our channel coder, we're going to take the stream and break it up into blocks. So each block will have k message bits. And then out come a series of blocks, but each block now has a larger number of bits. So we've got n bits. OK. So we've done some padding here. n is greater than k. And so you have the possibility of 2 to the n possible messages in those n bits, but you're not going to use all of them. You're only going to use 2 to the k, and so you'll leave some space around each valid code word, all right? So the code words are 2 to the k code words selected from 2 to the n possibilities. OK. You get the idea there? Yeah. And we introduced this notion of Hamming distance then to measure the size of the neighborhood around the code word. So we have the notion of a Hamming distance, which we'll abbreviate to HD. And this is the Hamming distance between two bit streams or between two, let's say, blocks. And what this is is the number of positions in which they differ. OK. So it's a very simple notion of distance between bit strings or binary digit strings. All right. And what we then said is you get certain desirable error detection and error correction properties based on the minimum distance. For the minimum Hamming distance of a code, we use the simple little d. That's the minimum distance you find between any two code words in the code.
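Since the Hamming distance drives everything that follows, here is the two-line version in Python (my own helper, implementing the definition exactly as stated: the number of positions in which two equal-length blocks differ).

```python
def hamming_distance(a, b):
    # Number of positions in which two equal-length bit blocks differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance([0, 1, 0, 1, 1, 0, 1],
                       [0, 1, 1, 1, 0, 0, 1]))    # -> 2
```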
So based on that, we said that-- we wrote it slightly differently last time. I'm writing it to give you yet another way to think about it. What we basically said is, for instance, if you had a valid code word here, valid code word here-- this is just a schematic. One hop, meaning one bit change, brings you to some other word which is not a code word. Then another bit change brings you to some other word, not a code word. And a further one brings you to a new code word. That's Hamming distance three. So if the minimum distance you find among all the spacings between code words is a distance of three, measured as the Hamming distance, then you can detect up to two errors. So if you went from this code word in two hops, you'd still not be at a new code word. So you know you've made a mistake. If you wanted to correct errors, you could correct up to one error in this case, assuming that you have no more than one error. If you ended up here, you'd know it had to have come from this code word. If you ended up here, you know it had to have come from the code word on the right. Now, you have to be a little careful if you're trying to do correction and detection at the same time. So for instance, if you end up over here and if it's possible to get up to two errors, then you might think you've had one error that brought you here. And you might correct to this point. But if the way you actually got there was two hops from over here, then you've done an incorrect correction. OK. So you've got to be a little bit careful. And that's what the third case tries to deal with. It allows you to deal with combinations of correcting up to a certain number of errors and then detecting a certain number. So basically, what it's saying is that this entire distance here has to end up with a little gap. You've got to be able to make a number of hops equal to the number of errors you want to detect and still leave enough space to get to a code word unambiguously. So in this particular case, for instance, you couldn't unambiguously detect up to two errors if you were doing error correction for one. But if I had this picture, OK, this is now Hamming distance 4: 1, 2, 3, 4. I could correct single bit errors, and I could detect up to 2, because 2 errors would bring me up to this point. That's clearly an error that I wouldn't try to correct, but I'd recognize it as an error. OK, so that's what the third case tries to account for. You won't believe how much time I spent trying to distill that statement down into a bullet. And I don't know if I got it right here, but that's the idea. OK. So our focus today is on linear block codes. We're not talking about codes in general, but linear codes. This would be a general statement for a block code. I haven't said anything about linearity up to this point. All I said was take blocks of k bits, expand them to blocks of n bits, and pick subsets in this fashion. That's just a general statement about coding. There's nothing linear about this as stated. So if you want to impose linearity, then you've got to introduce this additional piece, which is to say that every bit in your code word is going to be a linear combination of bits from your message. And the easiest way to understand that is the matrix representation I had last time. Do I have it on this slide? Not yet, OK. But I probably do on the next, so let me pull that up. OK. So basically, you're going to generate your code words, c. So that's c1 up to cn, which is going to be d1 up to dk times some matrix, which is a k by n matrix.
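The detection and correction counts from that picture reduce to two small formulas; this sketch just tabulates them for the distances discussed (d = 3: detect 2 or correct 1; d = 4: correct 1 and still unambiguously detect 2).

```python
def detectable(d):
    return d - 1            # up to d-1 hops never reach another code word

def correctable(d):
    return (d - 1) // 2     # the nearest code word is unambiguous up to here

for d in (2, 3, 4):
    print(f"d={d}: detect up to {detectable(d)}, correct up to {correctable(d)}")
```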
And we'll call it G. OK. So that's referred to as the generator matrix for the code. We're talking about binary code, so all of these are 0s or 1s. And all of these entries are 0 or 1. And all computations are done in GF(2). They're done modulo 2. Well, let me just say that all operations are in GF(2). OK. So this is modulo 2 operations or Boolean operations. So if I'm working with the symbols 0 and 1, what is 0 minus 1? How am I to interpret this? I haven't quite defined it. But how would you interpret that? You can think of it as the thing I need on the right-hand side such that, when I add 1 to both sides, I get 0. Is that one way to think of it? So 0 minus 1 is 1. Or another way to say that is minus 1 is the same as plus 1 in this setting. OK. So you just have to get used to working with only 0 and 1 in GF(2), but we talked about that last time. All right, so back to the statement. This is for a linear block code. You're going to see matrix multiplication throughout your careers here. So if you haven't already seen it, this is a good opportunity to learn about matrix multiplications. So let's see, could somebody tell me what procedure I go through to, let's say, get the i-th position here in terms of what I do on the right-hand side? Or let me ask you this. Is the entire matrix G relevant when I'm just interested in the i-th position here? Or is there some part of G that I should focus on? Yeah. AUDIENCE: The i-th column? GEORGE VERGHESE: It's just the i-th column, right? So think of matrix multiplication in the simple case: if you want the i-th position here, it's the dot product of this row with the i-th column. Let me give you a particular example. If I've got 1, 0, 1, 1 here and I have 1, 1, 0, 0 here-- this is the i-th column-- then what I'm going to find in the i-th position is 1 times 1, which is 1, plus 0 times 1, which is 0, plus 1 times 0, plus 1 times 0. So I just get a 1, right? So I'm just going to get a 1 or a 0 depending on the specific entries here. So look what we've done. We've found a particular position in the code word as a linear combination of the bits in the message. We took the combination of these bits with the weights that are displayed out here. So that's really what this statement was on the previous slide. We said that a code is linear. Well, each of the k message bits is encoded as a linear transformation-- sorry, each of the code bits is encoded as a linear transformation of the message bits. I didn't quite say it that way there. Let me say it here on this slide. So each code word bit is a specified linear combination of the message bits. This is what I'm referring to. If you wanted to find any particular bit, you're going to take a linear combination of these bits with the weights that are here, OK? There's another way to think of this also, which is the other blue line out there, which is to think of the matrix G as being made up of rows. OK. So can someone describe to me what we're doing with these rows to get this into our code vector? Yeah. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: OK. So basically, the way matrix multiplication works, if you think about it, is what we're going to be doing is 1 times the first row plus 0 times the second row plus 1 times the third row plus 1 times the fourth row.
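The worked dot product from the board, spelled out in plain Python (no libraries) so the GF(2) arithmetic is visible: multiply position by position, sum, and reduce mod 2.

```python
def gf2_dot(row, col):
    # Inner product in GF(2): multiply termwise, add, reduce mod 2.
    return sum(r * c for r, c in zip(row, col)) % 2

d = [1, 0, 1, 1]               # the message row from the example
g_col_i = [1, 1, 0, 0]         # the i-th column of G
print(gf2_dot(d, g_col_i))     # 1*1 + 0*1 + 1*0 + 1*0 = 1
```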
So another way to think of matrix multiplication: it's going to generate this vector as a linear combination of the rows of this matrix, OK? What linear combination? Well, the linear combination that's described in the message part of this. So this is the message part. So that's the other statement out here. So each code word is a linear combination of the rows of this generator matrix. So these are concrete ways to think about what a linear code is. But we also saw that there's another way to think of it, which is in terms of this property, that the sum of any two code words is also a code word. So if you have a set of code words with the property that the sum of any two is another code word in that set, then what you have is a linear code. And we argue that the all 0s code word must be in there. Because when you add a code word to itself, you get the all 0s code word, right? OK. So that's the class of codes we're going to be focusing on. But I'm going to make a further restriction, which is that I'm going to look at code words that are of a very special type. So I'm going to limit myself to code words that have this structure. So here's my data bits, d1 up to dk. I'm going to pick my code word so that, let's say, the first k bits are precisely the data bits. And then I'll pick the additional ones to be some set of what we'll call parity bits. So this is p1 up to pn minus k. All right, so I'm not going to have an arbitrary transformation here. I'm going to restrict myself to transformations that have the property that, when I multiply by the data vector here, what I get is the data vector reproduced in the initial part and then a bunch of new bits representing the redundant relationships that I'm computing. We refer to them as parity bits. It's not so important that the data bits be in the first k positions, so I'm willing to tolerate variations of this where the data bits are somewhere else in this code vector. But the key thing about what's called a systematic code is that, when I look at the code word in designated positions, I find the data bits, and the other positions are the so-called parity bits that are obtained as linear combinations of the data bits, OK? So if you are familiar with matrix operations, then what I'll need is a matrix that has 1s along the diagonal, 0s everywhere else, and then something here. Let me just call this matrix of left over 0s and 1s matrix A. OK. So I've got here a k times k matrix with 1s down the diagonal and 0s everywhere else. And then I've got a matrix which has 0s and 1s in it. This is going to be, what is it, k by n minus k, right? So do you buy this? So think about how matrix multiplication works. If I want the first entry on the left, I take this row's inner product or dot product with the first column here. That just selects out d1. And indeed, I get the d1 there. And that happens for the first k positions. Beyond that, I'm taking linear combinations with whatever sits here. OK. It turns out that this is not really a special case. It turns out that any linear code can be transformed to this form by invertible operations. So basically, if you use invertible operations on the rows here and some rearrangement of the columns, you can bring any code to this form. And then the resulting code will have effectively the same error correction properties that the code out here did. OK.
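A sketch of the systematic form G = [I | A]: with the identity block up front, the first k code bits reproduce the data, and the A block generates the parity bits. The particular A here is an arbitrary illustration, not the one on the slide.

```python
import numpy as np

k = 4
A = np.array([[1, 1, 0],        # an arbitrary 0/1 parity-generating block
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(k, dtype=int), A])   # G = [ I_k | A ], k by n

d = np.array([1, 0, 1, 1])
c = d @ G % 2
print(c[:k])    # [1 0 1 1]: the data bits pass through unchanged
print(c[k:])    # the n-k parity bits, linear combinations dictated by A
```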
So we're just going to limit ourselves to thinking of linear codes which are in the so-called systematic form. In other words, some part of the code word is the message bits and the other part is parity bits, OK? So let's look at a specific code that is of this form, a very simple code, referred to as a rectangular code. And you see a particular example on this slide here. So what do we do? We arrange our data bits into a matrix which could be rectangular or square depending on what you have. So we're going to have r rows and c columns, with the data bits in here, so D1, D2, all the way up to D sub, let's say, r times c, right? In this particular case, r and c are both 2. And then you're going to generate the parity bits in a simple fashion. What you're going to do is choose a parity bit associated with the first row that basically makes sure that in the first row, including the parity bit, you've got an even number of 1s. OK. So this is a choice for even parity here. Similarly, P2 will be chosen such that the second row has even parity. In other words, you've got an even number of 1s there. And for the columns, similarly, P3 will be chosen such that D1 and D3 and P3 together have even parity. In other words, the number of 1s in that column is even. Again-- the same thing for this column. So what you're trying to do is sort of have sentries on the rows and columns that will signal when something has happened to a bit at the intersection. That's the general idea here, all right? So you'll take this out and arrange it then. So you've got your parity bits. What's the sequence I used-- P1, P2, and then more parity bits here, OK, so row and column parity bits. So here's a way to think about what these are explicitly. So what you'll do is: P1 is D1 plus D2. When I say plus, of course, I mean in GF(2). So that's modulo 2 addition, right? Does that simple formula ensure that I've got an even number of 1s in that first row? If D1 is 0 and D2 is 1, then I'll make this equal to 1, which is what I need. If D1 and D2 are both 1, I'll make this 0, which is what I need, and so on. So this simple expression captures it, and similarly for the r-th row. So for each row, you make the parity bit equal to the sum of the data bits in that row, similarly for the columns, OK? Another thing, by the way: can you tell me what P1 plus D1 plus D2 is going to be in this case? If I pick P1 in this fashion, what does it guarantee for P1 plus D1 plus D2? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Sorry? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: 0, right? Because really I'm taking P1 and adding P1 again. And when I take something and add it to itself in GF(2), I get 0. So this is equal to 0. So these are just two different ways of thinking about the parity bit here. So this is how you compute the parity bit, whereas this might be referred to as a parity relation. It's a linear constraint relating the parity bit and the data bits. In fact, we might try constructing this matrix as we go, right? So we've got D1, D2, D3, D4, P1, P2, P3, P4. Whoops. And here is D1, D2, D3, D4. I'm going to have my generator matrix here. It's got the identity matrix in this first part. We use the symbol capital I for the identity matrix. So when you see identity matrix, you know it's a square matrix with 1s down the diagonal. OK. So what goes in the next column over here for this particular example? The next column over is going to be P1. P1 is D1 plus D2. So what I need is 1, 1, 0, 0, right-- and similarly for the other parity bits.
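Here is the 2-by-2 rectangular code in a few lines of numpy, computing the row and column parity bits exactly as described (everything mod 2); the parity-relation form, P1 plus D1 plus D2 equal to 0 in GF(2), is checked at the end.

```python
import numpy as np

D = np.array([[1, 0],              # 2x2 data bits D1..D4
              [1, 1]])

row_parity = D.sum(axis=1) % 2     # P1, P2: even parity per row
col_parity = D.sum(axis=0) % 2     # P3, P4: even parity per column
print(row_parity, col_parity)      # [1 0] [0 1]

# Parity relation: parity bit plus its row's data bits sums to 0 in GF(2).
assert (row_parity[0] + D[0].sum()) % 2 == 0
```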
So once you're told the rule here, it's easy to generate the matrix that goes with it. OK. So let's get some practice figuring out what's what here. All we're going to be aiming to do in this lecture is construct codes that correct up to a single error. So we're focusing on Single-Error Correction codes, or what are referred to as SEC codes, OK, Single-Error Correction. So assume that only one error has happened, or zero errors. You don't get more than one, let's say. If you receive this-- I've just rearranged the code word into the pattern that allows you to look at this very easily. So here's D1, D2, D3, D4, and so on. Any errors here in this? You can see that, if I look along the first row, I've got even parity, even parity, even parity, even parity. So everything looks fine. And I'll declare that there are no errors. On the other hand, if I receive that, OK, so here I have a parity check failure, right? And so I know that something is wrong in this column. I look along the rows. I see a parity check failure in that row. So I pinpoint the error as being at the intersection of those two. And I know that's the bit that I have to flip, OK? And another case: here there is a failure on a row, but nothing on the corresponding columns. So what that tells us is it's actually the parity bit that's failed, right? Everything else looks fine, but the parity bit has failed. If there's a single error to correct, it would be to convert this 1 to a 0. And then all parity relations are satisfied. OK. So you can get errors in the parity bits as easily as you get them in the data bits, because the channel doesn't know the difference. The channel is just seeing a sequence of bits. All right, so this is how you work backwards to figure out what's going on. Another way to say it-- and we'll see this later-- is you get what should be D1 and D2, but you're not sure yet whether they're in error or not. So let me call them D1 prime and D2 prime for now. So you compute your estimate of the first parity relationship and compare it with what's sitting in-- well, let me say, are these equal? So what you're doing is you're computing your estimate of the parity relationship based on what's sitting in the code word in these positions and seeing whether it's equal to what you think it should equal. OK. And if it's equal, then you say that parity relation is satisfied. And otherwise, you try and make a change. Now, we'll see how to do this more systematically next lecture, actually, when we'll go further with the matrix story. But I'm just trying to get you a little oriented here. OK. So you probably believe by now that this code can correct single errors, right? The rectangular code can correct single errors. Basically, an error in a message bit is pinpointed by parity errors on the row and column. An error in a parity bit is pinpointed by a parity failure in just that row or column. And if you get something other than that, then you say you have an uncorrectable error, right? You're not set up to do things with other errors there. But now, how do we know the minimum Hamming distance is 3? We know that, if the minimum Hamming distance is 3, we can correct a single error. But it's possible that the minimum Hamming distance is greater than 3 for this case, which might mean we have more possibilities. So how can we establish what the minimum Hamming distance is? Any ideas? Yeah.
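A sketch of that single-error decoder for the rectangular code, under the stated assumption of at most one error: recompute parities on the received block; a failing row together with a failing column pinpoints the data bit to flip, while a lone failure means the parity bit itself was hit.

```python
import numpy as np

def rect_correct(D, row_p, col_p):
    bad_rows = np.flatnonzero((D.sum(axis=1) + row_p) % 2)
    bad_cols = np.flatnonzero((D.sum(axis=0) + col_p) % 2)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        D = D.copy()
        D[bad_rows[0], bad_cols[0]] ^= 1   # flip the intersection bit
    # a single failing row (or column) alone means a corrupted parity bit
    return D

D = np.array([[1, 0], [1, 1]])
row_p, col_p = np.array([1, 0]), np.array([0, 1])   # parities of the clean block
rx = D.copy(); rx[0, 1] ^= 1                        # channel flips one data bit
assert np.array_equal(rect_correct(rx, row_p, col_p), D)
```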
AUDIENCE: [INAUDIBLE] the case in which the Hamming distance is [INAUDIBLE] change one of the data bits and the two parity bits that correspond [INAUDIBLE].. GEORGE VERGHESE: OK. So am I going to search all pairs-- so the suggestion was change something until you find the Hamming distance of 3. And presumably, you won't find anything smaller, right? OK. Because we know we can correct single errors. But am I going to search through all pairs of code words to do this? Or can I do something better? Yeah. AUDIENCE: [INAUDIBLE] if you have [INAUDIBLE].. GEORGE VERGHESE: So you're giving me a particular computation here, but I don't know that you've answered my question, which was, am I going to have to search through all pairs of code words to establish a minimum distance of 3? Or is there something simpler than that that I can do? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Can you speak up? My hearing is not great, sorry. AUDIENCE: [INAUDIBLE] high dimensional [INAUDIBLE].. It's whatever minimum Hamming distance is [INAUDIBLE].. GEORGE VERGHESE: Oh, so you've got a general formula, OK. Can you invoke linearity in some way? Because I haven't heard you use the linearity of the code in anything you've said. Were you going to offer a suggestion? Yeah. AUDIENCE: [INAUDIBLE] what happens when you [? put one ?] [INAUDIBLE] when you [INAUDIBLE] parity [INAUDIBLE] you pick one bit you have to put two other bits. [INAUDIBLE] GEORGE VERGHESE: OK. So I think I get what your argument is. You're saying start from an arbitrary message bit. And then if you make any flip, you'll get at least 3. That may have been the earlier argument, which I missed, right? Is there a way to invoke the linearity of the code in making these arguments? You're on the right track, but I just want to see if linearity can be invoked. Yeah. AUDIENCE: I think you can use the all 0 code word and [INAUDIBLE]. GEORGE VERGHESE: Right. So what we've established is that, for a linear code, the minimum Hamming distance is the minimum weight among all non-zero code words, right? So all you have to do is start with the 0 code word, everything 0, and then flip a bit in the message and then see if you get Hamming distance 3 or greater. OK. So we can start with the 0 code word. I guess that's the point I was going to make here. OK. There is another expression that popped up there before completing that argument. Do you agree with what it said about the code rate? It says the code rate is rc over rc plus r plus c. Do you agree with that? Yeah. Because we have the number of message bits being rc. And then the total number of bits is rc plus r plus c. So the rate is, indeed, what's given by that expression. OK. And then we'll go on to make this argument about the three cases here. So you can actually go case by case. And the argument here is actually closer to what was being described out there. It doesn't start with the 0 message. But it says, if you've got two messages that differ in 1 bit in the message area, then the code words are also going to differ in the associated row and column parity bits. And, therefore, the overall code word has moved by 3. And then you go through each of the cases, and you argue that you've moved by at least 3. So this argument is actually closer to what was being suggested earlier. OK. So you can go through each of the cases and discover that the nearest code word you can get to is Hamming distance 3 away, all right? Why is it that we're flipping a bit in the message section to decide what's a new case?
The way we count our code words is by ranging through all the possible 2 to the k messages, right? So we've got to flip bits in the data section to get to another code word. So we're saying we have a code word corresponding to some set of data bits. We'll flip a bit there, and then look to see what happens, OK? OK. So here's a little modification to the code, which actually puts in an overall parity bit, P here. So what this is is the sum of all the entries in the other positions. OK. And if you go through the argument there, what you'll discover is what you've done is go from-- do I still have it on the board? I might have it on the board here. No. What you've done is go from the rectangular code that had this structure, Hamming distance 3, to now one that has Hamming distance 4. OK. So adding that overall parity bit has increased the minimum Hamming distance of the code from 3 to 4. Does that improve error correction capabilities? You still can only correct up to one error. But the difference now is that, if you get two errors, you can actually detect it accurately as a 2-bit error. OK. All right, so I'll leave you to go through that analysis. This we've pretty much done already. This has just filled out the rest of the matrix. You see that we filled out this column on the board. That was this case. But you can actually fill them all out once you have the description of the parity check bits. OK. So these other columns you can fill out similarly. And this is for the case of-- let's see, what case is it referring to here? n equals 3. Let's see-- sorry, n equals 9. k equals 4. d equals 4. So what rectangular picture am I talking about here? This is a rectangular code. What rectangular code has these parameters? So I must be talking about 2 by 2 for the data bits, 2 rows, 2 columns, and then 1 overall, right? The overall parity bit-- what gives me the clue is that I've got a minimum distance of 4. If it's a rectangular code with minimum distance 4, then I know I must have an overall parity bit. k is 4 because I just have 4 data bits. And overall, I've got to send 9 bits in each block. OK. So for that particular case, this is what the matrix looks like. So the only difference is there is an overall parity bit here, P5, which is the sum of all the data bits. Actually, all the data bits and the parity bits, but this is what it works out to be. OK. And we've pretty much talked through the decoding here. Let me put it all up there. So you calculate all the parity bits. If you see no parity errors, you return all the data bits. If you detect row or column parity errors, then you make the change at the position they point to. And otherwise, you flag an uncorrectable error. So the correction is straightforward. If you look on the slides later, you'll see a little quiz that you can try for yourselves. Or you might try it in recitation. But let me pass on that. OK. So the question arises, is a rectangular code using this redundant information in an efficient way? Or could we do better? So let's see, we've got a code word that's got k message bits. And then it's got n minus k parity bits. OK. So here's the data bits. Here's the parity bits. We want to use the parity bits as effectively as possible. How many different conditions can we signal if we have n minus k parity bits that can each only take value 0 or 1? Just 2 to the n minus k conditions, right? So if we're looking at the code word and trying to deduce something from the parity bits, how many different things can we deduce?
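Both minimum-distance claims can be checked by brute force, which also illustrates the min-weight shortcut from earlier. This sketch enumerates all 16 code words of the 2-by-2 rectangular code: d = 3 without the overall parity bit, d = 4 with it (the (9, 4, 4) code).

```python
import itertools
import numpy as np

def rect_codeword(msg, overall=False):
    D = np.array(msg).reshape(2, 2)
    cw = np.concatenate([D.ravel(), D.sum(axis=1) % 2, D.sum(axis=0) % 2])
    if overall:
        cw = np.append(cw, cw.sum() % 2)   # parity over all the other entries
    return cw

for overall, expected in ((False, 3), (True, 4)):
    words = [rect_codeword(m, overall)
             for m in itertools.product([0, 1], repeat=4)]
    d = min(int(c.sum()) for c in words if c.any())   # min weight = min distance
    assert d == expected
```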
Well, n minus k bits can signal 2 to the n minus k different things, right? What do they have to signal? What do the parity bits have to tell us? They have to tell us either that an error didn't occur, or that an error occurred in the first position or second position or third, all the way up to the n-th. So the number of things we want to learn from the parity bits is n plus 1. So you would hope that this is true, that the number of things you can signal with the parity bits is at least equal to the number of things you want to convey, in the case of single-error correction. All right, we're only trying to correct a single error here. So we want the number of possibilities that the parity bits can indicate to include the case of no errors-- that's the 1 over there-- plus the case of an error in the first position or second position or third position and so on. OK. If you plug in the typical parameters for the rectangular code, you'll see that you're actually exceeding this wildly. Let's see. For that particular case, 9, 4, 4, what do we have? We have 9 plus 1 on this side. And we have 2 to the what-- 9 minus 4. So what's that-- 32 on the right-hand side and 10 on the left. So you've got a big gap. If you were going to allow me 1, 2, 3, 4, 5 parity bits, I could do a lot more than tell you what you're asking me to tell you in this particular code. So I'm not using the parity bits as efficiently as I could. And that motivates the search for better choices. So this is a fundamental inequality here, something we'll keep referring back to. So make sure you understand where that comes from. And that leads us to what are called Hamming codes. So Hamming codes are codes that actually use the parity bits efficiently, in that they match this bound with equality. OK. So can you think of the smallest k and n pair that's going to satisfy this with equality, just playing with some small numbers? Maybe I shouldn't play this game since we're running late on the lecture. Here's a suggestion: (n, k, d). The Hamming code is going to be a single-error correcting code with minimum Hamming distance 3. So the 3 will always be there. This is n, and this is k. And you'll see that this is satisfied with equality. But there are other choices. This is the smallest choice, the smallest non-trivial choice anyway. But you can go to more general possibilities. So this code is called a perfect code because it matches that inequality with equality, but actually that doesn't necessarily mean it's the best code. It turns out to be a good code provided you're picking these parameters appropriately for your application. But this is a perfect code in a very technical sense, meaning it's a code that attains this inequality with equality. OK, that's all that it means there. OK, so what's the idea in the Hamming code? Let me put it all down there, and then we'll talk through it. So this little Venn diagram conveys for you how the parity bits are picked. And they end up actually being picked in a very efficient way to provide the coverage you want. So this is the case that was mentioned before. This is the, was it, 7, 4, 3? Is that what we had? Yeah, 7, 4, 3, right? So let's give ourselves some space here. So with 3 parity bits, we're actually going to indicate whether there was no error, or whether the error occurred in the first position, second position, and so on, all the way up to the seventh position. So we're going to use 3 bits, 3 parity bits, to indicate eight possibilities, which is what we know you should be able to do.
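That counting argument is a one-line inequality; here it is evaluated for the two codes on the table, showing the rectangular code's waste and the Hamming code's equality.

```python
# Single-error correction needs the n-k parity bits to signal n+1 conditions:
# "no error" plus "error in position i" for each of the n positions.
def sec_bound(n, k):
    return n + 1, 2 ** (n - k)

print(sec_bound(9, 4))    # (10, 32): rectangular code, big gap
print(sec_bound(7, 4))    # (8, 8): Hamming code, met with equality -- "perfect"
```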
And here's the arrangement. This picture conveys it. So basically, P1 is D1 plus D2 plus D4-- P1 covers D1, D2, and D4, and similarly for these other ones. OK. So this picture tells you what data bits are included in the coverage of each parity bit. So that's the way to think of what this picture is. So these are apportioned carefully. So for instance, let's see, if you discover that P1 and P3 indicate an error, that means some data bit in the coverage of P1 and in the coverage of P3 has an error. But P2 didn't have an error. OK. So what does that tell you? We're only considering up to single errors. If P1 and P3 have an error, but P2 doesn't have an error, well, P1 and P3 share D2 and D4. But P2 didn't have an error, so D4 must be fine. So D2 must be the one that's in error, OK? And so you get full coverage by that kind of reasoning. One way to think of this, and this is actually how Hamming set it up, was he actually arranged the parity bits and the data bits a little differently down that code word. He had parity bit 1 in the first position, parity bit 2 in the second position, parity bit 3 in the fourth position. If you had a long code word, the next one would be in the eighth position. So it's 2 to the 0, 2 to the 1, 2 to the 2, 2 to the 3, and so on. So those are the positions in which he puts the parity bits. Everywhere else are the data bits. And then the data bits that feed into parity P1, or parity relation P1, are the data bits in positions whose binary representation ends with a 1. OK. So if the positions end with a 1, you stick them in the coverage of this parity relationship, so D1, D2, not D3, but D4. For P2, similarly, the parity relation P2 includes the data bits whose positions have a 1 in the second bit, so D1, not D2, yes D3, and yes D4. OK. So the nice thing about that is that, when you get a particular pattern of errors, it actually leads you exactly to the right position in the code word. So I don't want to actually spend a lot of time here. I want you to look at that separately. But just to go over the process, here's what happens. We know that parity bit P1 was D1 plus D2 plus D4. So we know that this parity relationship was satisfied at the transmitting end. By the time you receive all of this, all of it might have been corrupted. That's why I put little primes next to these. Not all of them, but one of them may have been. We're limiting ourselves to single-error correction, OK? So the D1 prime may not be D1 because of an error. So what you do is you compute these so-called syndrome bits. If there were no errors, then E1 should be 0, E2 should be 0, E3 should be 0, because that's how it was on the other end. If there's a single error in one of the bits covered by the appropriate relationship, you're going to get the associated syndrome bit going to 1. So you compute these syndromes and then line them up as a binary number. And it turns out that, depending on the pattern of the syndromes, it'll tell you exactly the position in the code word in which there's an error. So it's kind of cute and powerful. There are codes that can correct up to t errors, and there's a natural relationship that extends from this. I wanted to make one final point, which is that these error correcting codes occur all over the place, not just in the setting of binary. And one thing you might try, if you're carrying a textbook with you, is to look at the ISBN number. The ISBN number is a 10 digit number, x1 up to x10.
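Here is a sketch of the whole (7, 4) Hamming story in Hamming's own ordering, with parity bits in positions 1, 2, and 4: the three recomputed syndrome bits, read as a binary number, give the 1-indexed position of a single error (0 meaning no error). The loop verifies that every single-bit error is corrected.

```python
def hamming_encode(d1, d2, d3, d4):
    p1 = (d1 + d2 + d4) % 2
    p2 = (d1 + d3 + d4) % 2
    p3 = (d2 + d3 + d4) % 2
    return [p1, p2, d1, p3, d2, d3, d4]    # positions 1..7

def hamming_decode(r):
    p1, p2, d1, p3, d2, d3, d4 = r
    e1 = (p1 + d1 + d2 + d4) % 2           # syndrome: received parity bit
    e2 = (p2 + d1 + d3 + d4) % 2           # checked against recomputed parity
    e3 = (p3 + d2 + d3 + d4) % 2
    pos = e1 + 2 * e2 + 4 * e3             # syndrome as a binary number
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1                    # flip the implicated position
    return [r[2], r[4], r[5], r[6]]        # return the data bits

cw = hamming_encode(1, 0, 1, 1)
for i in range(7):                          # every single error gets corrected
    rx = cw.copy(); rx[i] ^= 1
    assert hamming_decode(rx) == [1, 0, 1, 1]
```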
And it turns out that 1 times x1 plus 2 times x2 plus, all the way up to, 10 times x10 is going to be 0 modulo 11. So try that out on the ISBN number of any book you're carrying. What you'll see is that this is a parity relationship that guards against errors in any single digit or a transposition of two digits, which turn out to be the two most natural errors. OK. Look for parity relations in other places.
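The ISBN-10 check is easy to try in code. This sketch implements the stated relation, the sum of i times x_i being 0 mod 11 (with the standard convention that a final 'X' stands for the value 10); the sample number below is a known valid ISBN-10.

```python
def isbn10_ok(isbn):
    digits = [10 if ch in 'Xx' else int(ch)
              for ch in isbn if ch not in '- ']
    return sum(i * x for i, x in enumerate(digits, start=1)) % 11 == 0

print(isbn10_ok('0-306-40615-2'))   # True: a valid ISBN-10
print(isbn10_ok('0-306-40615-3'))   # False: a single-digit error is caught
```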
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
8_Noise.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Today we're going to dig a little deeper into the system that we've been talking about. So we've already talked about source coding and source decoding. And then we talked about channel coding-- we've just finished talking about block codes and convolutional codes with Viterbi decoding. So that's the coding here and the decoding. And now we're going to drill down to the next level to start to talk about the actual signals going across physical channels. So this is going to actually extend over the entire next module of the course. I want to describe this in the context of something you're going to be doing in labs 4 through 6. You're actually going to experiment with a specific channel. What you'll have is bits coming in, code words coming in, being translated to signals. In this case, discrete time signals, and I'll give you an example shortly of that. The signals will then be adapted through the modulator for transmission on an analog channel. So there's a modulation process and there's a digital-to-analog conversion process. You'll be generating a waveform that you apply to the speaker in your laptop. That's going to be the transmitter. The channel is going to be just the air around you, with all the disturbances of room acoustics and noise and all of that, all the distortions from that. And then you'll pick up the signal on the microphone on your laptop, or an external microphone if you want. Conversion from analog to digital, demodulation and filtering to undo the modulation-- and we'll be talking about this in more detail-- to get another sequence of samples. After which you have a decision rule that then looks at the samples and says, did I get a 0 or a 1? And you spit out the bits of your code word. OK, so this is what we're going to be looking at. So here is what you might be sending at the transmitting end. You've got the bits coming in. You're going to convert them to signals, and we're going to think of discrete time signals. So this is a signal x of n-- n takes integer values, so that's my discrete time clock. And a typical waveform might look like this. I might decide just very simply to have levels held at 0.5 for, let's say, 16 samples per bit, and then held at 0 for 16 samples, to denote a 1 and a 0 respectively. So here's a 1, a 00, 111, 0101. So we're converting to samples. This is the sample number, and then the next step will be to actually-- in your computer, you'll send this to your digital-to-analog converter, which will, with a particular clock rate, convert this to real time. What you might imagine is that the actual waveform that goes out on the channel is somehow related to the continuous waveform that you get by just connecting the tops of these discrete time values. The actual mechanism for transmission through the air we'll talk about next time. So right now we're just going to focus on the level of the discrete time signals. And at the other end, after you've done your transmission through the channel and you've demodulated and filtered, you get a sequence which ideally is a replication of the sequence that you sent in. It can have a scale factor-- scale factors don't worry us. In this case, you see that the amplitude is divided by 2.
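A sketch of that bits-to-samples mapping, with the levels and the 16-samples-per-bit hold from the slide (both are parameters here, since your lab settings may differ).

```python
import numpy as np

def bits_to_samples(bits, samples_per_bit=16, v1=0.5, v0=0.0):
    # Hold each bit's voltage level for samples_per_bit samples.
    levels = np.where(np.array(bits) == 1, v1, v0)
    return np.repeat(levels, samples_per_bit)

x = bits_to_samples([1, 0, 0, 1, 1, 1, 0, 1, 0, 1])
print(x.size)    # 160 samples for 10 bits
```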
But basically you see the trace of what was sent at the transmitting end. There's some distortion that's introduced by the dynamics of the channel, and we'll be talking about that in more detail later. So we aren't getting quite the straight edges. But after a brief transient period, the waveform seems to settle to the constant value that we had at the input. So this is our received set of samples. Now in this figure, I've assumed that there's no noise, only the distortion. This lecture is going to be about the noise. I wanted you to get the sense of what distortion does, and then we'll park that issue and come back to it next time, and actually for several lectures after that. But this lecture we're going to focus on noise. Before we look at noise, this is what a noise-free received signal might look like with just the distortion in it. OK. And now you've got to convert to a bit sequence. So a simple way to do that is pick an appropriate point in each bit slot. Each slot is 16 samples long. Pick an appropriate point, taking account of these transient effects and so on, and then sample. And if the sample value is above a threshold, you'll declare a 1. If the sample value is below the threshold, you'll declare a 0. And so you reconstruct the sequence that went in. So we have the sample and threshold feature here. So we're just taking one of the samples in the bit period, comparing with the threshold, and making a declaration. That's a very simple-minded decision rule. OK. So we'll come back to distortion. Today I want to talk about noise, and I want to then suppress distortion. So let's forget about distortion. Let's assume that the received signal yn is exactly what was sent except for some additive noise. So what we're imagining is you send a nice clean set of samples here into your digital-to-analog converter, and what comes out ideally would be the same set of samples, but actually what happens is that each of these samples is perturbed by noise. And so you get something that might look like this. OK, so this is y of n, and what we had before was x of n. OK. So nominally you'd get the same thing. The only thing that's different now is you've got an additive noise. We're going to assume that this noise sample wn is independent from one sample to the next. So when the channel and the processing and so on decides to put a noise sample on this, it doesn't pay attention to what noise samples were put on the samples on either side. So every noise sample is picked independently. And it's picked from the same distribution. That's what the identically distributed part of this means. So the characteristics of the noise are the same right through our signal. That's what we're assuming. That's the identically distributed part. It's a statement about the stationarity of the noise characteristics. All of this can be generalized, but this is where we're going to have our story and that's all we're going to consider. OK, a key metric, then, is: what's the signal-to-noise ratio? This is something that you see all over the place, the SNR. Usually what people mean is signal power, and power is usually the square of a signal-- that's what you're thinking of. If you think of voltages, for instance, the square of the voltage gives you power in the resistor. So you think of the signal as being x, its power as being x squared. Except you've got to decide, do you want to talk about the peak power or the time average power or some other measurement of the signal power? So that's the signal part of this ratio.
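Putting the noise model and the receiver together: a minimal simulation of y[n] = x[n] + w[n] with IID zero-mean Gaussian noise, followed by the sample-and-threshold rule, one mid-slot sample per 16-sample bit period. The noise level and threshold here are illustrative choices, not values from the lab.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.repeat([0.5, 0.0, 0.5, 0.5, 0.0], 16)    # bits 1 0 1 1 0, held 16 samples
y = x + rng.normal(0.0, 0.1, size=x.size)        # additive IID Gaussian noise

offset, threshold = 8, 0.25                      # mid-slot sample, mid-level threshold
bits = (y[offset::16] > threshold).astype(int)
print(bits)                                      # [1 0 1 1 0], with high probability
```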
And then the noise part of the ratio is the noise variance. So we have a noise component wn; it's the expected squared amplitude of that. Oh, by the way, I didn't-- this is on my slide, but I didn't say it yet. I'm going to assume the noise is zero mean. Which means that these excursions from what you expect average out to 0. If there was a systematic bias to the noise, if I knew that there was a non-zero mean, I could just factor that into my processing and think of my expected received signal as taking account of that non-zero mean. So there's no loss of generality, really. I'm assuming a zero mean noise. OK. Now when you come to actually computing numbers, this is another example-- showing another kind of waveform, this is a sum of sinusoids, I assume, to which you're adding some noise. And in this particular simulation, by tweaking the value of A there-- that's a gain factor on the signal-- you can actually vary the signal-to-noise ratio and get a feel for what different signal-to-noise ratios look like. So at high signal-to-noise ratio, the noise isn't perturbing what went down very much. But when you get to the lower signal-to-noise ratios, the noise is actually distorting the signal that you started with quite substantially. Now the SNR here is described in dB, decibels. And so let me just say a word about that. That's a unit you'll see all the time-- you've seen it all the time. So we're really trying to measure a signal-to-noise ratio. So this is what you would normally think of. But in many applications, a logarithmic scale is really what you want to deal with. For instance, if you're measuring the response of the ear to noise intensities, it turns out there's a logarithmic feature built into our sensors. So you usually want to be measuring power and power ratios in terms of a log scale. That should have had a capital B there. So here's the definition of what a ratio is in dB. It's 10 times the log to the base 10 of the ratio. One caution here. I told you that when we talk about powers, that's the square of the amplitude. So if you're going to compare amplitudes, a ratio of amplitudes on a log scale, then actually what you end up doing is taking 20 times the log to the base 10 of the ratio of amplitudes. So you'll sometimes see this definition as 20 log to the base 10 of a ratio of amplitudes, and what people are doing, then, is comparing amplitude ratios, not power ratios. You have a question? AUDIENCE: Why do we define power as amplitude squared? PROFESSOR: So the question was, why do we define power as amplitude squared? If you think of an electrical circuit with some signal applied across it, a voltage, the instantaneous power dissipated in the resistor is given by that. So people start to think of the square of a quantity as power. In the continuous time domain that's very natural in signals that come from physics, and that terminology is just being carried over to this kind of a discrete time setting. So when people say power, they mean square of the signal. It could've been called something else. OK. So you can actually span huge ratios in power on this log scale with much better behaved numbers. 0 dB, then, is a ratio of 1. 3 dB-- this is good to carry around in your head. 3 dB, it's actually 3.01-something, but 3 dB is a factor of 2 on the power ratio, or square root of 2 on an amplitude ratio. So let's actually go back to what I showed you on the previous slide. So here, for instance, is an SNR of 0.4 dB.
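The dB bookkeeping in a few lines, including the 3 dB factor-of-2 rule of thumb and the 20-log-10 convention for amplitude ratios.

```python
import numpy as np

def power_ratio_db(p_sig, p_noise):
    return 10 * np.log10(p_sig / p_noise)     # dB for a power ratio

print(power_ratio_db(2, 1))        # ~3.01 dB: a factor of 2 in power
print(20 * np.log10(np.sqrt(2)))   # same ~3.01 dB as an amplitude ratio
print(power_ratio_db(100, 1))      # 20 dB: factor of 100 in power, 10 in amplitude
```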
If I figure that that's close to 0 dB, then I should expect that the noise power and signal power are about equal, and the noise amplitude and signal amplitude are about equal. So what I expect to see is perturbations of the original signal that are comparable with the signal values themselves, and that's sort of what we see here. The shape of the signal is pretty distorted at this point, because the typical amplitude of the noise sample is comparable with the signal sample that I'm interested in. OK, so when you get to 0 dB, you're starting to get quite disturbed-looking waveforms. When you have 20 dB in power, that's actually 100-- a ratio of 100-- sorry, what is that? Yeah, that's a ratio of 100, isn't it, in power? So it's a ratio of 10 on amplitudes, and that's what you're seeing. The noise excursions are about a 10th of what the signal amplitudes are. All right. It takes a little getting used to, but it's fairly standard. OK. So now we want to figure out how to describe noise and work with it. So let's look at a typical run of a noise sequence. What I've done is just extracted the noise piece of a typical received signal. So it's got excursions above and below 0. Remember, I said it was a zero mean random variable that we're thinking of, zero mean noise. And you can describe how these values are distributed by just doing a simple histogram. And if you only take a few values, like 100 samples, you get a pretty messy-looking histogram; it doesn't seem to have much structure. But as you take more and more samples, you'll typically find that the histogram actually settles out to a nice shape, to some settled kind of shape. Normalizing this to have unit area under it gives you what's called the probability density function for the noise. So this is a notion that's critical in working with noise. So here's a step of idealization. We're stepping back from thinking about histograms to just a mathematical way of talking about how random quantities distribute themselves. By the way, we've been using W for the noise and X for the signal, but if you look in probability books, the first symbol people reach for when they want to talk about a random variable is X, and I got stuck with a whole bunch of figures that had X in them, so I didn't want to change it to W. This is anything. We're going to apply it to our W, but for now it's some capital X. The other convention when you talk about random variables is you tend to use a capital letter to denote the random variable. OK. So we say that X is a random variable governed by a particular probability density function. You can compute the probability that X lies in some particular interval by taking the corresponding area under that PDF. So the PDF is the object that gives you probabilities as areas under the curve. So if you want the probability that the quantity X takes numerical values in this range, X1 to X2, then you integrate the PDF from X1 to X2, and this area is what you call-- that area is the probability. And the total area under the PDF, of course, has to be 1, because the probability that X lies somewhere is 1. The probability that X takes some value is 1. So this is how we work with PDFs. Again, you'll find when people want to sketch a PDF, the reflex is to sketch one of these bell-shaped things. And it turns out there's actually a reason for that.
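The histogram-to-PDF step is easy to reproduce numerically. In this sketch, a density-normalized histogram of many Gaussian samples has unit area, and the area over an interval approximates the probability of landing there (about 0.68 for plus or minus one standard deviation).

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, size=100_000)

counts, edges = np.histogram(w, bins=200, density=True)   # unit-area histogram
widths = np.diff(edges)
print(np.sum(counts * widths))                            # total area: 1.0

in_band = (edges[:-1] >= -1) & (edges[1:] <= 1)           # bins inside [-1, 1]
print(np.sum(counts[in_band] * widths[in_band]))          # ~0.68
```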
This bell-shaped thing-- or a specific bell-shaped thing called the Gaussian-- tends to arise in all sorts of applications, and that's a consequence of something called the central limit theorem. This is considered one of the most important results in probability theory. It actually dates back to about the 1730s as a conjecture, but it was Laplace who, in I guess the late 1700s, early 1800s, actually proved it. And it wasn't actually called the central limit theorem until much more recently, till about 1930 or so. And it was called that because it was the limit theorem that was central to all of probability-- that was the thinking. So here is the central limit theorem. It says that if you sum up a whole bunch of little random quantities that are not necessarily Gaussian, and if they each have finite mean and finite variance, the sum is going to have a distribution that's going to look increasingly Gaussian. So you could start, for instance, with a random variable that's described by this triangular PDF. Take a whole bunch of random variables generated according to that PDF. When I say generated according to that PDF, what I mean is that the probability that you get a value between any two limits here is the area under that piece of the triangle. Generate a whole bunch of these and sum them together, and you find that the resulting histogram starts to look Gaussian. You can start with another kind of distribution, and again, it starts to look Gaussian. And the more of these you add, the more it looks Gaussian. And so this can actually be made very precise. There's a very precise sense in which the limiting distribution in a situation like this is a Gaussian. So what is a Gaussian? I've got to describe that for you. I'll do it in more detail in a second. First, let me tell you how we define these two key parameters. These are things that you know from other sorts of contexts. The mean and the standard deviation or the variance-- you know them from quiz scores at least, but here is the mathematical definition in terms of a PDF. So if you have a PDF for a random variable capital X, the mean value of capital X is basically the average value of X weighted by the probability, which is what you expect. So it's X times the PDF integrated over all possible values. That's the definition of the expected value. And what we do when we take the mean value on a quiz is a sort of discrete version of this. So we're seeing how many people are in a particular bin, multiplying by the score for the people in that bin, and summing over all possible bins. That's one way to think of what this is doing, assuming you've got the right normalization of the PDF. And the variance is the expected squared deviation from the mean value. So here's a deviation from the mean value. You square it, and now you want to take its expected value, so you weight it by the PDF of X, and that gives you the variance. So the variance is the expected squared deviation from the mean value. OK, so the PDF is valuable in getting all of this. And to get a sense of what means and standard deviations and variances do-- I don't know if I said on the previous slide, by the way, that the standard deviation is the square root of the variance. Did I say that? Maybe not. But I have it at the bottom of the slide, right? OK. OK. So shifting the mean of a random variable: if I define a new random variable with the same PDF except for a different mean, what that signifies is that the PDF has just shifted over by that amount.
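The central limit theorem is easy to see numerically. This sketch sums uniform (decidedly non-Gaussian) random variables and checks that the tail fractions of the sums match the Gaussian's at 1, 2, and 3 standard deviations.

```python
import numpy as np

rng = np.random.default_rng(2)

sums = rng.uniform(-1, 1, size=(200_000, 30)).sum(axis=1)   # 30 uniforms per trial
sigma = sums.std()
for k in (1, 2, 3):
    print(k, np.mean(np.abs(sums) > k * sigma))
# roughly 0.317, 0.046, 0.003 -- the Gaussian two-sided tail probabilities
```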
So changing the mean and nothing else will just shift the PDF over to the corresponding position. Changing the variance from a small value to a large value will spread out the PDF, because the variance is capturing the expected squared deviation from the mean. So a higher variance PDF has got to have a larger spread. But because the area is normalized to 1, if it spreads out this way, it's got to come down on top, and that's what you're seeing here. All these pictures actually turn out to be drawn for the Gaussian, but my statements are more general here. But here's the Gaussian itself. So now I'm going back to my notation W. We're going to think of a random variable W which is going to be typical of all my noise samples. It's going to have some mean, which we'll be taking to be 0 in our examples. It's got a variance sigma squared. So if a random variable has this particular PDF, we call it Gaussian. That's the definition of a Gaussian random variable. The number here-- well, you've got to remember it at some point, but all it's doing is normalizing to unit area. So the key thing about a Gaussian is that it's an exponential-- with a negative sign there-- of the squared deviation from the mean, normalized by the variance with that extra factor of 2 there. So different choices of variance will give you different shapes here. So the smaller variances correspond to the more peaked and more sharply-falling PDFs. So let's see. How many standard deviations away from the mean do you have to go before you have very low probability of reaching there? Anyone? There's no unique answer to this, but yeah? AUDIENCE: 3? PROFESSOR: 3 is not a bad idea. So let's see. Let's take sigma squared equals 1. That's a variance of 1, so the standard deviation is 1. So for the red trace, by the time we get out to the number 3, we expect to actually see a very low value for the PDF. So 3 sounds about right. Does that hold up for the blue one? Sigma squared is 0.25. So the square root of that is the standard deviation, which is 0.5, so 3 times that. So when we get out to about 1.5, we should be essentially at 0. So don't forget the square root. The other thing-- actually, I should have commented on this earlier, let me show it to you-- on this slide that I had, I labeled this arrow here just schematically to show you that it's a measure of width. But the tag I put on it is standard deviation. Standard deviation is the thing that you want to use when you want to measure width on a distribution. That has the right units. The standard deviation, the square root of the variance, has the same units as X. If X is a voltage, the standard deviation has units of voltage. It would be a mistake to label a spread here by the variance. You want to think in terms of standard deviation when you're thinking about spread. So you define the variance and then take the square root to get the standard deviation. OK. So for our noise in this kind of setting, in our communications setting, we're going to assume that every noise sample was drawn from a Gaussian distribution with zero mean. Just the same kind of distribution that I showed you. So the only thing that's going to change from one example to another will be the variance. But for a given case, we're talking about IID noise. You're going to fix the variance, have zero mean, and all your noise samples will be pulled from that same distribution.
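The Gaussian PDF itself, as just described: an exponential of the squared deviation from the mean, normalized by twice the variance, with a normalizing constant out front for unit area. Evaluating it three standard deviations out shows the "essentially zero" point holds whatever sigma is.

```python
import numpy as np

def gaussian_pdf(w, mu=0.0, var=1.0):
    # (1 / sqrt(2*pi*var)) * exp(-(w - mu)^2 / (2*var))
    return np.exp(-(w - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for var, w3 in ((1.0, 3.0), (0.25, 1.5)):       # 3 sigma for sigma = 1 and 0.5
    print(gaussian_pdf(w3, var=var) / gaussian_pdf(0.0, var=var))
# both ~0.011: at 3 standard deviations the PDF is about 1% of its peak
```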
If you were actually looking at data here for these excursions-- if you were actually looking at what the excursions from the baseline are-- and you wanted, in a numerical experiment-- in a simulation setting, for instance, or in a physical experiment-- to get an estimate of what the mean and variance are, well, we've got very familiar expressions. You would take the sample mean or the sample variance. The square root of the sample variance would then be your estimate of the standard deviation. So we can come at the same objects-- well, we have the PDF, which is the mathematical construct, but in an experimental setting, this is how you would go about estimating these. And there's a whole big theory of estimation that tells you whether these are good estimates or not and offers alternatives, and we're not getting into any of that. We're staying close to the basics and close to what makes sense intuitively and what's essentially used all over. So now we have the task at the receiver of getting a bunch of samples like this and then trying to decide whether what we're seeing is a reflection of a 1 or a 0. If we had 0's sent from here, what we're going to see after we receive the noisy signal is perturbed samples. And so we're going to look at a particular sample and try and decide whether in that bit slot what was sent was a 0 or a 1. I'm going to actually use a scheme for illustration here that's not the scheme that I've suggested here. Here, I suggested something that sends 0 for a 0. If I'm communicating a 0 and I'm sending some other voltage level when I want to communicate a 1, I'm going between 0 and 1. It turns out on the physical channel, if you've got a transmitter with a certain peak power, you're probably better off using a plus V to indicate a 1 and a minus V for a 0, because you're using that transmitter at full power all the time. So you're actually trying to overcome the noise as strongly as possible. So that's the scheme I'm going to consider. I'm going to consider that when you want to signal a 1, what you're doing at the transmitting end is sending out L samples at plus some peak voltage Vp. And when you want to signal a 0, you send L samples at minus Vp. So this is what we refer to as a bipolar signaling scheme. So it would be something like this. This is the xn. And this is what I'm using to signal a 1, and this is what I'm using to signal a 0. But in terms of actual voltage levels, this is minus Vp and Vp here. And on the receiving end, what I'm getting at any particular sample-- so I pick one particular sample to look at, and when I look at that sample-- let's say at sample n sub j. So maybe I'm looking in the j-th bit slot and I pick one particular sample time; let me call that n sub j. And I have to decide, am I looking at plus Vp with noise or am I looking at minus Vp with noise? That's a decision. I know the Vp's-- assume that we've taken care of the scaling and so on across the channel. And I know the characteristics of the noise. I know that the noise samples are Gaussian, zero mean, and some variance. So if I draw a picture that's turned sideways here in terms of the received signal, let's see. I might get something centered around minus Vp or something centered around Vp. If a minus Vp was sent, then it's got noise added to it. The noise has a Gaussian distribution. So this is the distribution of values I expect if a 0 was sent. So this is-- let me call it-- it's the distribution of Y-- I'm not going to put all the attachments here-- if a 0 was sent.
That's my shorthand notation for the density of Y assuming a 0 was sent. I haven't drawn a very good Gaussian, but you get the idea. And here's the distribution of Y if a 1 was sent. So what I'm actually measuring is some number out here. I get some number. And I've got to decide, did that come from having sent a 0 and getting this much noise, or did it come from sending a 1 and getting this much noise? That's the problem. So if 0's and 1's are equally likely, what do you think is a sensible rule here? Just pick a threshold where these two cross. Threshold in the middle. So if the sample is above the threshold, you declare a 1. If it's below the threshold, you declare a 0. What if 0's and 1's were not equally likely? Suppose it was much more likely that you would get a 1. And suppose we're still thinking in terms of threshold rules-- what might you want to do? Suppose it's much more likely that we get a 1. AUDIENCE: Move the threshold to -- PROFESSOR: Sorry? AUDIENCE: Move the threshold to the left. PROFESSOR: Move it to the left. So you want to actually allow for the fact that most of the time you're getting 1's, and so you really have to get close to the 0 before you're going to declare a 0. So your bias kind of gets built in. Now this is just thinking as an engineer about what you might do. It turns out that for Gaussian noise, the optimum decision rule in terms of minimizing the probability of error is exactly a threshold rule of this kind. And the analysis will tell you where that threshold should be. So we're not getting into proving that this is the optimum, but it turns out with Gaussian noise, the minimum probability of error decision rule for this kind of a hypothesis test-- this is a classic hypothesis test-- is to pick a threshold. Now that's not necessarily true for other sorts of distributions-- it's not true in other settings-- but for the Gaussian it turns out that's what you have to do. So let's just assume equal prior probabilities. So 0's and 1's come at you with equal probability, and we now have to figure out what the probability of error is. So there's a slide here with some computation. Let me just walk you through that. We don't have to follow all the details-- you can study it at leisure-- but it's the same picture I showed. OK? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah? AUDIENCE: [? Sorry ?] [? to ?] [? interrupt, but I ?] have a question-- PROFESSOR: Yeah. AUDIENCE: --the Gaussian. PROFESSOR: [? About ?] [INAUDIBLE]?? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. AUDIENCE: Is that true when the two Gaussians have different variances? PROFESSOR: No. OK, I'm assuming-- OK, the question-- the comment was that this rule of the threshold being the optimum is not necessarily true if the Gaussians have unequal variances. But I'm assuming IID noise. I'm assuming Independent Identically Distributed noise. So the noise samples are governed by the same Gaussian right through, and then this turns out to be the optimum rule. Thanks for catching that. So you can imagine the picture with-- suppose the noise is very sharply peaked for one of these cases and very shallow for the other one. So there's high variance for the 1's and there's low variance for the 0's. You might then anticipate that if you got a signal way over to the left here, you're going to call it a 1, not a 0. So each case needs to be dealt with separately. But assuming these are equal variance, which goes with the IID case, this is the optimum rule. OK. So let me just step through this.
What we're asking now is: what's the probability of making an error? Well, let me actually write down an expression here. So the probability of an error-- this is the general expression. It's the probability that I send a 0-- let me just say that this is the probability of sending 0-- times the probability of declaring 1 given that 0 was sent. And then there's the other possibility: the probability that I sent a 1, and here's the probability of declaring a 0 given 1 was sent. So it turns out these are the only two ways you can make an error, and these are mutually exclusive, and so what you're doing is adding the probabilities of the two ways of making an error. You can either have a 0 sent, and then the question is, what's the probability of declaring a 1 if a 0 was sent? And then you have the corresponding term on the other side. If P0 equals P1-- in other words, if both of them are 0.5; this is going to be 1 minus P0-- then you can pull that out, and what you're looking at for the probability of error is just the sum of the areas under these two tails. Oh sorry, not just the sum of the areas. If these are both 0.5, you pull out the 0.5-- yeah. So it's 0.5 times the sum of those two areas. Well, in the symmetric case, these two areas are the same. The area to the right of this threshold under the Gaussian here is the probability of declaring a 1 given that a 0 was sent. The area under the tail to the left here is the probability of declaring a 0 given that a 1 was sent. Those two areas are the same. So you'll discover that the probability of error is just the area under one of these tails. Just the area under one of those tails. So that's all you have to compute. So how do we do that? Well, it's the area under a Gaussian. We write down the Gaussian. Let's pretend that this was 0 and this was Vp. It doesn't make a difference as far as the computation of areas goes, but it makes the expressions easier to write. So I'm saying that the area under the tail here is equal to the area under the tail there. I can do it either way. I can either center the Gaussian at minus Vp and look at the area to the right of 0, or I can center the Gaussian at 0 and look at the area to the right of Vp. And the way the expression is written here, it chooses to do it the second way. So what we're saying is, here is the Gaussian. It's centered at 0, so I don't have to subtract any term off that term in the numerator. Here's the 2 sigma squared in the denominator. And I integrate it from Vp onwards. There's this notation introduced: Vp is the square root of ES. The reason is that we're thinking in terms of the energy of a single sample-- or the power of a single sample; they turn out to be the same thing because it's just a single sample. So it's just the notation that's traditionally used. But what we're talking about is Vp there. So the area from Vp to infinity under the Gaussian with the normalization factor here. Now this is not an integral you can evaluate in closed form. It is a tabulated integral-- tabulated most conveniently in terms of something called the error function. And so when you work through the calculus-- and I won't show it here; it's in the book, and you might do it in recitation-- you discover that the probability of error is this error-function expression in the square root of ES over N0. N0 is notation for 2 sigma squared. If I translate that back to notation we've been using, it's just Vp over sigma.
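That closed-form answer can be sanity-checked by simulation. Here is a hedged sketch using NumPy and math.erfc; the particular Vp and sigma are arbitrary choices. The theoretical value 0.5 erfc(sqrt(ES/N0)) should match the error rate of a threshold-at-zero receiver on bipolar signaling.

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(2)
    Vp, sigma = 1.0, 0.7
    Es, N0 = Vp**2, 2 * sigma**2            # N0 is notation for 2 sigma^2

    p_theory = 0.5 * erfc(sqrt(Es / N0))

    # Monte Carlo: equally likely +/- Vp levels plus Gaussian noise,
    # decided with a threshold at 0.
    n = 1_000_000
    sent = rng.choice([-Vp, Vp], size=n)
    received = sent + rng.normal(0.0, sigma, size=n)
    decided = np.where(received > 0, Vp, -Vp)
    p_sim = np.mean(decided != sent)        # should be close to p_theory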
So the error performance-- the probability of error-- is a function of the ratio of the peak amplitude of the signal to the standard deviation of the noise. That's sort of the square root of the SNR. The SNR would be the square of the amplitude over the square of the standard deviation. So this is the square root of the SNR. And what does this function look like? We can plot it. So that's exactly that computation. This is a simulation and the theory overlaid on each other, and we have the 0.5. This function is called the complementary error function. The C is for complementary, erf is for error function, and here's the square root of ES over N0, which we had in the previous expression. So you're really thinking of signal-to-noise ratio along this axis in dB and the probability of error on a logarithmic scale down here. So as the signal-to-noise ratio increases, as the signal becomes more powerful relative to the noise, the probability of error decreases. Visually, what's going on? Let's go back to this picture. When the noise decreases relative to the signal, what's happening is that these Gaussians are getting more peaked and they're pulling in more tightly, and so there's less chance of confusing the two cases. So it's as simple as that. It's the separation between these two levels divided by the standard deviation of the noise that's really going to determine performance. How far apart are these two cases relative to the standard deviation of the noise? That's the square root of the signal-to-noise ratio. That's what determines the probability of error in this case. OK. So are we done, or could we be doing better? If you think of what we did, we looked at the samples in a bit slice, in a bit slot. We took one of those samples and we carried out this decision rule on it. Could we be doing better than that? Yeah? AUDIENCE: We could look at one more sample? PROFESSOR: We could look at more than one sample. This was a little bit arbitrary. It was conservative. Why you often do that is because the number of samples in a bit slot is small and you don't want to get near the edges, because you're a little worried about the transients. If you've got enough samples in a bit slot and the transients have died out, then maybe you can just pick out a bigger chunk in the middle. And so that's what we're going to try to do here. OK. So it's the same setting, but we're going to average M samples. We've got L samples per bit. We may not be comfortable capturing all of those in our averaging, because there's some stuff at the edges, so let's pick M of them-- maybe less than L. Take M of them and compute the average. And I'm doing this just for one of the cases. You'd have to do the same thing for the minus Vp case. So the question is, what does the average do? So why did you want to average them? What was your intuition? AUDIENCE: Because that would-- it [? would be ?] [INAUDIBLE].. PROFESSOR: OK. So here's the key thing. If you've got independent noise samples and you average them, you're going to decrease the variance. If you've got M independent noise samples from an IID process, you decrease the variance by a factor of M. This doesn't hold if the noise samples are not independent. In fact, if one noise sample equals the other, then when you add the two, you get something whose variance is four times rather than just twice. So it's critical that these be independent.
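Both of those last claims are easy to verify numerically-- a small sketch with illustrative numbers:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma, M = 1.0, 16

    # Averaging M independent noise samples divides the variance by M.
    w = rng.normal(0.0, sigma, size=(200_000, M))
    print(w.mean(axis=1).var())     # close to sigma**2 / M

    # Contrast: if one "sample" is just a copy of another, adding the two
    # quadruples the variance instead of doubling it.
    w1 = rng.normal(0.0, sigma, size=200_000)
    print((w1 + w1).var())          # close to 4 * sigma**2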
So if we've got independent samples-- independent noise samples from one sample to the next-- and we average them, well, let's just average both sides of this equation. We've got the average of Y going to be the average of these values, which is just going to be Vp again because it's constant at Vp, plus the average of W. Here's the other interesting thing. We're not going to try proving this, but it turns out that the average of a sum of independent Gaussians is, again, Gaussian. You might believe that if you think of the central limit theorem. You think of each of these Gaussians being approximated by sums of random variables. So the sum of these Gaussians is then a sum of just a larger number of random variables, and that should still be Gaussian. So the sum of an independent set of Gaussians is, again, Gaussian. So all I need to know for this average W, since it's Gaussian, is what is its mean and what is its variance. It turns out if you add up a bunch of zero mean random variables, you get something with zero mean-- no surprise. And if you add-- if you take the average, then the variance actually drops by that factor M. So what you're going to do is take the average of the signal, average of the noise. That shrinks the noise component. You have the same kind of picture but now with a higher signal-to-noise ratio. Now what you've got in the numerator instead of ES is EB, which is M times ES. You're summing the energies of all the samples that you've taken. And that's what we refer to as EB-- it's the energy of the bit. All right. It turns out that that has all sorts of implications. You certainly want to be averaging if you've got this kind of setting, because otherwise you're leaving all these samples on the table and not making good use of them. So if you're really getting ambitious, you really want to be extracting all of that. Also, if you want to maintain the same error performance and the noise intensity increases, then you're going to want to have more samples per bit. You may want to slow down your signaling rate so you can put more samples per bit. It turns out in the deep space probe examples that we've been talking about, that's exactly what's happening. If you look at Voyager 2, it was transmitting at 115 kilobits per second in 1979. That's the year I joined the faculty-- that's a long time ago. That was near Jupiter. Last month-- I mean, it's gone past Jupiter, Saturn. The other planet I only like to say the Greek name of because it comes out wrong when I say it. It's Ouranos. And then Neptune. So it went past all of these. And now it's about 9 billion miles away. It's twice as far away from the sun as Pluto is. But look at the transmission rate now. It's 160 bits per second. So it's greatly reduced. And the reason is that over this extended interval, the energy per sample that arrives at Earth is just minuscule. I mean, it was small enough to begin with from Jupiter, and look at what it does now. So it's about 1,000 times less in power, and you've gone down more or less 1,000 times in your signaling rate, because you're trying to put that much more time into the signal. So these trade-offs are driven by trying to get the same energy per bit for a given noise level to maintain the performance. As I was reading up on this, there were little references to things that went wrong. There are only a handful of things that are listed as having gone wrong, but they turn out to be related to decoding.
So there was a command that was incorrectly decoded and kept some heaters on for too long and caused some malfunction. Here was a flipped bit. These are a few of only a small list of things that are listed as having gone wrong. But a flipped bit here caused a problem. You've got very few bits in these computers to begin with. Remember the numbers we had last time. So a flipped bit can cause trouble. OK. Let's do one last piece here. We're going to try and be even less conservative. So suppose I know that when a 1 is sent, what I receive is a waveform of a particular type. So the piece of the response corresponding to this has some particular shape. Suppose I know that. OK. So nothing is constant here. This is the actual y of n sequence. And then to this, I'm adding noise. So here's the thing. I've got a yn which is no longer just a constant plus noise; it's some known profile plus noise. That known profile is actually what the xn is going to look like when it goes through the channel. I should perhaps have called it y0 of n, but let's stick to x0 of n. So x0 of n is known, and we've got the noise. The question is, do you want to just be averaging, or do you want to try something else? If I've got this kind of signal received and I've got the same amount of noise added to each sample, which of these samples is more trustworthy? Which sample do you want to weight more? I've got some amount of noise adding into all of these samples, so there's some standard deviations' worth on each of these. Which is the most trustworthy sample here? Yeah? AUDIENCE: The one on the right? PROFESSOR: Yeah. It's the one on the right, because it's got the largest amplitude. By itself it has the largest signal-to-noise ratio. So if you're going to combine these samples, you would think that you would want to put more weight on the sample that was larger. So you can actually formulate that analytically. So we're going to combine the received samples with some set of weights an. Here's what it's going to do on the right-hand side of that equation. Again, when you take a weighted combination of zero mean Gaussians, you get a zero mean Gaussian. So all you need to know is, what's the variance of a scaled Gaussian? So let's see. If I have a wn having variance sigma squared, what do you think is the variance of 3 times wn? 3 times wn means the excursions are scaled by 3, so what's the variance? 9. So scaling by a particular number scales the variance by the square of that number. So the Gaussian you're adding in here has a variance which is sigma squared times the sum of the w squareds. Sorry-- the sigma squared times the sum of the a squareds. That's what the variance of the Gaussian is. So you can actually write a very simple optimization problem: what choice of weights maximizes the signal-to-noise ratio? And you discover, indeed, exactly that you're going to put the largest weight on the largest sample. And when you do that, the resulting signal-to-noise ratio is, again, the energy of the signal that was transmitted divided by the variance. So if you do the optimum processing with this so-called matched filtering, you're going to get the energy of the sample-- sorry, energy of the bit-- over the noise variance governing the performance. So it's the bit energy over the noise variance that's going to determine performance, provided you milk that bit slot for everything it's worth by doing the matched filtering. OK. We'll leave it at that for today.
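For the matched-filtering idea, here is a rough sketch of the sample-combining step only; the pulse shape x0 is an invented example, not anything from the lecture, and this is not a full receiver.

    import numpy as np

    rng = np.random.default_rng(4)

    x0 = np.array([0.2, 0.5, 1.0, 1.5, 2.0])        # known noise-free profile
    sigma = 1.0
    y = x0 + rng.normal(0.0, sigma, size=x0.size)   # received samples

    # Matched filter: weight each received sample in proportion to the
    # known signal value there (largest weight on the largest sample).
    a = x0
    z = np.dot(a, y)

    # Resulting SNR: (sum a*x0)^2 / (sigma^2 * sum a^2) = sum(x0**2) / sigma^2,
    # i.e. the full energy of the profile over the noise variance.
    snr = np.sum(x0**2) / sigma**2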
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
14_Spectral_representation_of_signals.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR 1: OK, let's launch right into it. Jacob and Uri were inspired by the echo channel to try out a simulation of what I'd put on the board last time, or on my slides. So what you're going to hear is Jacob's message going into the echo channel. Remember, that was something with a unit sample response of the type delta of n plus-- and actually, I think in their case now it's plus 0.999 delta of n minus 1-- so it is something like this. Sorry? Oh, n minus 4,000, OK. And so you'll hear the original message, I think. You'll hear the message going through the echo channel. And then you'll hear the message cleaned up with the receiver filter. That's just the inverse filter. And then you'll hear what noise does to it, and two flavors of noise, I think. ECHO VOICE: This is a test of the 602 deconvolution system. If this is a real deconvolution, you are instructed to go as quickly as possible to the frequency domain to make an assessment of the effectiveness of this approach. This is a test of the 602 deconvolution system. If this is a real deconvolution, you are instructed to go as quickly as possible to the frequency domain to make an assessment of the effectiveness of this approach. [BELL DINGING] PROFESSOR 2: I'm just going to increase the delay, so we can hear the echo more clearly. ECHO VOICE: This is a test of the 602 deconvolution system. If this is a real deconvolution, you are instructed to go as quickly as possible to the frequency domain to make an assessment of the effectiveness of this approach. PROFESSOR 2: So you can all hear the echo in that. And the next one will be it cleaned up-- ECHO VOICE: This is a test of the 602 deconvolution system. If this is a real deconvolution, you are instructed to go as quickly as possible to the frequency domain to make an assessment of the effectiveness of this approach. PROFESSOR 2: So this was just doing the deconvolution, assuming the channel had no noise. The next one is what happens if there's even a small amount of noise in the channel-- so little that you couldn't hear it in the echoed signal. ECHO VOICE: This is a test of the 602 deconvolution system. If this is a real deconvolution, you are instructed to go as quickly as possible to the frequency domain to make an assessment of the effectiveness of this approach. PROFESSOR 2: So you can hear the noise building up because of the deconvolver. If you actually had noise at a particular frequency, you end up with an even-- ECHO VOICE: This is a test of the 602 deconvolution system. If this is a real deconvolution, you are instructed to go as quickly as possible to the frequency domain to make an assessment of the effectiveness of this approach. PROFESSOR 2: So one of the difficulties with deconvolution is it can be perfect if you have no noise. But even small amounts of noise can really mess up the deconvolution. It magnifies the small noise sources. PROFESSOR 1: OK, great. Thank you. Thank you. And that was a few lines of MATLAB, right? Same thing can be done in Python. All right, we continue. So we're going to talk today about spectral content of signals, after having spent some time on the frequency domain-- the frequency response of LTI systems. Let's get this up here. So we've talked about frequency response.
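The "few lines" claim is believable. Here is a hedged Python sketch of that echo channel and its inverse filter, with white noise standing in for the recorded speech; the delay, echo gain, and noise level are taken from or inspired by the demo, and it reproduces the qualitative effect: perfect recovery without noise, noise build-up with it.

    import numpy as np

    rng = np.random.default_rng(5)

    D, alpha = 4000, 0.999
    x = rng.normal(size=20_000)        # stand-in for the recorded message
    y = x.copy()
    y[D:] += alpha * x[:-D]            # echo channel: y[n] = x[n] + alpha*x[n-D]

    y_noisy = y + 1e-4 * rng.normal(size=y.size)   # tiny channel noise

    # Inverse filter (deconvolver): x_hat[n] = y[n] - alpha * x_hat[n-D]
    x_hat = y_noisy.copy()
    for n in range(D, x_hat.size):
        x_hat[n] -= alpha * x_hat[n - D]
    # With no noise this recovers x exactly; with noise, the recursion keeps
    # feeding the noise back almost unattenuated, so it audibly builds up.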
You've seen the definition. If I give you the unit sample response of a system over here, you know how to compute the frequency response. And then we went through how you can go the other way with the DTFT to compute the frequency response and the inverse DTFT to compute the time domain signal from the frequency response. And what we said last time was that what you're doing with a unit sample response, you could actually do with any signal. So you can take any signal x sub n, compute from it the DTFT, the Discrete Time Fourier Transform, with the same formula. What you get is this object that can be used then to reconstruct the signal-- again, the same formula, all right? So there's just a change of perspective. There's nothing different here. The key observation now, though, is that the formula on the left, the inverse DTFT, is actually allowing us to represent x sub n, the time domain signal, as a weighted combination of exponentials of this type. And the reason that's important is that we know how to deal with signals of that type very easily. We already know that if you have e to the j omega n going into a system with frequency response-- h omega, so an LTI system with that frequency response. Let me make this a specific frequency omega 0. What comes out is the frequency response evaluated at omega 0, multiplying what went in. All right? Nothing more complicated than that. So what now if you had an x sub n going in that was a weighted combination of terms of this type? And I'm going to take a weighted combination that's actually a continuum. It's not just a finite number of terms. I'm going to actually take this particular weighted combination. So over some interval of length 2 pi, I take a weighted combination like this. So this is going over all frequencies in our interval. Let's take minus pi to pi. I might as well write in minus pi to pi-- keep it explicit. You can think of this as being approximated by a sum in the usual fashion. I'm not going to write this out. But we know how to approximate integrals by sums. What the sum will have is, for instance, a typical term of the type e to the j omega 0 n. And the weight that multiplies it will be x omega 0 d omega, right? So we think of x omega 0 d omega as being the amount of the exponential at frequency omega 0 that's in x sub n. So here is a representation of how the signal is made up. So then what would you say is the output of the system? If that's the input, and this is an LTI system, what's the output going to be? Any ideas? Somebody? I thought I saw a hand. No ideas? What if instead of this, I had a1 e to the j omega 1 n plus a2 e to the j omega 2 n going in? Suppose that had gone in? What would be coming out? Folks, we're just two weeks from a quiz, here. Yeah? STUDENT: [INAUDIBLE] PROFESSOR 1: So can you tell me explicitly what I would get in this case? If this was x sub n, then y of n would be? STUDENT: [INAUDIBLE] PROFESSOR 1: a1 h. STUDENT: [INAUDIBLE] PROFESSOR 1: Is it just omega, or? STUDENT: It's omega 1. PROFESSOR 1: Omega 1, right? It's the frequency response evaluated at the frequency that you're interested in, and then e to the j omega 1 n, and then the response to the other term. This is what superposition is about. So that wasn't so hard. What if instead, x sub n is given by a continuum of such exponentials? Integrals are essentially linear combinations, but taken to the limit, where it's not just a finite number. So I'm not asking for a proof of anything. I'm asking for your conjecture as to what the answer might be.
This is how math is done, by the way. You conjecture what the result might be based on gut instinct, based on well-educated intuition. And then you go back and construct a proof and hide all your tracks. But engineers like to work with intuition, and often will stick with that. Yeah? STUDENT: [INAUDIBLE] PROFESSOR 1: OK, so what would-- can you give me the explicit expression? What's your guess? This is going in. It's a weighted combination of exponentials with weights that are given by x omega d omega. So-- STUDENT: So you would get x omega d omega [INAUDIBLE].. PROFESSOR 1: Times what? I didn't hear the last piece. STUDENT: Times [INAUDIBLE]. PROFESSOR 1: Where's the j omega coming? What's the j? I'm missing-- oh, e to the j omega? All right. So start again. Oh, you don't want to? STUDENT: No. PROFESSOR 1: OK. You're on the right track. You got us started. Anybody else? I could show it to you on the next slide, but that will take all the fun out of it, right? STUDENT: [INAUDIBLE] PROFESSOR 1: OK, it's probably that I didn't hear. Can you tell it to me? STUDENT: x omega [INAUDIBLE]. PROFESSOR 1: x omega times h omega or e to the j omega n? What did you say? STUDENT: [INAUDIBLE] PROFESSOR 1: I'm willing to-- STUDENT: [INAUDIBLE] omega n times [INAUDIBLE].. PROFESSOR 1: That's still the part that went in, right? Now it's got to get mapped by a frequency response. STUDENT: Yeah, that's what I said. PROFESSOR 1: OK, so maybe you'd said that, and I didn't hear it. But now we've got to assemble this over all possible frequencies. Well, we have the 1 over 2 pi. Maybe this was said already, and I just couldn't hear it. All we're doing is we're saying here's a weighted combination of exponentials going in. The weights are given by the x omegas, or x omega d omega, if you want to think of it that way. So what comes out is the combination of responses to each of those. These are the things that go in. For each frequency, you multiply by the corresponding value of the frequency response, and do that for all the frequencies of the input. So this is just applying linearity and superposition. So if I compare that with this expression, which just tells me how the time domain signal yn relates to its DTFT, right? This is just writing the same thing for y that I wrote for x over there. If you compare the two, what do you discover about the DTFT of the output? Where's the weighted combination? It's whatever multiplies e to the j omega n in this expression, right? So it's just going to be h omega x omega. So we've done a complete analysis of the input-output response of the system for essentially an arbitrary input with just a simple multiplication. So look what we've done. We've taken the input that we were given, computed the DTFT of it, which gives us the spectral content, and I'll spend some time giving you intuition for that. That's the spectral content of the input signal. What we've discovered is that the spectral content of the output is the spectral content of the input scaled by the frequency response. And once you have the spectral content of the output, you can reconstruct the time domain signal. So the big difference here is there's no convolution. You're just doing a multiplication. Instead of doing y of n equals h convolved with xn, we're just doing a multiplication. So once again, you see that convolution in the time domain maps to multiplication in the frequency domain. All right, so we'll build up to the story again. Let's get some intuition for spectral content.
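That multiplication property is easy to confirm numerically. A minimal sketch with NumPy follows; the three-tap unit sample response is an invented example, and the FFT is used simply to evaluate the transforms on a grid.

    import numpy as np

    rng = np.random.default_rng(6)

    h = np.array([1.0, 0.5, 0.25])      # illustrative unit sample response
    x = rng.normal(size=64)

    y_time = np.convolve(h, x)           # convolution in the time domain

    N = y_time.size                      # common zero-padded length
    Y = np.fft.fft(h, N) * np.fft.fft(x, N)   # H(Omega) X(Omega) on a grid
    y_freq = np.fft.ifft(Y).real

    assert np.allclose(y_time, y_freq)   # multiplication in frequency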
And let's take a particular example. So suppose I have an x of n that's a one-sided exponential. You've probably done things like this in recitation already. So this is a signal that starts at time 0 at the value 1 and halves each time. So it's a discrete time exponential. And so what's the DTFT? Well, you're going to sum from m equals minus infinity to infinity. But actually, this only exists for positive-- for non-negative time. So you're going to get 0.5 to the n. Or let's keep it at m. That's just the definition. Isn't it the definition? I'll just use the definition. So you can now sum an infinite series here. And what do you get? You get 1 over 1 minus 0.5 e to the minus j omega. This is just summing a geometric series, because each term here follows from the previous one by multiplying by the factor 0.5 e to the minus j omega. So it's a geometric series with that ratio. And so this is what the sum works out to be. So if you wanted to figure out the spectral content, you first compute the DTFT. And then the most helpful way to get a feel for the signal is to look at the magnitude of the DTFT. And that's what's actually plotted in this case. This is taken from somewhere that used slightly different notation, but we'll talk through it. What happened to the top of my slide, here? OK, it doesn't matter. What I've plotted here is the magnitude of x and the phase of x. To get that, you'll actually have to convert this to magnitude and angle form. I'm not doing that for you. I'm assuming you've had practice or will get practice in recitation. But the result of that is a magnitude that looks like this and a phase that looks like this. The horizontal scale, just to make the point that this is a periodic object, just like frequency response-- it actually goes from minus 4 pi to plus 4 pi. But the interval of interest is really just minus pi to plus pi. All right? So it's just that central portion that's of interest-- similarly here, minus pi to plus pi. And then it replicates periodically outside of that. So we didn't really need to show it to you outside of that. This is just to make the point. Another thing to observe is that the magnitude is an even function of frequency, and the phase is an odd function of frequency. So these are elementary checks that you should make. If you get an answer that doesn't satisfy those properties, you've gone wrong somewhere along the way. If you look at the top plot in this set, the top plot is exactly a signal of that type, except I've chosen a different number. Instead of 0.5 to the n times u of n, it's something else. But here's a geometric-- sorry, a discrete time exponential, or geometric series. Now I've just plotted the DTFT from minus pi to pi to show you what the spectral content of the signal is. And I'm just plotting the magnitude. Ignore these labels. These are the same figures you saw earlier for little h and big H. I'm using them again, except now I'm thinking of this as a signal, and this is its DTFT. This is a signal, x sub n, and this is its spectral content. The relationship is exactly what we had with frequency response. So where is the spectral content concentrated? Is it at low frequencies or high frequencies or intermediate frequencies for this first example? Concentrated around 0, so you'd say it's concentrated at low frequencies. There is content at all frequencies, though, so this doesn't dip down to 0 anywhere else. You have to assemble a combination of sinusoids at all frequencies to construct this signal here.
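For this particular example you can check the closed form directly-- a quick sketch comparing the geometric-series answer with a truncated version of the defining sum:

    import numpy as np

    omega = np.linspace(-np.pi, np.pi, 1001)

    # Closed form: X(Omega) = 1 / (1 - 0.5 exp(-j Omega))
    X_closed = 1.0 / (1.0 - 0.5 * np.exp(-1j * omega))

    # Direct sum of the definition, truncated (0.5**60 is negligible):
    m = np.arange(60)
    X_sum = ((0.5**m) * np.exp(-1j * np.outer(omega, m))).sum(axis=1)

    assert np.allclose(X_closed, X_sum)
    mag, phase = np.abs(X_closed), np.angle(X_closed)   # even and odd in omega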
Here is a signal. I've had to change the horizontal scale because this is a signal that evolves much more slowly. Again, I can ask, what's the spectral content of it? I get something which has a peak near 0 frequency. So it's only got low frequencies in it and has very little high frequency content. You can also start to develop ideas for how fast-- how high a frequency you ought to expect to see here. So for instance, what's the fastest wiggle that you see in this signal? About how long does it take to-- if you were thinking of underlying sinusoids, what's the fastest wiggling you're seeing over here? Well, to my eye, this kind of rises and curves within about 18 or 20 samples, right? So this might be a half period of an underlying sinusoid. So if I thought of the period as being-- these are just rough calculations. But it helps you understand what we mean by spectral content and helps you as a way of checking answers. But let's see. If I said 18 was approximately a half period of an underlying sinusoid, and I don't see anything faster than that-- period is 2 pi over omega 0. So I'm saying that's approximately 18. So the frequency that I expect to see, the fastest frequency there, is 2 pi over 18, or pi over 9. Is that roughly consistent with what we're seeing there? Does the spectral content drop off? Well, here is pi over 4. Here's pi over 8. Somewhere around pi over 9, we've run out of underlying components. It's because the frequency content is limited to that range that the associated signal doesn't wiggle any faster than this. Here's another example-- a signal that actually-- well, in this case, it seems to have some fairly regular periodicity to it, and then it damps out. By the way, in all these cases, I'm assuming the signal's identically 0 outside of what I've shown you here. If you take the DTFT of this, you find that the spectral content is what-- low frequency, mid frequency, high frequency? STUDENT: Mid? PROFESSOR 1: Mid frequency, right? Because this is low frequency. Here is 0. This is high frequency. This is just a reflection on the left side. So at some intermediate frequency, there's a peak in the spectral content. And again, you can go through the rough calculation I just made. So we see some oscillation here. Let's see. Let's estimate the period. That's 1, 2, 3, 4, 5. Let's say it's about a period of 5 for those oscillations. So 2 pi over omega 0 is approximately 5. So omega 0 is approximately 2 over 5 pi. So we expect to see a spectral peak somewhere around 0.4 pi. Here's 0.25 pi. Here's 0.5 pi. We're about right. So make these sorts of checks. Yeah, question? STUDENT: So for that first calculation with the [INAUDIBLE]. PROFESSOR 1: Oh, sorry. Yeah, I should have done twice this, right? What I did was estimate-- thanks for catching that. I estimated that about 18 is the half period. And so the period I should have had-- 36 here. Good. This is all ballpark, of course, but no reason to make it worse than it has to be. Vibrating, right? And that's not exactly where it sits down there, but it's in the right region. Now here's the part that we've already seen on the board. Once you know the spectral content of the input, you can assemble-- or you can think of the time domain signal as being made up of those components. Correspondingly, that's what the output is. And all we're doing is invoking the fact that this is an LTI system for which we know the frequency response gives us the output for exponential inputs.
And then we compare that with what we expect to be seeing for the DTFT of y, and we make this conclusion. So this is exactly what we had earlier. One thing to keep in mind: the DTFT, the frequency response-- these are all complex functions of omega in general. Each of them will have a magnitude and an angle. So make sure you understand why the magnitude of y is the product of the magnitudes of h and x, and why the angle of y is the sum of the individual angles, all right? It's basically the fact that for a complex number c, you can write it as the magnitude times e to the j angle. So that's really what's being invoked there. So really, what the story is about-- we've done a lot of math along the way. But this is really the story. And I've only exposed a little part of it for you because I've only dealt with DT signals. But the same thing holds for continuous time signals. A huge class of such signals can be written as linear combinations of sinusoids. And when I say "linear combination," it could be a combination of a discrete set, finite or infinite. Or it could be a continuous combination of exponentials or sinusoids under an integral sign, but the idea is the same. If you've done 18.03, you've seen this kind of thing happening, at least for periodic signals. And then the other piece of what we rely on is that LTI systems are very easy to understand in terms of their action on sinusoids. So once you put these two pieces together, you've got a very powerful way to analyze LTI systems. So just to go back to the kind of example I had last time, which you'll be-- or you're already-- dealing with in the lab: we're talking about an audio channel, for instance. The frequency response, in this case-- this is the magnitude. Some characteristic here-- this is a bit of a cartoon. But let me show you more typical experimental plots of frequency response. This is frequency response magnitude. In most of these plots, people don't show you the phase. Part of the reason, or maybe the major reason, is that for audio, the ear is not all that sensitive to phase. If we were doing the analogous thing for video, then you'd be very concerned about phase. But in audio characteristics, people will typically only show you the magnitude, because the phase distortions aren't picked up quite that readily. So here are three speakers. If you look on the site there, you'll see many more tested. This is the frequency range. Now, I should make some comments about that. We're talking about doing minus pi to pi. We've been talking about filters with frequency responses that we show from minus pi to pi. So for instance, if I had a band pass filter, it would be something with-- in the ideal case, something like this, right? This is because we're writing things in terms of big omega for a discrete time filter. These are actually written-- the scale here is hertz. And they're talking about the action on an underlying continuous time signal. So you actually need a way to go from an underlying continuous time signal that's sampled at a particular sampling rate-- let's say f of s samples per second-- to a corresponding omega for the underlying discrete time sequence. So the question is, how does fs map to omega? And I had a slide last time. I haven't gone through it in detail. Maybe we'll have you work through it on a homework problem. But this is actually the mapping.
If you have a sequence that comes from an underlying continuous time sinusoid by sampling at fs samples per second, and you're doing all your calculations in the discrete time domain, if you want to think about what that means for the underlying continuous time domain, you want to map pi to fs over 2. It's not the omega that maps to that. It's the pi. So for instance, in the lab, I think you're using 48 kilohertz, at least for some part of it, as a sampling rate. You get a discrete time sequence out of that. You do various DTFT-type computations-- spectral content or frequency response. Then you plot them on this kind of a scale. If you want to think about what the underlying continuous time frequency is, well, that's 24 kilohertz in this case, and minus 24 kilohertz is at this end. So when you're trying to visualize what this characteristic is telling you about what you're seeing with a discrete time sequence, that's really the mapping. The other thing about these characteristics is that people only plot the positive frequency part. So they ignore the negative frequency because of the symmetries there. In applications, when they give you a frequency response, they will typically just give you the positive frequency part of that. All right, so this is what we're seeing here. This is the characteristic of the LTI system that you're going to be sending signals into. And then you've got to characterize the signals that you're going to send through it-- voice or music or whatever. And this is a figure I showed you last time. But basically, you're looking at the spectral content of the signal of interest and seeing how it matches up with the channel that you have. And if you compare with-- well, let's actually look at the previous case and get a few landmarks, here. So let's take the Sony speaker, for instance, down here. OK, so it's got a fairly flat frequency response for a range of frequencies. But you've got to get fairly high up before you get there. For frequencies lower than about 100 hertz, this is not doing a very good job of propagating the sound. The frequency response is measured by having a microphone at a fixed distance from the speaker in an anechoic chamber. And you can see that this one is actually perhaps the poorest of the-- it is the poorest of the speakers, in that it does very poorly with the low-frequency sound. So for this particular one, you would hope that the spectral content of what you're trying to send through the channel lives somewhere in maybe 300 hertz to 10 kilohertz if you want to get it across the channel-- from the speaker-- with high fidelity. But if you've got a low-frequency signal that you're trying to send, and you use this speaker, well, you're going to be out of luck. It's not going to propagate it very well. So thinking in terms of frequency response and spectral content is really key to making sense of a lot of this. All right, let's get a little more practice with this. And I just want to show you that once you've learned how to deal with frequency response, there's not new stuff that you're going to do to deal with spectral content. It's just a change of perspective. So let's see. If I asked you for a signal that had its frequency content uniformly distributed in some finite range, can someone tell me what that signal is going to look like? I'm asking you for a signal whose spectral content is uniform in some range minus omega c to plus omega c. Have you met such a signal before? Anyone?
STUDENT: [INAUDIBLE] PROFESSOR 1: Sorry? STUDENT: [INAUDIBLE] PROFESSOR 1: It was the unit-- we've seen the same kind of thing with the unit sample response of the low pass filter, right? So if this was a frequency response, then the associated unit sample response would be the signal we're talking about. So remember what that was called? We called it a sinc function-- sinc function in time. So if the spectral content is this in frequency, then the signal that you're talking about is going to be a sinc function. Now you can actually work that out. You don't have to take my word for it. You want flat spectral content in the range minus omega c to plus omega c. So the signal that you're going to get as a result you can extract from this computation. And this is exactly the same function that we saw last time. So the DTFT does what your eye may not do very well. If I had just given you the signal and asked you to take a guess as to what the spectral content of that is, you're not very likely to have ended up deducing that the spectral content is flat in some range and 0 outside of that. So the DTFT is valuable in actually doing this analysis for you. So there is a signal that has flat spectral content. More examples of a similar type-- and again, we first encountered these in the context of unit sample responses and frequency responses. But now, I just want to change perspective and think in terms of time domain signals and their associated DTFTs. So if we look at the top one there, this is the case we just saw, except I've truncated the sinc function. And so what I get is not the perfectly uniform distribution of frequencies in some interval. There's a little bit of a wiggle to it. But this is essentially the sinc function and its spectral content. Here's another signal whose spectral content is at high frequencies and essentially 0 in the low-frequency range. What does it look like in the time domain? Well, you can actually work it out. And here's what you see-- that this has actually more wiggle to it than the sinc does. Alternate samples seem to take opposite signs, at least for the dominant ones over here, reflecting the high-frequency content of that signal. Here's something that's intermediate. This also has the oscillation in sign, but it's not necessarily in alternate samples. It's a little bit more leisurely. Here's something that has low frequency and high frequency, but not intermediate frequency. So you see a component that's rapid wiggling, but you also see this lower-frequency content in there. So this is what the DTFT does for us. Now there's an issue of how you compute these, because if you look at the formula for the DTFT, you could certainly do analytical things with that expression. And that's the case that-- we've treated cases of that type, where you write down an analytical formula for the DTFT. And then you do things with that, like plotting. But if I gave you some numerical sequence here, there are certain simplifications. For one thing, you really aren't going to expect to compute this at a continuum of values of omega. You're not really going to expect to construct the values from minus pi to pi at every real number omega in that interval, right? That would take you a long time. So what you're likely to be doing is asking for what the DTFT is at some grid of points. So you'll form a little grid. And it's on that grid of points that you want the DTFT. That's the only practical thing you can do.
You're not going to compute it at all omega outside of toy examples like that. So if you had a numerical sequence collected in the lab, for instance, this is what you'd be aiming to do. What's the other thing that's likely to be the case if you've got a numerical sequence collected in the lab? Any thoughts here? It's unlikely that my summation is going to go from minus infinity to infinity, because I'd be waiting a long time to collect that signal, right? So in practice, what we're dealing with are signals of finite duration, typically assumed to be 0 outside of that interval, though you might have reasons in some context for assuming otherwise. We're always going to take finite length signals. So the summation will be over a finite interval. And we're going to want to compute the DTFT on a finite grid of points. And that makes for some simplifications. So let's see here. You've probably heard people talk of the FFT, or the fast Fourier transform. The fast Fourier transform is not a new kind of transform, so the name is a little bit misleading. It's a good way of computing samples of a DTFT. So you don't have to learn a new transform. We're still talking about this object, the DTFT. The FFT is an efficient way of computing the DTFT on a grid of points, given a signal of finite duration. Now, I've got a lot on this slide, and I hope all of it is right. But let me talk you through the basic idea, here. So we're going to compute the DTFT on a finite grid of points. So that's the omega k's that I've shown you over there. We've only got a finite duration signal. Let me say that it exists only from 0 to p minus 1. So the signal is 0 outside of that interval. And therefore, all the other terms drop out of this. So there's nothing new in this formula. This is just acknowledging that I only want to compute the DTFT at a grid of points, and I only have a finite duration signal. Now the interesting thing is that if your signal is 0 outside of this interval-- that is, if xn is known to be non-zero only on the interval, let's say, 0 to p minus 1-- then your signal is completely specified by p values. Then you would hope that just having p samples of the DTFT will allow you to go the other way. We know for sure that if I gave you the entire DTFT, you could go the other way, because we have this expression. If I gave you the entire DTFT, you would just plug it into here. And you'd get the time domain signal. What's interesting, though, as it turns out-- and you might expect this-- is that since your signal takes non-zero values only at p points, you only need p samples of the DTFT to get an exact reconstruction. And here is the formula. And the derivation is not hard. I've omitted it here. It's the same kind of idea that we used in the full case. But you can actually-- using the values of the DTFT at these grid points, you can reconstruct the signal x sub n. So with these simplifications, you actually have a simple pair that gets you through the numerics. If you followed these formulas exactly as they're written, you'd end up doing work on the order of p squared, because you see each of these summations involves taking p products. And then you've got to sum them. But you've got to do it at p different frequencies. And the same thing on the other side-- you've got to do p products, but you've got to do it p times. So it's order p squared computation.
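Written out in code, the sampled pair looks like the sketch below: it implements the order-p-squared sums literally, checks the exact reconstruction from just p samples, and notes that np.fft.fft computes the same p samples the fast way.

    import numpy as np

    rng = np.random.default_rng(7)

    p = 16
    x = rng.normal(size=p)                     # nonzero only on 0..p-1
    n = np.arange(p)
    omega_k = 2 * np.pi * np.arange(p) / p     # grid of p frequencies

    # Forward: p samples of the DTFT (order p^2 as written).
    X = np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in omega_k])

    # Inverse: exact reconstruction from just those p samples.
    x_rec = np.array([np.mean(X * np.exp(1j * omega_k * m))
                      for m in range(p)]).real
    assert np.allclose(x, x_rec)

    # The FFT computes the very same samples, in order p log p:
    assert np.allclose(X, np.fft.fft(x))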
The fast Fourier transform is actually a clever way of using the symmetries associated with these exponentials to group the computations and make it much faster. And you can actually reduce it to order p log p. So it's a huge simplification. I've got some illustrative numbers down there. So the FFT actually is a major reason for advances in numerical computations, including signal processing of various kinds-- the fact that you can get this reduction from p squared to p log p. All right, I don't think I need to say much about the grid of points. But let's move on to thinking about spectral content of signals going through channels. All right, so we'll get closer to signals of the type that we're interested in, which are these signals, in this case for on-off keying, that are signaling 1's and 0's and that we're trying to get across a channel-- for instance, the audio channel. So this might be a typical finite length sequence. In this particular case, we had chosen 7 samples per bit. That's why the shortest interval you see has 7 non-zero samples, there. Here's the spectral content of the signal. And actually, I've taken this figure from an earlier version of the course, where we talked about the discrete time Fourier series, not the discrete time Fourier transform. The discrete time Fourier series turns out to be something very similar to the formulas I showed you for the FFT. So the discrete time Fourier series, apart from a scale factor, is essentially a story built around this relationship. And that's developed in some detail in Section 13.2, but we're actually bypassing it this term to try and keep the story simpler. So when you see these plots, you'll see a label, magnitude of a of k-- that's a symbol associated with the discrete time Fourier series. These are actually Fourier coefficients associated with the periodic replication of this signal outside. So it's a discrete time version of the Fourier series you may have seen in 18.03. All you have to do when you see a plot like this is think of it as a scaled version of the DTFT samples. So we're just talking about samples of the DTFT taken at a grid of points. The scale factor may be off by a factor of p-- the length of the signal. But the shape is entirely told to you here. So think of this as samples of a DTFT. We're going from minus pi to plus pi. What's down at the bottom here is imagining that you've sent this signal over a channel that could absorb the entire spectral content of the signal. So suppose you had a channel whose bandwidth-- suppose I had a low pass channel whose bandwidth has got some cutoff. This is low pass. Suppose this bandwidth could absorb the entire spectral content of the signal. In other words, what I mean is that all these DTFT numbers that are significant actually fit in under this. So suppose your channel was such that it didn't attenuate the DTFT coefficients. So the spectral content is unmodified when it gets through the channel. And so if you resynthesize the signal using this formula at the receiving end, you'd get back the same thing again, because the channel hasn't induced any distortion. Let me go past this and actually show you what happens when you start to distort what goes across. So here, what we're doing in this succession of experiments is sending that same signal through a channel with successively smaller bandwidth. So in the first case, everything goes through. There's no distortion. You get the same thing back again.
In this case, the channel actually has a cutoff that ends up zeroing out all spectral content outside of some frequency range. So it's a low pass channel whose bandwidth is not enough to take the spectral content of what you're feeding across. So what do you expect to happen? Well, the higher-frequency components of the signal have been zeroed out. So what should happen? You expect the signal to be more rounded, because it can't make these sharp transitions. It takes high-frequency content to make sharp transitions. So what happens when you trim the spectral content, by sending it through a channel that's not wide enough to contain all of the signal, is that you get a more rounded signal at the other end. So you sent this. This is what you're receiving. This is the distortion that the channel has imposed on your signal. And you can imagine if you tried to find a place to sample this, you might run into some trouble. If you go even more extreme, here is an even narrower channel. What comes out is even more rounded than what we had there, because you've taken away more high-frequency components. The signal just can't wiggle that fast, so it takes its leisurely time going through its paces here. And you can imagine that you can be thrown off when you try and take samples. This is actually even more evident on the eye diagram, here. So these are eye diagrams-- again, the same kind of thing. As you successively transmit fewer and fewer-- let's say less and less of the high-frequency content of the signal, what gets picked up-- what gets received is a more rounded version of what was sent in. And the corresponding eye diagrams that you construct-- well, at a certain point, I guess somewhere around here, you'd be a little nervous about trying to find a place to threshold and decide on what signal you have. So this is not a noise issue. This is a distortion issue. It's distortion induced by the channel. And it can all be understood in terms of what the channel is doing to the spectral content of the input. I think we'll continue next time to get more insight into this and start on the topic of modulation.
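The rounding effect can be reproduced in a few lines. Here is a sketch that pushes an on-off keying waveform through an idealized low-pass "channel" by zeroing its high-frequency content; the bit pattern, samples-per-bit, and cutoff are illustrative.

    import numpy as np

    rng = np.random.default_rng(8)

    bits = rng.integers(0, 2, size=32)
    x = np.repeat(bits, 7).astype(float)     # on-off keying, 7 samples per bit

    X = np.fft.fft(x)
    f = np.fft.fftfreq(x.size)               # frequencies in cycles per sample
    cutoff = 0.03                            # narrower than the signal needs
    X[np.abs(f) > cutoff] = 0                # the "channel" trims the spectrum

    y = np.fft.ifft(X).real                  # received: visibly rounded edges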
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
5_Error_correction_syndrome_decoding.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. GEORGE VERGHESE: OK, let's continue. So we're going to continue with linear codes and talk today about error correction. So let me just remind you, we're thinking of linear codes very concretely as being generated through a process like this. We put this up on the board several times. You've got the data bits and then the parity bits being generated by the data bits multiplying into a so-called generator matrix. You've seen this in lecture and in recitation as well. And we've considered different ways to think of this matrix. One way is to think of it as made up of a bunch of rows, and what you're doing is taking linear combinations of these rows to generate a code word. So the dimensions here-- this is going to be n. So when you take a linear combination of these, you're generating a word that's n bits long. But the underlying degrees of freedom only correspond to k bits, because you're just doing a weighted combination of k of these. OK? Now, we talk of these as though they're vectors, you're combining them, taking linear combinations, and so on. And I just wanted to say a word about in what sense this is a vector. So this is an array of n bits. So we're talking about something-- I'll call it v, let's say-- which is an array of n bits. And the question is, in what sense is that a vector? In what sense does it live in a vector space? So when we say vector space, we're usually thinking of arrays of n elements with real numbers in them, the kind that you use in physics, where you take linear combinations of them with real numbers and you get new vectors. This is the same kind of thing, except it's working over not the real field, but as we've seen, GF(2). So this is a vector space over GF(2). It's a funny vector space, again, because it has a finite number of elements. The vectors that we're used to thinking of-- Euclidean vector space has an infinity of elements, because you can have an array of n components, but each component could be any real number. So any point in 3D space would be a vector in R3. So this is a vector space over GF(2), and it only has a finite number of elements-- only 2 to the n possible vectors. It's a finite set of vectors, so it's strange that way too. So in what sense is that a vector space? Well, it turns out that there are pretty abstract things that you can refer to as vectors, provided they satisfy certain axioms. So what you want to be able to do is define a sum of these objects. You need to have a set of scalars and define a scalar-times-vector multiplication. And then you need a 0 vector, a vector that, when you add to another vector, gives you the same vector back again. You need certain distributivity properties. So if you take a scalar times the sum of two vectors, you get things like that. So you can list a bunch of these properties. I'm not trying to teach you what the axioms are that define a vector space. But there's a set of axioms, and you'll recognize very quickly that Euclidean space satisfies those axioms. But the point is there are other objects that satisfy the same axioms, and you can work with them as vectors-- so notions of independence of vectors, a basis in terms of which you write other vectors-- all of these.
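As a concrete illustration of working over GF(2), here's a minimal sketch, with made-up example vectors, of the vector addition and the zero-vector axiom just mentioned.

```python
# Vectors over GF(2): arrays of bits, "scalars" are just 0 and 1, and
# addition is componentwise modulo 2 (i.e., XOR).
import numpy as np

u = np.array([1, 0, 1, 1, 0])
v = np.array([0, 1, 1, 0, 1])

s = (u + v) % 2                    # vector addition over GF(2)
zero = np.zeros(5, dtype=int)

assert np.array_equal((u + zero) % 2, u)   # adding the zero vector changes nothing
assert np.array_equal((u + u) % 2, zero)   # every vector is its own additive inverse
print(s)                                   # [1 1 0 1 1]
```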
Now, I'm not assuming you've done a linear algebra course. I'm assuming you've picked up some of this in the course of doing physics, and so on. I'm just trying to talk intuitively here. One thing we don't have here is a notion of an inner product, or a dot product, or a scalar product. So if you had two n component vectors in Euclidean space-- you're probably used to this from physics-- you'll take inner products defined in this fashion. Well, we can certainly do this kind of computation with the elements of a vector here, but the resulting object doesn't have the properties of an inner product. For instance, you can take the-- if you take the inner product of two non-zero vectors in real vector space, you'll never get-- well, you can get the inner product to be 0 under very special conditions. There's a notion of orthogonality. It turns out that doesn't actually work quite the same way here over this space. So what we do is we set aside orthogonality. We'll talk about linear combinations of vectors. We'll talk about a set of basis vectors. So a set of basis vectors would be a set of vectors that you can take linear combinations of to get other vectors in the space-- and a minimal such set. So we'll be using a bit of the language of vector spaces. You might have some notions that come from what you've done with physics. And that's all really that we want to depend on. All right, so back to this-- what we have is these arrays of n bits. We think of them as vectors in some space. The dimension of the space is the number of vectors that you need in order to generate other vectors by linear combination. So the question is, can I generate some vector by taking alpha 1 v1 plus alpha 2 v2 plus alpha 3 v3? So I'd like to be able to generate a vector in the space by taking a linear combination of other vectors. So if you ask what's the minimum number of such vectors you need here in order to be able to generate any vector by taking a linear combination, that's the dimension of the space. So in that sense, it turns out that these arrays live in an n-dimensional space. But they don't span all of n-dimensional space, because you're just-- you've just got k of them here. It turns out that what you get by taking linear combinations of these is a k-dimensional subspace of an n-dimensional space. So in some sense, when you define a code, what you're doing is you're saying, I have this n dimensional space that my words can live in, but I'm going to restrict myself to words that live in a k-dimensional subspace so that, if a vector pops out of that subspace, I recognize it as being an error. So that's the general idea. All of this can be done more carefully using the notion of vector spaces. I just wanted to give you a rough idea of that. This is one way of thinking of it. Here was another way of thinking of it, which was column-wise. We think of the generator matrix as being made up of a bunch of columns. And that's useful when you want to think about how a parity bit is defined in terms of the data bits. So here's what you see when you think of this column-wise. So let's take P1 here. Actually, let me specialize this further. We've already said that, because of the form in which we set up our code words, this is in what's called systematic form. We've got the data bits sitting there, and then we add in the parity bits. Because of that, we've said that there's an identity matrix here, with 1's all the way down.
And then we've got some other matrix here, which is something we'll denote by A. OK, so when we do this multiplication in the first k positions, we just pick up D1 to Dk. In the next position over, I get the expression for P1. So P1 is going to be D1 times the first entry there plus D2 times the second entry, and so on. So here's what I get. Let me call this A11 in that first column. Here's A21, all the way down to Ak1. So these are just numbering the entries down that first column. So I'm taking a combination like that, a linear combination of the data bits to get that first parity bit. So the j-th parity bit is found by going over to the j-th column. The entries here are A1j all the way up to Akj. So the way matrix multiplication works-- if I'm looking for the j-th entry-- the j-th parity bit here, I take this and do the dot product kind of expression with the j-th column, so this is what I get. So this is a typical parity relation. And it goes the other way too. If you had this expression, and not the matrix, you can just take those numbers and translate them back in. And these numbers are just 0, 1 in our-- in the case of a binary field that we're working over. That is just 0, 1. So you either have the data bit there or you don't. All additions are modulo 2 additions, of course. Actually, let me call this the parity-- it's the parity definition. I may have had another term for it in my slides. Let's see here-- this parity equation. It just defines the parity bit. Here's what I think of as a parity relation. What is this sum? What does that sum work out to be? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Sorry? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: 0-- because I'm adding Pj to itself in GF(2). That gives me 0. And this is what I think of as a parity relation. So a parity equation defines my parity bits, but I get immediately from that a parity relation that relates my parity bit to my data bits. Turns out this is important for the way we're going to talk about error correction. OK, so let me step off the dime here. And this is a particular example. I just wanted to set up the general notation before we got back to this. We've looked at this before-- 9, 4, 4 rectangular code. So what might that be? How many data bits? 9, 4, 4-- 4 data bits, right? So how would I set it up in 9, 4, 4? I'd have D1, D2, D3, D4. And then, depending on how I number this, P1, P2, P3, P4, P5 would be one way to number it. I don't know if that corresponds to what's up here. Can we check? So what's P1? P1 is going to be D1 times 1 plus D2 times 1, and that's it. So it's D1 plus D2. So P1 is indeed that element there on the board. And so you can check each of these entries. Let's see. P5-- that's the last entry up there. That's going to be this row inner producted or dot producted with the sequence of all 1's there, so it's going to be D1 plus D2 plus D3 plus D4, which is indeed how that overall parity bit is defined. So again, you see in the generator matrix, you have the identity matrix there, and then you have this matrix that we're referring to as A. Now, the notation is a little bit different between chapters 5 and 6, by the way. I've tried to stick with the notation I had in lecture last time, the chapter 5 notation, which uses capital D for the data bit vector and capital C for the code word. So you'll see slightly different notation in chapter 6, but I think you'll navigate fine. One other term here-- we say that these code words live in the row space of G.
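To make the generator matrix concrete, here's a sketch of G = [I | A] for the 9, 4, 4 rectangular code. P1 = D1 + D2 and P5 = D1 + D2 + D3 + D4 are the relations read off the board above; the assignment of the remaining parities below (the second row parity and the two column parities) is my assumption about the rest of A.

```python
# Generator matrix G = [I | A] for a (9, 4, 4) rectangular code, in
# systematic form: data bits first, then parity bits.
import numpy as np

I4 = np.eye(4, dtype=int)
A = np.array([[1, 0, 1, 0, 1],    # D1 enters P1, P3, P5
              [1, 0, 0, 1, 1],    # D2 enters P1, P4, P5
              [0, 1, 1, 0, 1],    # D3 enters P2, P3, P5
              [0, 1, 0, 1, 1]])   # D4 enters P2, P4, P5
G = np.hstack([I4, A])

d = np.array([1, 0, 1, 1])        # an arbitrary data vector
c = d @ G % 2                     # code word = linear combination of rows of G
print(c)                          # the first 4 entries are just d itself
# Check the first parity equation from the board: P1 = D1 + D2 (mod 2).
assert c[4] == (d[0] + d[1]) % 2
```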
So the space that we generate, the space of vectors, the subspace of the big space that we generate by taking linear combinations of these rows is referred to as the row space of G. So we define the code by defining a G. If the code's going to be in systematic form, we have the identity here, and then some matrix. This is all for linear codes. And then the code words live in the row space of this matrix. In other words, they're obtained by taking linear combinations of the rows. Here's what I already have on the board. And it's just to say that the matrix A that's sitting out here-- this piece-- is obtained directly from the parity relations. OK, so let's think about this. Can two columns of the matrix A be the same? What happens if two columns of A are the same? Let's say these two columns are identical to each other. What is that telling us about the code that we have? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Yeah. And so basically, one of those parity bits is not buying you anything, right? Yeah. OK, so if you did discover two columns were identical, then one of those parity bits is not checking a different linear combination-- it's checking the same linear combination, and so it's not buying you anything. What about two rows? Can two rows of the matrix be identical? So let's actually think of them. Erase that-- can I have two identical rows here in A? And if I did, what would it mean? OK, let's say I have two identical rows. What does it mean? What does it signify? Someone? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Could you speak up? Sorry. My hearing's not good. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: So there'll be two data bits that are entering the same way in every parity relation. If two rows are the same here, then there are two data bits here that are entering every parity relationship in exactly the same combination. And so you're not going to be able to distinguish between an error that happens in one of them and the-- and an error that happens in another one. So this is a problem. All right, so there are certain conditions that the matrix A has to satisfy. All right, here's another important matrix. You may have already seen it in reading. You may not have seen it yet in recitation. And it's a matrix that we call H. And let's just think about what it's doing. What I'm trying to do is basically summarize this set of equations, the parity relations, in matrix form. So let's take a parity relation that we had in this particular case. In fact, let's go back to a specific one. Let's take the first parity relationship that we had over there. We said that P1 for this code was D1 plus D2, right? That's the equation for the parity. The parity relationship is this. How would I express that in matrix form, as part of a matrix equation? Well, that's what we're starting to assemble here. So let me show you what that is. Let's look at the top row out here. What's the top row telling us? It says 1 times D1 plus 1 times D2 plus 1 times P1 equals 0. That's all that enters. So that first row is capturing the first parity relationship. And you can go down here. Go to the second row. This is saying D3 plus D4 plus P2 is equal to 0. That's indeed the parity relationship associated with the second row in the rectangular code. So all that this is doing is listing all the parity relationships. So how many of these do you have? Well, you have as many relationships as you have parity bits. So this is an n minus k by n matrix, and it's just listing the parity relations.
We call this matrix H, and interestingly-- let's see-- there's an identity matrix sitting here, and the reason is that each of these equations involves only one parity bit. There's only a single Pi that's involved in each parity relation, and so there should be only one of these columns picked as a 1 when you get to this segment that multiplies the parity bits. So there's an identity matrix sitting there, and then there's the rest of it here. So here's the identity matrix and here's the rest of it. And not surprisingly, the rest of it-- well, it comes from the same set of coefficients, and so it relates to the A matrix. And if you look at it carefully, it turns out, well, it is the A matrix, but turned on its side. A superscript T means A transposed-- i.e., rows become columns. We're taking the A matrix, and in some sense, turning it on its side. So that makes sense, because what defined the first parity relationship? Well, we found it in the columns of A here before. That's what defined the parity relationships. And now we're writing it out in this row form, so you've got to transpose things. You've got to take what was a column in A and make it a row. So what you're seeing out here at the top-- that was the first column of A. So now it's the first row of A transpose. This was the second column of A. Now it becomes the second row of A transpose, and so on. So that's what that relationship is. So I can write the parity relationships in the form H times C equals a whole bunch of 0's. This is n minus k parity relationships. These are n minus k 0's here. But I could rewrite the same thing turned on its side, and that's what you're seeing over here. So if H is the matrix there, what is H transpose? Let's see. I'm going to change-- interchange rows and columns. So what's going to happen is that the first row of H is going to become the first column of H transpose. And so if you imagine how this gets turned on its side, here's what H transpose looks like. H transpose looks like that. So when you transpose a matrix whose entries are blocks, you end up flipping the matrix around, but also transposing each of the blocks. So that's A transpose. Oh, actually, sorry. I wrote this one wrong, didn't I? What is our vector C? C is this vector. It's a code word. When I set this up in matrix form, I got a column vector, so I've got to transpose this as well. That's what I was missing. That's C transpose that I'm looking at. So when I write down the parity relationships, I've got this, but I could also write it in the form-- when you transpose a product of things, what do you get? Well, it turns out, if I transpose the product of two things, I get the product of the transposes, but in the reverse order. So these are just different ways of arranging the equations. So I could have written it in this form or I could take the transpose of both sides, and I get something that looks slightly different. So here, this would be a row of 0's. So this is just to get you comfortable with the matrix operations. You'll see lots of this in chapter 6. You want to get a little comfortable with that. Here's a question for you. Here's my H matrix. Here's my code words. If I have a code word of minimum weight, how many 1's would it have in it? In this particular case, this is a 9, 4, 4 code. If I had a code word of minimum weight, how many 1's would it have in it? I heard something, but I didn't hear where it came from, or didn't hear it very clearly. Yeah? AUDIENCE: 4-- GEORGE VERGHESE: 4? Yeah, OK. [INAUDIBLE] heard there. OK.
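Continuing the earlier sketch, here is H = [A transpose | I] built from the same assumed A, with a check that H annihilates every code word-- H times C transpose is the zero vector, over GF(2), for all 2 to the k code words.

```python
# Parity check matrix H = [A^T | I], built from the same assumed A as
# in the generator-matrix sketch, and verified against all code words.
import itertools
import numpy as np

A = np.array([[1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])
H = np.hstack([A.T, np.eye(5, dtype=int)])   # n - k = 5 parity relations

for bits in itertools.product([0, 1], repeat=4):   # all 2^4 data vectors
    c = np.array(bits) @ G % 2
    assert not np.any(H @ c % 2)    # H c^T is the all-zero vector
print("All 16 code words satisfy H c^T = 0.")
```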
So the code word of minimum weight would have weight 4, because this is a code of distance 4. It's a linear code. The minimum Hamming distance is 4. It's a linear code, so we know, for a linear code, the minimum weight you'll find among all code words is 4. OK, so we've got a vector here that has four 1's in it, and everything else is 0. So when I take this computation, what is it that I'm actually doing? If I take the matrix H and I multiply it by a vector that has four 1's in it, and everything else is 0, what is it that I'm actually doing to that matrix? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Of the rows? Am I taking a combination of the rows? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: When I do a multiplication in this fashion with the vector on the right-hand side, I end up doing the opposite of what I was doing here. When I have the vector on the left and I multiply, I'm taking the combinations of the rows. When I have matrix times vector, I'm taking a combination of the columns. So if I had a vector here with four 1's in it, and everything else 0, I'd be taking-- I'd be picking out four columns of this to add, and the result would be a 0 vector. Yeah? AUDIENCE: [INAUDIBLE] all 0 [INAUDIBLE] GEORGE VERGHESE: All 0's here? AUDIENCE: All 0's for your code-- GEORGE VERGHESE: Were you asking, why all 0's for the code? AUDIENCE: Well, why you can't have all 0's? GEORGE VERGHESE: You can have all 0. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Sorry. The minimum weight non-zero vector is the Hamming distance. Sorry. I may have dropped that word. For a linear code, we know that the minimum Hamming distance is the minimum weight non-zero vector. That's what I meant to say. I may have neglected to say that. Thanks for catching that. So we have a non-zero vector here. It's got four 1's in it. What that tells us is that there are four columns here that we can add together and get the 0 vector. Do you think it's going to be possible to find three columns here that we could add together and get the 0 vector? I'm not asking you to actually do the computation in your head, but based on the reasoning we just had, if you found three columns that could be added together to give you the 0 vector, that would mean you'd have a vector here, a code word with-- or any vector here with three 1's in it, everything else 0, such that this product was 0. So it would be a valid code word. Is that possible to take a vector here with just three 1's in it, everything else 0, and have a valid code word? No, right? We said the minimum Hamming distance is 4, the minimum weight code word here is-- has got weight 4. You'll not find a valid code word of weight 3. So you can actually look at this H matrix and figure out what the minimum Hamming distance is. It's basically the minimum number of columns that you can add to get the 0 vector. Now, does that tell you, by the way, why you can't, in this instance, have-- well, if you discovered that you could add two columns together and get the 0 vector, what would that tell you? If two columns of A transpose were identical or two rows of A were identical, that would tell you that you can add two columns and get the 0 vector. That would mean that the Hamming distance is 2, not 4, right? A Hamming distance 2 code is not worth much. It's no good for error correction. That comes back to what I said earlier in these questions.
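The claim just made-- that the minimum Hamming distance is the minimum number of columns of H you can add to get the 0 vector-- can be checked by brute force. A sketch, using the same assumed matrices as before:

```python
# Find the minimum number of columns of H that sum to zero over GF(2);
# for a linear code, that is the minimum Hamming distance.
import itertools
import numpy as np

A = np.array([[1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1]])
H = np.hstack([A.T, np.eye(5, dtype=int)])

def min_distance(H):
    n = H.shape[1]
    for m in range(1, n + 1):                     # try 1 column, then 2, ...
        for cols in itertools.combinations(range(n), m):
            if not np.any(H[:, list(cols)].sum(axis=1) % 2):
                return m                          # m columns sum to zero
    return None

print(min_distance(H))   # prints 4 for this (9, 4, 4) code
```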
If A had two rows the same, well, there's two data bits that are always entering in the same combination, so you're not protecting against individual errors there. And so it's exactly that issue that we're seeing. The Hamming distance ends up being 2, if you have two rows of A that are identical. OK, so there's a lot that can be gleaned from the generator matrix and the parity check matrix. So this is what I just went through-- that, if you have the H matrix, you can get the minimum distance D by looking to see what's the minimum number of columns you can add to get the 0 vector. All right, now, how does decoding work? We've gone through this effort to generate a code word, and then, at the receiving end, we get some word. This is some received word, and it's going to be a code word plus possibly an error. And now we want to figure out, is the thing we received already a code word? Or if it's not, what code word can I correct it to? And we're going to assume we have just single-bit errors. So one way to do it is just an exhaustive search, which is you've got this received code word, you know that it's going to be one of 2 to the k-- or no more than distance 1 from one of the 2 to the k code words that you have in your code set. So you can compare against those 2 to the k code words, and whichever one it's within distance 1 of, that's the one that you're going to announce. So that'd be one way to do it. The thing is that that's not exploiting anything nice about the structure of a linear code, so what I want to talk to you about now is a way to actually capture this error in the case of a linear code. So this builds on the relationships that we've been developing some intuition for here. So here's what happens. You get a received vector of n bits, which is a valid code word plus a vector E that has a single 1 in it, and everything else 0, or is completely 0. So if you were receiving the code word correctly, you've had no errors, and this is what you get. But if you've had a single bit error, then E is a vector with a single 1 in it. It's n bits long, has a single 1 in it, and that gets added to the code word to give you what you receive. So here's what we're going to do. We're going to exploit the relationships that we had up here. We know that H times a valid code word equals 0. Or I can write it the other way around if I want to do row multiplications-- sorry-- C times H transpose equals 0. So I'm going to take the received vector and do that multiplication with it. And in this case, I've chosen to write it as H times the transpose. So if the received vector was a valid code word, I'm going to get 0. If the received vector was not a valid code word, I'll get something else. And that's what we refer to as a syndrome vector. OK, so let's expand this out. We've got R is C plus E-- so C plus E transpose. That's the multiplication we're going to do. This matrix multiplication will be the sum of the individual matrix multiplications, so it's going to be H times C plus H-- sorry-- H times C transpose plus H times E transpose. So let's write that out. So we've got H times R transpose is going to be H times C transpose plus H times E transpose. We know this is 0. It's a 0 vector. n minus k here, the number of parity relations-- that's how long that 0 vector is. Yeah. And then we've got some other vector, which we're referring to as a syndrome vector. OK? So let's see. What does H E transpose look like? E transpose is going to have just a single 1 in it somewhere. And here's H.
Let's see. A transpose stacked up next to the identity-- that's what H looked like. So when we compute-- let me write this better-- I didn't write that well-- this is H. And what I'm writing here is H E transpose. So what is H E transpose doing? When I multiply a matrix like this by a vector that has just a single 1 in it, what am I doing? What do I end up doing? Sorry. I heard a voice from somewhere here. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Picking up one column, right? I'm just selecting out a column of H. So each error that you can get will give you a syndrome that corresponds to picking out one column of H. And a column of H is associated with data bits here or parity bits here. So really, this is all that you have to do to do your error correction. You can pre-compute, or store basically the columns of H in your database-- compute H times the received vector to get the syndrome. That's really H times the error, which is giving you the syndrome. That's just a single column of H. So the syndromes that you can possibly get are individual columns of H. So you know what H is. You've got it stored. Compute the syndrome, and see which column of H it corresponds to. That's the bit that has the error. And actually, the only cases you're interested in are where you are going to correct the data bit, so this is really all the part that you really have to focus on. So you compute the syndromes. You compare against the columns of H, which are your syndrome vectors, and then you're done. I think I see the same thing over here. Let's just look at it concretely-- again, for the same code, the rectangular code with all the parity bits there. So this is how you generated a code word. Sorry. OK, let's take the data bit-- the data vector being all 1's. This is the code word that goes with it. It happens with this particular code that all the parity bits then are 0. What you receive ends up being this, because one of the data bits ends up getting corrupted. When you take that received vector and pre-multiply it by H, here's the resulting syndrome vector that you get. And what error does it correspond to? Well, actually, if you look in the columns of H, you'll see that what you've pulled out is the second column, so that means the second data bit is in error. And that's the change that you make. So it really is that simple. You take the received word, pre-multiply it by the parity check matrix H, look at the syndrome vector, and see which of the columns of H that corresponds to. That's the bit you're going to flip. All right? So now you're actually only dealing with this many vectors. It's a number of vectors equal to-- how many is that? You've got to do the multiplication, but then you just have to compare with the vectors in those columns. So it's a much simpler task, computationally. OK, I think we've said all this. And so let me just wind up on linear block codes with a quick summary, and then we'll go on to talk about some extensions here. We've seen all this. We know what the rate of a linear code is-- k over n-- how many errors we can correct. And we've seen all this-- what a parity bit does, what a repetition code does-- it's called a replication code in the notes, but the more commonly used term is repetition code. We've looked at Hamming codes and the rectangular code as well.
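Here is the whole syndrome-decoding recipe just described, as a sketch, again with the assumed 9, 4, 4 matrices from the earlier sketches: compute the syndrome, match it against a column of H, and flip that bit. The all-1's data example is the one worked in the lecture.

```python
# Syndrome decoding for single-bit errors: s = H r^T; a zero syndrome
# means a valid code word, otherwise s equals one column of H, and
# that column's index is the bit to flip.
import numpy as np

A = np.array([[1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])
H = np.hstack([A.T, np.eye(5, dtype=int)])

def correct(r):
    s = H @ r % 2                      # the syndrome vector
    if not s.any():
        return r                       # already a valid code word
    for j in range(H.shape[1]):        # compare against stored columns of H
        if np.array_equal(H[:, j], s):
            r = r.copy()
            r[j] ^= 1                  # flip the single erroneous bit
            return r
    raise ValueError("more than one bit in error")

c = np.array([1, 1, 1, 1]) @ G % 2     # all-1's data; all parity bits come out 0
r = c.copy()
r[1] ^= 1                              # corrupt the second data bit
assert np.array_equal(correct(r), c)   # the decoder flips it back
```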
And so these are the ones that you want to have in mind as particular examples to work with when you're trying to come up with examples that will either prove or disprove-- that will illustrate a conjecture or disprove a conjecture, for instance. And you'll see many problems on past quizzes that are of that type. And then what we did today was looking at syndrome decoding. All right, so this was all focused on single error correction in linear codes. But the point is that that may or may not be the situation that you're dealing with. We actually said that, to get better error protection while maintaining high data rates, you probably want to work with longer and longer strings of data. Well, if you work with longer strings of data, you're going to get more bits in error. So you may not be able to limit yourself to thinking about single-bit error correction. We have talked a bit-- and you've done this in recitation too, I imagine-- well, you've probably done more in recitation than in lecture-- about independent corruption of multiple bits. So let me say a few words about that. Let's think of a systematic code, for instance, still k bits here, and then parity bits here. But what if you could have up to t errors, not just a single error-- so if you wanted to protect against t errors? So in some sense, you want your n minus k bits here to signal all those possibilities, so you need the number of possibilities that can be signaled by n minus k parity bits to be greater than or equal to the number of possible conditions that correspond to having up to t errors. And we've said a little bit about this, but you can have now either no error at all, which is one condition; or an error in one of these bits, which is n separate conditions; or you could have two bits out of here being in error. So how many conditions is that? n choose 2 and so on-- I'll leave you to figure out where you end up on that. So I just wanted to say-- you've seen this in recitation, but I haven't mentioned it in lecture. I don't want you to say later that, oh, I didn't know it was something we had to know for a quiz. We do expect you to know what n choose m means. So n choose m is the number of ways of picking m things from n things, and we assume you know how that's done. So you've got n objects. You want to pick m things from there. So you can pick the first one in n ways, the second one in n minus 1 ways, and keep on going until you get to n minus m plus 1, which altogether is n factorial over n minus m factorial. But when you did that picking, you were paying attention to the order in which you collected the things, but if the ordering doesn't matter to you-- if all these objects are interchangeable-- then you've actually overcounted. So you've got m things, but the order in which you pick them doesn't matter, because they're all interchangeable. And so you've overcounted. You've got to divide by the number of ways of rearranging m things, and so that's how you get that expression. So you start off with thinking about how you pick m things, and then make a little correction, and so this is what n choose m is. Another thing that I just threw in for fun, because it's something you might want to carry around in your head-- if you don't have a feel for how n factorial grows with n, well, it actually grows pretty fast. It's actually growing like n to the n. This is a very famous approximation, referred to as Stirling's approximation. So when you get out to large n, the right-hand side here is a very good approximation to n factorial.
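A quick sketch of these counting facts, using only Python's standard library: n choose m via the factorial formula, and Stirling's approximation checked numerically.

```python
# n choose m = n! / (m! (n-m)!), and Stirling's approximation
# n! ~ sqrt(2 pi n) * (n/e)^n for large n.
import math

n, m = 20, 3
assert math.comb(n, m) == (math.factorial(n)
                           // (math.factorial(m) * math.factorial(n - m)))

for n in (10, 50, 100):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(n, math.factorial(n) / stirling)   # ratio tends to 1 as n grows
```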
And you see that it's sort of like n to the n, which makes sense, because you seem to be multiplying n by itself n times. Except you're multiplying by a little bit less than n each time, so the e over there ends up compensating for it, it turns out. And then there's an extra n to the 1/2 out there. OK, so we'll assume you know how to do the combinatorics. And now, what this is saying is, what's the probability of getting m bits in error in an n-bit word? Well, if you've got m bits in error, that's because those m bits flipped independently, each with probability p. The remaining n minus m did not flip, so that's the probability of getting one such configuration. And then you count all the possible configurations. So that top expression is the probability of getting m bits in error out of n, and that's something that we want you to be comfortable with. OK, now, just to wind up here, I want to go back to this last bullet, which is that, in many situations, the errors don't occur independently in the different bits. If you think of a CD with a scratch or a thumb print or something on it, that's local, and so if you get one bit corrupted, that increases the chances that the next bit is corrupted as well. So errors can occur in bursts. If you're trying to make a phone call from a car, and you're suddenly shielded from an antenna-- a nearby antenna, then you're going to lose a whole bunch of bits in sequence. So bits can be in error in clusters, and what we've talked about so far doesn't quite manage that. So here's an idea for how to do that, which is referred to as interleaving. So if we had B different code words that we were going to transmit, and we did it the normal way, we would send out the first one, second one, and so on. This little shading here is supposed to indicate the parity bits going along with the data bits. If we had a burst of errors, we could lose two entire words over there-- nothing to be done. They arrive entirely corrupted, and we wouldn't get them back. The idea of interleaving is to stack up B words that you want to transmit, but transmit the bits out one at a time from each of the B words. So you transmit the first bit from the first word, first bit from the second word, and so on. And so this is the sequence in which you're doing the transmission. Now, if you've got a burst of errors, you're corrupting a more localized set of bits in each of the words, and there's some hope that your error correction then can recover. So this is very often done. I don't think I want to actually walk you through a particular scheme for it, but we'll have it on the slides for you to look through. But basically, it actually turns out to work very well. And that may be all I want to do for today. We'll see you next time. We're going to talk about linear codes next time, but a much more elaborate kind of code called a convolutional code. Thank you.
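As a postscript to this lecture, here is a minimal sketch of the interleaving idea from its closing minutes: stack up B words, send them column by column, and a burst of channel errors then touches at most a bit or two of each word instead of wiping out whole words. The sizes and the burst location below are arbitrary.

```python
# Interleaving: transmit bit 1 of each of B words, then bit 2 of each,
# and so on; a burst of consecutive channel errors is spread thinly
# across all B words at the receiver.
import numpy as np

B, n = 5, 10                                 # 5 words, 10 bits each
words = np.random.randint(0, 2, size=(B, n))

stream = words.T.flatten()                   # column-by-column transmission order
stream[12:17] ^= 1                           # a burst of 5 consecutive bit errors

received = stream.reshape(n, B).T            # de-interleave at the receiver
errors_per_word = (received != words).sum(axis=1)
print(errors_per_word)                       # the burst lands ~1 error per word
```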
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
22_Sliding_window_analysis_Littles_law.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HARI BALAKRISHNAN: So this is actually almost near the end. So this is actually the last lecture on transport protocols. And then on Wednesday, my plan is to talk about how many of the things we have studied in this class apply to the internet. It will be a history lesson about communication networks. And I'll talk in specific terms about two interesting problems. One of them is a problem we'll start on today, which is how you pick these window sizes. And I'll talk about how TCP does this. And it's a pretty amazing result that was only invented in the mid- to late 1980s. And the second thing I want to talk about, when I talk about this history of the internet from, say, 1960 to today, I'll talk about how people can hijack other people's routes and be able to attract traffic that doesn't actually belong to them. So apparently, now there are people who are doing it illegally. But apparently, now some governments are also doing this sort of thing. So it's an interesting thing to understand, how it is that some of the routing protocols we studied are not secure. So I'll do that on Wednesday. And then we'll wrap up next week. So today, the plan is to continue to talk about transport protocols, in particular, about sliding windows. So just to refresh everyone's memory, the problem is that you have a best effort network, where packets could be lost, packets could be reordered, packets could be duplicated, and delays in the network are variable. And what we would like to provide to applications, like, for example, your web browser or your web client or server, is an abstraction where the application just writes data into some layer, and the application on the other side reads data from a layer, and this transport layer deals with providing in-order reliable delivery. So we looked at the first version of that protocol, which is stop and wait. And it had a few nice ideas in it. The first-- all simple ideas-- the first is to use sequence numbers, and then to have acknowledgments, and then to retransmit after a timeout. And I didn't actually talk about how to do adaptive timers and the low pass exponentially weighted moving average filter. But we started that in recitation. And if I have time, I'll come back to that today. But my assumption is that you've already seen how to do that. But then we concluded that the throughput of the stop and wait protocol is not very high. It's sometimes a good idea. For example, to get reliable delivery between this access point here and your computer right now, a stop and wait protocol is perfectly reasonable. We'll understand why later on. But the short answer why is because the round-trip time between this access point here and your laptop is quite small. And because it's a really small round-trip time, you're able to get one packet per round-trip time-- or in [INAUDIBLE] packet losses, it's less than one packet per round-trip time, but roughly about one packet per round-trip time. But the round-trip time is on the order of microseconds. One packet per round-trip time can give you a throughput that's quite large.
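As an aside, here is that throughput arithmetic as a minimal sketch: stop and wait delivers roughly one packet per round-trip time, capped by the link speed. The numbers are illustrative, matching the rough figures used in the discussion that follows.

```python
# Stop-and-wait throughput: about one packet per RTT, so roughly
# packet_size / RTT, but never more than the link speed.
packet_bits = 8000            # a 1,000-byte packet
link_bps = 10e6               # a 10 megabit-per-second link

for rtt in (100e-6, 0.1):     # a LAN-like RTT vs. a 100 ms wide-area RTT
    throughput = min(packet_bits / rtt, link_bps)
    print(f"RTT {rtt * 1e3:g} ms: about {throughput / 1e6:.2f} Mb/s")
# Over the microsecond-scale RTT, one packet per RTT already saturates
# the link; over 100 ms it's only 0.08 Mb/s, which is why you need a
# window of w packets in flight per RTT on long paths.
```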
And therefore, if the link speed is 10 megabits per second, and you're able to send a 1,000-byte packet in, say, 20 microseconds, if you take the ratio, that's probably going to be bigger than the link speed. And therefore, you're going to get on the order of the link speed. And therefore, you're not going to underutilize the link. But now, if the round-trip time were 100 milliseconds, and you were able to send just one packet every 100 milliseconds, it would be slow. And to solve that problem, we looked at this idea of a sliding window. This is just pipelining. It just says, rather than have one packet unacknowledged at any point in time, we're going to have a value w that the sender decides upon. And the semantics of a window are that we're going to have w unacknowledged packets in the system between the sender and the receiver. Now, technically, it's at most w packets. Because from time to time, you might have transients, where you have less than w packets, because you're about to send the next packet. Or if you get toward the end of the file, and you run out of data to send, you're clearly going to have fewer than w packets. So the technical definition of a window, a fixed-size window, says that if the window is w, then the semantics are that we will have no more than w unacknowledged packets in the system. Now, that's not the only possible definition of the window, but that's our current operating definition of the window. So then the rules at the sender are very simple. When you get an acknowledgment from the receiver, as long as it's an acknowledgment for a packet that you have sent, and that packet has not previously been acknowledged, then you now know that packet has been acknowledged. So you remove it from your list of unacknowledged packets, and you send a new packet. The packet you send, the new packet you send, is the smallest sequence number that you haven't sent so far. OK? It's a very simple rule. And separately, there's a calculation of the timeout: an exponential moving average filter that calculates the smoothed estimate of the round-trip time, and a similar calculation that finds the deviation from the mean. And you pick a retransmission timeout that is some number of deviations away from the mean-- for example, the mean plus four times the deviation. If that timer fires, and you haven't received an acknowledgment, you retransmit the packet. It's a very simple idea. So I'm going to show now what happens in some pictures with the sliding window, when you have packet loss. So it's the same picture as the last time, except now we have a packet-- packet 2 is lost. The sender doesn't know that it's lost yet. So packet 1 goes here. Packet 2 is lost. This one was supposed to be packet 3. Then packet 4 is sent, and packet 5 is sent. And the window size in this example is 5. So now, when the first packet gets its acknowledgment, the window slides to the right by 1. And at this point, we send packet 6. And the window is now packets 2 to 6, having sent packet 6. And now, in the meantime, what's going to happen is-- packet 2 doesn't reach, but packet 3 reaches. When packet 3 reaches, 3 gets an acknowledgment. The receiver says that it's received 3. When it receives 3, what's the next packet that's transmitted? The packet that's transmitted is 7. Now, let me ask you a question here. The sender got packet A1 and packet A3-- sorry, acknowledgment A1 and acknowledgment A3.
Now, if the sender were calculating the expected next acknowledgment, it knows that after A1, it should get A2, and it now got A3. So why doesn't it just re-send packet 2 right now? Yeah? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: It could have been delayed. Now, yes, it could have been delayed. But if it were delayed, wouldn't 3, packet 3, have also been delayed? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Why not? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: All packets are delayed. So the question is, what is it about-- the delay is one part of the answer, but what is it specifically about that delay that has caused this to-- I mean, if a packet gets delayed, and packets are sitting in a queue, if the first packet in the queue was delayed, then the remaining packets are also going to get delayed, because they're sitting behind that packet in the queue. Yes, sir? AUDIENCE: Does it depend on the size of the packet? HARI BALAKRISHNAN: Well, so far, let's assume that you have a network where packets are delayed, and delays are variable. And you have a switch, right? And the switch has a queue in it. And let's say that you have-- in this example, you have packet 1 and 2 and 3. But in fact, packet 2 was lost, and you don't know that-- you just see packet 3. That's one case. But in the other case, if packet 2 were not actually lost-- if packet 2 were lost, and you got an acknowledgment for 1 and an acknowledgment for 3, if packet 2 were legitimately lost, then it's certainly correct behavior for the sender, when it receives A3, to retransmit packet 2. So clearly, you're going after a case here, where 2 exists, but wasn't lost. In other words, if the sender were to retransmit packet 3-- oh, sorry, packet 2 when it receives A3, she said that that's wrong, because it could have gotten delayed. But what kind of delay would delay packet 2 but not packet 3? Or what kind of delay equivalently would delay A2 but not A3? If it's sitting behind the same queue, and the queue is serviced in that order, I mean, if this packet was delayed, this packet would also be delayed. And this packet's behind this packet in the queue. So what else could it be? Yeah? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Yeah. So the word I'm looking for is that packets could get reordered in the network. In fact, the reordering could happen even if there were no variable delays, like, no queuing delays in the network. I mean, you could just have a switch-- say you have a switch here. It could be that packet 2 gets sent that way, and packet 3 gets sent this way. Here's a very concrete example of how this would happen, from your previous lab. It could be that the network had a certain set of routes, and packets were going along this path. And then maybe there was a failure before, and a new link showed up. Or the failure healed. And the routing protocol converged to pick this path, going forward. And this new packet 3, that showed up after 2, gets sent along this path. And it could easily be the case that this path has a lot shorter delay to the destination than that path. And therefore, what would happen is that, at the receiver, packet 3 would arrive before packet 2. So in other words, if I had a network where no packets ever got reordered, no acknowledgments got-- no data packets got reordered, and no ACKs got reordered, then, in fact, it would be perfectly good behavior for the sender, at this point, when it observes A3, to go ahead and resend packet 2. Because I'm guaranteeing to you that there's no reordering in the network.
But in general, networks-- packet-switched networks, I mean-- they get a lot of their robustness to failure and resilience to failure because they send packets any which way. Their only job is to get packets to the destination with as high a likelihood as they can, which means packets are allowed to get reordered. And therefore, it's not correct for packet 2 to get retransmitted when you get A3. OK? So let's keep going. So what is the next packet that's going to be sent, when you get A3, in this picture? It's 7. Because the sender's rule is very simple. Have I seen the ACK before? No. Is this ACK corresponding to a packet that I've sent before? Remember, we need that check, because it's possible that a flaky-- there's some bug on the receiver side. So is it an ACK that corresponds to a packet I've sent before? Yes. Send the next in-sequence packet. So it sends packet 7. At this point, we're going to lose the beautiful animation. Because each of these things takes an endless amount of time. So I just produced the full picture. I just wish I had the patience to sit and do the full animation. But I ran out of patience. So anyway, you sent packet 7 at this point. And then, when you receive A4, you send packet 8. When you receive A5, you send packet 9; A6, you send packet 10. Now, let me ask this question. At this point in time, when you receive this acknowledgment, A5, and you sent packet 9, what is the window? That is, what is the set of packets in the window? The window is 5, the window size, but the window size corresponds to some list of packets that are in the window. What is that set of packets or that list of packets? Yeah? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: 2, 7, 8, 9, 10. This is important. This is 2, 7, 8, 9, 10. These packets are not in sequence. It's very tempting to say, the windows [INAUDIBLE] five packets. So if I've sent out 10, the window must be 10, 9, 8, 7, 6. Well, that's not true. The window just says, here's the number of unacknowledged packets. The number of unacknowledged packets is 5, in this case. All right, let's keep going. So 10 is sent out here, and then at some later point in time, we get an acknowledgment for 7, and we send out 11. When we get 8, we send out 12. At this point in time, the window is 12, 11, 10, 9, and 2. At some point, the sender times out. And the timeout is picked to be conservative. That's why we take the smoothed average. We take the deviation. Because we don't actually want to retransmit a packet that hasn't genuinely been lost. The reason for that is oftentimes, when you start seeing weird behavior like this, like a presumed missing packet, you're not actually sure if it's missing or if it's just delayed, as was pointed out before, because it took a different route. Something strange is going on in the network. And causing retransmission to happen of a packet that hasn't actually been lost makes things worse, because it adds more load onto the system right about the time when there's something fishy going on in the network. So the last thing you want to do, when something is under stress, is to add more stress to it. That's why the timeouts are conservative. Any time the sender, in any protocol like this, retransmits a packet that is not actually lost, that's considered a spurious retransmission. It's considered a retransmission that is just not a good thing.
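Here is a sketch of the conservative timeout calculation just described: an exponentially weighted moving average of the round-trip time, a similar average of the deviation, and a retransmission timeout several deviations above the mean. The gains below (1/8, 1/4, a factor of 4) follow the classic TCP-style rule and are an assumption, not the lab's specification.

```python
# Adaptive retransmission timeout: EWMA of the RTT plus a multiple of
# the EWMA of the deviation, so the timer is conservative.
class RttEstimator:
    def __init__(self, alpha=1/8, beta=1/4, k=4):
        self.alpha, self.beta, self.k = alpha, beta, k
        self.srtt = None      # smoothed RTT estimate
        self.rttdev = 0.0     # smoothed deviation from the mean

    def sample(self, rtt):
        if self.srtt is None:             # first sample initializes both
            self.srtt = rtt
            self.rttdev = rtt / 2
        else:
            self.rttdev += self.beta * (abs(rtt - self.srtt) - self.rttdev)
            self.srtt += self.alpha * (rtt - self.srtt)
        return self.timeout()

    def timeout(self):
        # Conservative: mean plus k deviations, so a single delayed
        # packet doesn't trigger a spurious retransmission.
        return self.srtt + self.k * self.rttdev

est = RttEstimator()
for rtt in (0.10, 0.12, 0.09, 0.30, 0.11):   # made-up RTT samples, in seconds
    print(f"RTO = {est.sample(rtt):.3f} s")
```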
And I'll show later on-- maybe you can read about this in the book-- that it actually is the best possible protocol you can come up with in an asymptotic sense. In other words, no other protocol, if you ran it for a long time, would actually get higher throughput in a network that had losses. So it has some nice properties. But it has this one bad property, which is that, in fact, this protocol, in the way these acknowledgments are structured, ends up with a lot of spurious retransmissions. Or they could end up with a lot of spurious retransmissions. Can you see why this protocol could have spurious retransmissions even if we follow this discipline extremely nicely, with conservative timeouts? We only time out when we are really, really sure we haven't gotten an acknowledgment-- we wait a long time. But the protocol could still have spurious retransmissions. Can anyone see the very peculiar behavior of this protocol that comes from the way in which these acknowledgments work? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Yeah. So this protocol has a peculiar problem, which is that all packets and acknowledgments are essentially the same. They contain the same information. If you lose a packet or you lose an acknowledgment for that packet, the sender can't tell the difference. Now, this is, therefore, not necessarily the best protocol, in the sense that, if you have a path-- here is an extreme case. So I have a path where there's no packet losses going from me to you, and I'm sending data to you. And coming back, the packet loss rate is 25%. This protocol has this unfortunate property that I believe that 25% of my transmissions are lost to you. In fact, you've got every single packet I've sent. It's just that I don't see the acknowledgments for those specific packets. And therefore, I'm going to retransmit all those packets to you, leading to spurious retransmissions. So we don't have to worry about it for the lab or for the class, but as a design problem, can you invent a protocol that fixes this problem? Can you modify this protocol, or come up with an idea of your own, which has the property that-- well, pick the design point where the sender-to-receiver path is generally loss-free. But let's say that the receiver-to-sender path has a high loss. And by the way, this is not hypothetical. This is what happens in wireless networks a lot. Because that base station sitting in some cell tower that has a huge amount of power-- it's plugged in-- it consumes probably kilowatts these days. So they can blast at whatever is the maximum allowed by the FCC. And your poor, little, dinky phone is sending acknowledgments. And the thing is running out of battery all the time. So they're carefully trying to figure out, what's the minimum power at which I can transmit? So in fact, these asymmetric conditions are not unrealistic. They're quite realistic. So if I ran this protocol on a network like that, it would probably be a bad thing. So what would you do to the protocol? Yes? What? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Send multiple acknowledgments-- yeah, you could send multiple acknowledgments every time. So you'd be doubling up. Yeah, that's a little bit-- but it's the right kind of idea. You want some sort of redundancy. Yeah? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Yeah. That's actually not a bad idea. In a sense, you're sending multiple acknowledgments, but you're not just blindly sending multiple acknowledgments.
But any time you send an acknowledgment, you could also say-- sending the list of all packets you've received so far is a huge amount of data. Because if I send you a gigabyte movie, or something, I mean, by the end of that movie, you're just sending me a lot of acknowledgments. So you don't quite want to do that. But remember, the receiver has some idea. If it knew the window size, it would have some idea of the number of things outstanding at the sender. You could do something even simpler than all of that. One thing you could do is that the receiver, when it acknowledged the packet, wouldn't just acknowledge packet 7, when it got packet 7, but it might be able to send a cumulative acknowledgment. In other words, it could say that, when I send an acknowledgment, I guarantee to you that everything I've received up to this point-- I'm sorry. I guarantee to you that all packets up to that point I have received. So if I tell you that my acknowledgment is 17, I guarantee you that there's nothing before 17 that I've not received. And then I could, in addition, in the acknowledgment, tell you a little bit about some of the later packets I've received or some of the later packets that might be missing. So you can make this protocol have a little bit more redundancy. And if you do that, and you apply almost everything else I've taught, you get TCP, which is an extremely popular protocol. But that's about the only difference of significance between our protocol and TCP. Now, interestingly, when you actually have loss rates in the forward and reverse directions that are roughly the same, our protocol actually does a little better than what TCP happens to do. But TCP is good at dealing with the reverse path having a higher degree of packet loss. OK. So the other question I want to ask people here, at this point, is let's say that you have a receiver that's running on an extremely simple device. So you don't want to have a lot of storage. Now, why would you need storage, before I get to that question? Let's take this picture here. So packet 2 hasn't yet been received. But in the meantime, the receiver has gotten packets 3 and 4 and 5, all the way up to 12. So what does the receiver have to do? Well, the receiver, remember, before it delivers it to the application, it has to hold on to those packets. It can't deliver packet 3 and then packet 4 to the application before packet 2. Because the guarantee that the receiver is giving is that all packets will be delivered in exactly the same order in which they were sent. So the receiver has got to hold onto those packets until packet 2 shows up. Does that make sense? OK. How big can that receiver's buffer become? How big do you need to make it? Like, if you were implementing this on a computer, if you want to allocate memory for it, how big do you need to make it? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: What? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Big enough to handle the timeout-- good. How big can the timeout be? Well, the timeout can be some finite number. But think about what happens when the timeout happens. You retransmit packet 2. And packet 2 is lost again. Now, in the meantime, the protocol is going to continue. Because all these other packets are going to keep getting acknowledgments, and they're going to keep causing the sender to keep sending packets. So if packet 2's retransmission were lost, we're going to be at this point here.
We're still going to be sending, at this point in time, packet 13 and whatever-- 13 and 14 and 15 and 16, and so forth, right? Now, packet 2 could just keep getting lost. I mean, it may happen with low probability, but there's a probability that it'll happen. So how big does the receiver's buffer have to be in this implementation, in the worst case? Well, let's say that you don't know how big the file is. It's a continuous stream of packets that are sent. I mean, is that a bound? In the worst case, is that a real bound on the size of the buffer? Or can it grow to be as big as the entire stream that you're sending? It can grow to be really, really big. Now, this is a potential problem. Because it can keep growing and growing and growing. At some point, you're going to run out. You might run out of space. When you start to run out of space, it's tempting to just throw things out. So let's say that the receiver implements it. Somebody implements this protocol and just says, I'm going to just have 100 packets. The sender is running with a window size of 5. I'm just going to have a buffer of 100 packets, which says, the maximum number of packets I'm going to hold in my buffer, before I start discarding later packets, is 100. Does this protocol work? Is it correct if I do that? Yes? AUDIENCE: [INAUDIBLE] like a receiver just never acknowledges [INAUDIBLE] receives it [INAUDIBLE]. HARI BALAKRISHNAN: OK, but what if I acknowledge a packet as soon as I get it? OK, if you acknowledge a packet as soon as you get it, the receiver's discipline, the guarantee it should provide, is if it acknowledges a packet, then it's told the sender that it's got the packet, which means the sender will never retransmit it, which means it shouldn't throw the packet away. So as long as the receiver only throws out packets that it doesn't acknowledge, you're OK. Does that make sense? So the discipline is it's just like writing a legal contract, right? That's what protocols are. It's just a bunch of legal contracts, and you try to make them as simple as possible. And you try hard, and you end up with 200 pages. But that's what lawyers also say-- yeah, it's really simple. But then you've got all these clauses. But the reality is that you've got to deal with all these [INAUDIBLE] cases. So protocols are nothing more than contracts that both sides agree upon. And the contract here from the receiver is actually pretty simple. It says, if I send you an acknowledgment, it means that I'm not throwing the packet away. What happens if I tweak this protocol to behave a little bit differently at the receiver? When I get a packet, if it's in order, I deliver it to the application. And after I deliver it to the application, I send an acknowledgment. OK? So in other words, I only send an acknowledgment to the sender after it's delivered up to the application. Otherwise, I don't. What happens to this protocol if I do that? Does it perform the same as what I described? And remember, there's a subtle difference. The only difference is, in this protocol as I've described it, the receiver gets a packet, sends an acknowledgment, and then holds on to it in a buffer, if the packet's not the next packet in order. The modification I'm proposing is the receiver gets a packet, and only when it delivers it up to the application, does it send an acknowledgment. Otherwise, it doesn't send an acknowledgment. Yes? AUDIENCE: Is it just like stop and wait? AUDIENCE: I think so, because [INAUDIBLE]..
HARI BALAKRISHNAN: But if packets are not being lost, it's doing a lot better than stop and wait, right? If packets are not getting lost, it's doing-- would you agree that if packets are not lost, it does better than stop and wait? In fact, if packets are not lost, is there any difference between my protocol and this modified one? No. OK. But yet, you had a good thought. It looks like stop and wait. When does it look like stop and wait? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Yeah. So that modification is, when packets are lost, it looks like stop and wait. Now, this is not a mere academic thing. So it turns out that there was a period of time in the '90s, where somebody in Linux TCP had the bright idea-- it seemed like a bright idea-- that that's what they would do. So for a period of time, there was a Linux TCP, where they said, well, it's all very complicated. Because what would happen was that sometimes, the machine would crash. And the sender thought that the packet had been acknowledged, but it hadn't actually been delivered up to the application. So let's just make it so the packets get delivered to the application, and only when the application does the read-- for those of you who've done this sort of thing-- from the socket buffer, and it's been out in the application, and out of the operating system, that's when we'll send the acknowledgment. And that seemed OK. People said, that seems reasonable. And that's the way Linux seems to work-- people try out a lot of stuff, and then, from time to time, somebody declares that something is right. So anyway, they tried this out. And it actually didn't work as well. And the reason for that is, if you run on a network with a high enough packet loss rate, then what could happen is that you may get stuck. And it's very hard to notice these performance problems. Correctness problems are one thing, because the other side stops. It stops working. And you can track it down. But this simple tweak, that looks perfectly reasonable, is actually a performance problem. And it doesn't show up all the time. It actually shows up only when the packet loss rate is reasonably high. So these are all examples of reasons why these protocols are not completely obvious and require a fair amount of care to get them to work. Are there any questions about any of the stuff? Is this all clear? OK. What I want to do now is to show a picture of something called a sequence plot, which is a very useful tool in understanding how these protocols actually work. So what you do to produce one of these plots is you run your protocol. And you plot out-- at the sender, you plot out the times at which the sender sent out every sequence number, every time it transmitted a packet. And you plot that out as a function of time. The y-axis is the sequence number. The x-axis is time. And similarly, every time the sender gets an acknowledgment, you plot that out on the trace as well. So you look at these two traces. OK, this is a trace of packet transmissions, data packet transmissions, and this is a trace of ACK packet receptions. And you look at this picture. Now, the moment you get a picture like this, there are a few things you can immediately conclude. The first thing you can conclude is, that if I look at the distance between the data and the ACK, when there are no losses happening, if I look at that distance, that tells me the window size. Because that's the number of outstanding packets.
Every time an acknowledgment happens, you send out a new packet. Therefore, the distance in sequence numbers-- in one of these vertical slices, when there are no packet losses-- is the window size. You can also read off the typical round-trip time of the connection. Because the round-trip time is the time between when a packet, a data packet, was sent and when you got an acknowledgment for it. So you can read that off as well. There's an easy way for you in your lab 9 to produce these two pictures. So if you're running into things where things look slow, things look bad, you should just put up one of these pictures, and then it'll usually become pretty apparent what's going on. What may happen is that initially, things look like this. And all of a sudden, things stop. And you can start to see, well, I'm not getting acknowledgments, or I'm not sending data the right way. And these are very useful to understand what is going on. And generally speaking, these are useful to uncover performance issues rather than correctness issues. I mean, correctness, usually, you can iron out before you get to this stage. The retransmission timeout is the time between when you send a packet and when you send the retransmission for the packet. In this particular picture, the deviation from the mean was small. And that's why the retransmission timeout is only a little bit bigger than the mean round-trip time. Every time you see a packet that's off of that sequence trace-- so you see packets here. The pluses are data packets. And then you see something going normally. And then you see a lower sequence number retransmitted, sent here. That's a retransmission. So you see, normally, the new packets are all sent there. But the retransmissions show up before. So these are examples of retransmissions, and these are examples of packets that were retransmitted more than once, because they're timing out multiple times. Yes? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Yeah. So the window size-- what's the definition of the window size? The maximum number of unacknowledged packets. So the maximum number of unacknowledged packets, when there are no packet losses that have happened, is the difference between the last packet you transmitted and the last acknowledgment you got. Because every time you got an acknowledgment, you send a new packet. And initially, you send out w packets. So if you continue that, you initially send 1 to 5, then you send 2 to 6, 3 to 7. And the last ACK you had was 2, when you sent out 3 to 7. So that distance tells you the window size. I might be off by 1. It's probably the last packet you sent minus the last acknowledgment you got plus 1, is the window size, or minus 1, something like that. You've got to get that right on the quiz. Fortunately, I don't have to get it right here. [LAUGHTER] And then, some of these things here are later x's. And these are acknowledgments that show up. So these are packets that got retransmitted multiple times. These are acknowledgments that are most probably for these retransmitted packets. And I say most probably, because I can't actually be sure. In principle, it could be that this acknowledgment here is for this data packet, that was actually originally transmitted over here, rather than for this retransmission. It's, in principle, possible that this acknowledgment was sent by the receiver upon the reception of a packet over here. It's just that it's unlikely.
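Producing one of these plots from a trace is simple. Here is an illustrative sketch (the logging format is an assumption, not the lab's actual one): log a (time, sequence number) pair for every data transmission and every ACK reception, then scatter-plot the two traces on one set of axes.

    # Sketch of a sequence plot, assuming you've logged (time, seqno) tuples.
    import matplotlib.pyplot as plt

    def sequence_plot(data_events, ack_events):
        # data_events: list of (time, seqno) for every data packet sent
        # ack_events:  list of (time, seqno) for every acknowledgment received
        t_d, s_d = zip(*data_events)
        t_a, s_a = zip(*ack_events)
        plt.plot(t_d, s_d, '+', label='data transmissions')
        plt.plot(t_a, s_a, 'x', label='ACKs received')
        plt.xlabel('time')
        plt.ylabel('sequence number')
        plt.legend()
        plt.show()

The vertical gap between the two traces reads off the window size, and the horizontal gap reads off the round-trip time, exactly as described above. Back to the ambiguity about which transmission that last acknowledgment was for.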
It's more likely that it was this, because that's the round-trip time that's consistent with that RTT. But you can't actually be sure. All you know is that this was an acknowledgment for that data packet. But most likely, it was for the retransmission. OK, so these sequence traces are generally pretty helpful and useful in understanding the performance of transport protocols, particularly sliding window protocols. So any questions? OK. So now I'm going to turn to the last remaining issue for these transport protocols, which is analogous to the calculation we did of the throughput of the stop and wait protocol. I want to look at the throughput of the sliding window protocol. OK. And I want to explain that by first actually explaining what the problem is. And then I want to go back and tell you about a very beautiful result, a very widely applicable result-- it applies to everything from networking to how long you're going to wait to get served at a restaurant-- called Little's Law. It's a remarkable result, very simple, and widely applicable. Everybody should know it. So the question here is, what's the throughput of sliding window? And in particular, if I had run a protocol in a network that looks like this-- so I have a sender. I have a receiver. There's some network path in between. And of course, this has a bunch of switches here. And I want to know what is the throughput of the protocol. And I would like to figure out what the throughput is in terms of a few quantities. So the sender has a window size w, according to this protocol. For now, we'll assume that there's no packet loss. That is, data packets are not lost, and acknowledgments are not lost. If I have time today, I'll come back to explaining what happens with packet loss. Otherwise, we'll pick it up in recitation tomorrow, or I'll point you to the place in the book. It's just a simple calculation that extends this one. The more important part is when there are no losses. Now, I'll also assume that there are links of different rates here. And one of these links on the path between sender and receiver is the link that is the bottleneck link. In other words, no matter what you do or who you bribe, you cannot send packets faster than the speed of that link. For simplicity, I'll assume that there's one bottleneck. The general results apply even when there are multiple bottlenecks. But I'll assume that there's some bottleneck here. And I'll assume that its rate is c packets per second. And I will assume here, that because there's a bottleneck, in general, packets may show up faster than the bottleneck can handle. And if they do, they sit in a queue. And because I've constructed the problem so packets don't get lost, the queue can have an arbitrary length. It could potentially grow unbounded. Though, in reality, it won't, because the sender has a fixed window size of w. Now, all of this analysis and calculation will apply when there are many, many people transmitting data, sharing this bottleneck. So you can have multiple senders sending to multiple receivers, and they'll all share this link in some way. And for now, today, all I'll assume is that there's one user of the network. It's not hard to extend the same calculation to multiple users. And the question is, what is the throughput in terms of the window size and in terms of these other things?
Now, in order to answer this question, it'll turn out that the throughput depends on the window size, and also on the round-trip time, and also on the loss rate, and also on-- in a certain mode, it will depend on c. It can't exceed c. OK? But in order to understand how to solve these kinds of questions, there's a more general result that's more widely applicable, called Little's Law, which I want to tell you about. So Little's Law applies to any queuing system. It applies to any system where there's some big black box here, and the black box has a queue sitting inside it, and the queue drains at some rate. So you have a queue sitting here. Things arrive into the queue. I'll call that the arrival process, which I'll represent by A. And then things come out of the queue, according to some service process, which I'll represent by S. By the way, Little is a professor at MIT. I think he wrote this result, this law. Well, I don't think he called it Little's Law, but other people did. So he did this work, I think, in the 1950s. And what's beautiful about this result is that it relates three parameters. It relates the-- I'll call it N. It relates N, the average number of items that you have in this system, in the queue, or in this black box. It relates that to the service rate and to the average delay experienced by an item that sits inside this black box. So let me relate the three again. It relates N, which is the average number, to D, which is the average delay-- so I'm going to put a bar above it to denote the fact that it's an average-- to lambda, which is the average rate. Now, the result applies to a stable system. What that means is it applies to a system where the queue doesn't grow unbounded to infinity. In other words, it applies to a system where the service rate-- if the arrivals are persistently bigger than the service, then it doesn't matter what you do. The queue is going to grow to infinity, and the delay is going to grow to infinity, and N is going to grow to infinity. So you're going to get a relationship that's not of much practical use. But otherwise, if the rate at which things come out of the system, in a stable system, is lambda-- which, if it's a stable system, the rate at which things enter the system can't exceed either-- then it relates the service rate lambda, for a stable system that doesn't grow unbounded, to N and D. OK? So let me give this first by example. How many of you guys have used the food truck? All right, so last week, I did a little experiment there. And I found that-- this is all real data. I found that, at least at the Thai truck, they seem to take about 20 seconds per person, on average. OK? And when I showed up there-- and this wasn't an average calculation. But I showed up there, and there were 30 people ahead of me in the line. And the question, of course, is I don't care how many people there are in line. What I care about is, how long do I have to wait? Assuming that the random sample I did was the average, which who knows if it was or not, looking at these two numbers, what's the waiting time? In other words, what's D? 10 minutes? Is it? I didn't wait 10 minutes. How do you get 10? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: I see. It might be. So I counted 30 people. And I had it as 20 seconds per person. All right, it might be 10. Why is it 10? How do you conclude that it was 10? Who said 10? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Why? AUDIENCE: Well, so it'd be like, if you have 30 people, then you have [INAUDIBLE] per person [INAUDIBLE]. HARI BALAKRISHNAN: Yeah. Right.
So what this says-- what you did was you just said that D must be equal to N over lambda, right? Or N is lambda times D. So if you say that it's 20 seconds per person, that's 3 people per minute. So what you do is you do 30 people divided by 3 per minute. And so you get 10 minutes. So that's about right. That is exactly right. So Little's Law just tells you that the average number of items in a system-- this is all applicable to various conditions on stable systems, and so forth-- it says the average number of items, or packets, or people, or whatever, is equal to the product of the rate at which the system is servicing them multiplied by the average delay that they experience. So knowing two of them, you can calculate the third. And what's truly, truly remarkable about the result is that it applies to anything that you do in the system. Packets could arrive, or jobs could arrive, or people could arrive, in some arbitrary distribution. They could be serviced according to some completely arbitrary distribution. They don't have to be serviced in the order in which they arrive. They could be shuffled around. You could make it so people who come in last get serviced first. You could do whatever, and the result still applies. Yes? AUDIENCE: [INAUDIBLE] delay [INAUDIBLE]? HARI BALAKRISHNAN: No. Well, I kind of cheated here a little bit. This is 20 seconds per person. But whenever I tell you a number like that, what it really says is that this is 3 people per minute. So it looks like a delay, but this is really an inverse of a rate in the way I've described it. I mean, it's intuitive to say they take 20 seconds per person. But when I tell you that it takes 20 seconds per packet or 20 seconds per person, it looks like a delay. But it's really a rate. So it's important. That's a good question. Yeah, so this is a rate. So this is inverse time. And this is whatever quantity you're dealing with. So if you then take the ratio of N to lambda, you get time. OK, so why is Little's Law true? So here's a very simple pictorial proof of Little's Law. And it applies under specific conditions. But it turns out these conditions are good enough for our use. So let's say we draw a picture like this of a queue. So I'm going to assume that packets enter the queue and leave the queue. Now, the fact that there's a single queue, versus not a queue, doesn't matter. It's any black box. So packets-- or information or messages or items-- could get sent from the sender. They enter a black box, and they come out at the receiver. And the thing applies to that as well. So let me plot the number of packets in the queue, or the number of items in the queue, as a function of time. So I'm going to assume here that capital T is extremely long. Whenever I deal with rates, I have to look at what happens over a long period of time, and then I can calculate a rate. So you can see that what I've done here is, that every time a packet arrives or an item arrives, the queue increments by 1. So you can see that the y-axis, the height of each of those little snippets here, is 1. And then every time it leaves, it drops by 1. So you get-- in a particular execution of whatever the queue does, you get a trace that looks like this. Now, of course, in a different execution, the details might be different. But if you do it for a long enough time, you're going to sample all the possible evolutions of this thing, or at least enough of it, so you can make meaningful statements.
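Here, as a quick aside before the proof, is a tiny simulation you can check the law against-- a minimal sketch with made-up numbers: a FIFO queue with random arrivals and a fixed service time, where the average number in the system, measured directly, comes out equal to the measured rate times the measured average delay.

    # Numerical check of Little's Law, N = lambda * D. Illustrative only.
    import bisect
    import random

    random.seed(1)
    service = 0.2                           # fixed service time (seconds)
    arrivals, t = [], 0.0
    for _ in range(50000):
        t += random.expovariate(1 / 0.25)   # mean inter-arrival 0.25 s > 0.2 s,
        arrivals.append(t)                  # so the system is stable

    departures, free_at, total_delay = [], 0.0, 0.0
    for a in arrivals:                      # FIFO queue with one server
        free_at = max(a, free_at) + service
        departures.append(free_at)
        total_delay += free_at - a          # this packet's time in the system

    T = departures[-1]
    lam = len(arrivals) / T                 # average rate
    D = total_delay / len(arrivals)         # average delay

    def n_at(t):                            # number in the system at time t
        return bisect.bisect_right(arrivals, t) - bisect.bisect_right(departures, t)

    N = sum(n_at(random.uniform(0, T)) for _ in range(20000)) / 20000
    print(N, lam * D)                       # the two agree: N = lambda * D

The pictorial argument that follows explains why this has to come out equal.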
So whenever a packet arrives, I've shown it in a color. And I think I've matched the color up against whenever that packet leaves. But in fact, the result applies-- this particular example is a first-in, first-out queue. So packets leave in the same order they were sent. But that doesn't have to be true. So let me label these packets as shown here. Now, what I'm going to try to do is to relate the rate at which packets have entered or left the queue to the number of items in the queue and the average delay experienced by each item in the queue, in this pictorial proof. So the way you do that is everything has to do with the fact that there are two different ways of looking at the area under this curve. One of them relates to the rate, and the other relates to the average delay. And then we're going to say, all right, the area under the curve is the same, and therefore, we're going to equate two numbers. So the first thing I'm going to do is I'm going to take the area under the curve and divide it up into rectangles like this and associate with each rectangle a packet, or an item. So I'm going to say that A showed up here, and it left at that point. So this entire period of time here corresponds to packet A sitting in the queue. This entire period of time corresponds to B. A left at this point in time. So now my queue has three packets, and they're B, C, D. And then, at this point in time, E showed up. So we now have C and D sitting here. But now E showed up, and then F showed up, and so forth. So you agree that I can divide this up into rectangles and associate with each little rectangle, whose height is 1 and whose width is the time that packet spends in the queue, a particular packet. So each rectangle represents a particular packet, and I associate every little piece of this queue picture with a given packet. Now, let's assume that we run this experiment for a long time T, capital T. And P packets were forwarded through the system. So what is the rate? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: P packets per T seconds-- so the rate is clearly lambda equals P over T. Right? This is easy. OK. Great. Now, let's assume that the area under the curve is A. This is the entire area under the curve here. Now, this is the area under the curve of N of T, which is the number of packets, as a function of T. So if I take this area under the curve, which is the same-- if you think of it in the continuous domain, it's the integral of N of T-- and I divide by T, I get that number. Right? You agree that the mean number of packets in the queue is the integral of N of T, which is the number of packets in the queue at any point in time. If I take that integral, and I divide by capital T, I get the mean number of packets in the queue. All it says is this is the total number of packets in the queue aggregated across all time. Therefore, to find the average, I take the integral, and I divide by T. That's the definition of the mean. All right, so now we have two things. We have that the rate is P over T. And we have that the mean number of packets in the queue is A over T, where A is the area under the curve. Now, to complete the puzzle, what we have to observe is, that if you look at the same area under the curve, you can look at it in two ways. The one way to look at it is the mean number of packets in the queue is some line through here, which is the area under that curve divided by T. But each of these rectangles accounts for a certain delay.
And the mean delay experienced by a packet is simply the area under this entire curve, but divided among all of the packets that ever got forwarded through the system. So through this experiment, P packets got forwarded by the system. And the area under the curve also represents a total aggregate delay. Because if I look at it with this axis here, that's the total time. So that's the total time spent. And if I take this entire area under the curve, and I divide by the number of packets that I sent, that gives me the average time that a given packet spent in the queue, which means that the mean delay is A over P. So if I take A over P and multiply it by P over T, what I get is A over T, which is equal to N. And that's Little's Law. So now we're going to apply Little's Law. I mean, it's actually a very intuitive idea. It just says, that if I take the average rate and the average delay, and I multiply the two, I get the average number that's sitting in the system. So in order to complete the picture for the throughput of this sliding window protocol, what we're going to do is to apply Little's Law in a couple of different ways. We're going to say, that if the window size is w in that protocol, and the round-trip time is RTT-- that's the time between when I send a packet and get an acknowledgment back-- I first apply Little's Law. So now I have a big black box. I send out packets. And every time I receive an acknowledgment, I send out another packet. And I never have more than w packets outstanding. So the average delay between when I send a packet and when I get an acknowledgment for it is RTT. So that's the D in the Little's Law formula. The number of things that I have sitting in this black box inside the network, the number of outstanding things that I have that are waiting to be processed, is w. And therefore, the rate is, by Little's Law, N over D, which is w over RTT. So therefore, the throughput of this protocol is simply equal to w over RTT. So if I increase w, I get higher throughput. So if I draw this as a function of the window size w, and I look at the throughput here, I get a linear increase like that. Now, the problem with this is, of course, you look at this and go, well, the best way to get higher and higher and higher throughput is to keep increasing the window size. So what happens if I-- does this keep going on forever, that all I have to do is to keep increasing the window size, and then I'm getting infinite throughput? That's clearly not happening. So what happens? It's completely true that w over RTT is the throughput. So why is it that I can't just keep increasing the window size and get infinite throughput? Yeah? AUDIENCE: [INAUDIBLE] HARI BALAKRISHNAN: Well, it's true you're bounded by c. But yet, this formula is true. Right? It's true that there's some round-trip time. So what's really going on, of course, is that if you increase the window size more than a certain amount, all that's going to happen is the packets are going to get stuck in this queue here. And they're going to start draining at some rate, c packets per second. But they're just going to get stuck at the back end of the queue. When they get stuck at the back end of the queue, the RTT is no longer fixed. The RTT now also starts growing. So in other words, the throughput is always this formula. But initially, when packets are not piling up in the queue-- up until a certain point-- you send one packet. It goes through.
You have a window of two packets. They go through, and you get ACKs. Three packets, they go through and get ACKs. At some point in time, they start to fill up the queue. And when they start to fill up the queue, w keeps growing, but RTT keeps growing too. And what ends up happening is this ratio doesn't exceed c. So you end up with throughput that looks like that. And the point at which this happens here, this point here, is actually the product of c and the minimum RTT of the system, which is the round-trip time in the absence of any queuing. I'm going to call that RTT min. And that depends on the propagation delay and the transmission delay, but not on the queuing delay. If there are no queues, there's a certain minimum round-trip time-- like, it takes 100 milliseconds to go to California and back, or whatever. Now, when queues start to grow, that RTT starts increasing. But until that point happens, the round-trip time is RTT min. And if I take that, and I multiply that by c, that's the critical window size up to which no packets build up in the queue. But after that, packets start to build up in the queue. And there's a name given to this product of the bottleneck link speed, or bandwidth, and RTT min. It's called the bandwidth delay product. It's the product of the bandwidth and the delay, where the delay is the minimum round-trip time. And if I were to draw an analogous picture of the actual round-trip time as a function of the window size, initially, when the window size is small, the round-trip time is RTT min, with some value. And then, at this point-- I want to mimic the previous picture-- you get to this window size, which is the bandwidth delay product. And then the round-trip time starts to grow. So this is the actual delay. And so you look at this picture. And a well-designed, well-running protocol will run with a window size roughly around here, where it gets the highest possible throughput at the lowest possible delay. But sometimes, you might end up running with a bigger window size. You're not going to get any faster throughput, but what you would see is increasingly higher delay. Now, in real networks, designing protocols that run at this nice, sweet spot is an extremely challenging problem. I'll get back to this problem on Wednesday and talk about how people work on it. It's still a somewhat open problem. In fact, it's still an open problem in things like cellular wireless networks. So I'll come back to this point. But the main point here is this idea of a bandwidth delay product.
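To make that picture concrete, here is a small sketch with made-up numbers (the rates and times are illustrative, not from the lecture) showing how throughput and RTT behave on either side of the bandwidth-delay product.

    # Throughput and RTT of a sliding-window protocol vs. window size.
    # c = bottleneck rate; rtt_min = round-trip time with empty queues.
    c = 100.0          # bottleneck rate, packets per second
    rtt_min = 0.1      # seconds, propagation + transmission, no queueing
    bdp = c * rtt_min  # bandwidth-delay product = 10 packets here

    for w in range(1, 31):
        if w <= bdp:
            throughput = w / rtt_min   # queue stays empty: linear growth
            rtt = rtt_min
        else:
            throughput = c             # bottleneck saturated
            rtt = w / c                # since throughput = w / rtt always,
                                       # the queueing delay inflates the RTT
        print(f'w={w:2d}  throughput={throughput:6.1f} pkt/s  rtt={rtt*1000:5.1f} ms')

The printout shows the two regimes: below the bandwidth-delay product, extra window buys throughput; above it, extra window buys only delay.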
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
2_Compression_Huffman_and_LZW.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK let's get started. Let's get started, please. All right, last time we talked about information and entropy. The picture we had was of some kind of a source emitting symbols. Symbols-- let's say n of them. So it chooses from these symbols with probabilities P1 up to Pn. And then we talked about the expected information here, or the entropy, so the expected information you get when you see the symbol that's emitted by the source. And that was the average value of the information. So it was-- let's see, you take log of 1 over P i for each of the possible symbols. And then you've got to weight it by the corresponding probability to get an expectation. And this was the entropy of the source. Or if you want to make explicit the source, you could say H of S for source-- capital S. All right? And then we were actually thinking of this operating repeatedly. So in the model we had last time, the source at each time chooses from one of these symbols with this probability. And it does it independently of choices at other times. So what the source actually generates is what's referred to as an iid sequence of symbols, independent, identically distributed. You'll see this a lot-- an iid sequence of symbols. So the independent part of this refers to the fact that it makes the choice independently at each time instant. The identically distributed means that at each time instant, it goes back to these same probabilities. It's the same distribution that it uses each time. So that's what iid means-- so sort of a stationary probabilistic source with no dependence from one time instant to the next. Average information was measured in bits per symbol. And what we wanted to do was take those symbols and compress them to binary digits. OK, so we were going to-- you can compress them to other things. We were going to think of compressing them to binary digits because we're thinking of a channel that can take 0s and 1s, or a signal that's in two possible states. So what we wanted to do was take each symbol or sequence of symbols and code it in the form of binary digits. Right? Now, each binary digit can, at most, carry one bit of information. If the binary digit is equally likely to be a 0 or a 1, then it carries one bit of information. So that tells you really that if you're going to code this, the code length-- let's see-- compress to binary digits, let's say, or encode. And what we need is the expected code length. L should be greater than or equal to H. So you need to transmit at least this many binary digits on average to convey the information that's coming out of the source-- per symbol or per time step. All right, so that was the basic setup. I've given you one of these bounds here. When we talked about codes, by the way, we decided that if we're talking about binary codes, we want to limit ourselves to what are called instantaneously decodable or prefix-free codes. And these are codes that correspond to the leaves of a code tree. So we had examples of this type. You want your symbols to be associated with the leaves of-- the end of the tree, not intermediate points.
The reason being that, as you work your way down to the tree-- by the way, I'm assuming that this picture makes sense to you in some fashion from recitation. But as you work your way down to the symbol, you don't encounter any other symbols on the way. So as soon as you hit the leaf, you know what symbol you've got. So we're limiting ourselves to codes of that type because some of the statements I make are not true if you don't have codes of this type. So I won't comment on that again. All right, so we've got that, the first inequality that I've put up there. And it turns out that Shannon showed how to actually construct codes that will give you a bound on the other side. Let me actually write it the way it is on the slide. So Shannon showed how to get codes that satisfy this-- so can get code satisfying this. So Shannon showed how to get within one of the lower bound in terms of the expected length of the code. So that was pretty good. But after coming up with this paper in '48 and working on this for a while, neither he nor other luminaries in the field had found how to get the best such code, and that's what Huffman ended up doing. So we've talked about that already. OK, so Huffman showed how to get a code of minimum expected length per symbol with a very simple construction. Now, you can actually extend Huffman-- and maybe you talked about this in recitation as well. So you can code per symbol, or you can decide you're going to create super-symbols. Take the same source, but say that the symbols that it emits are the symbols from here grouped two at a time. So you're going to take the symbol emitted at some particular time and then the symbol at the following time and call that a super-symbol. And then take the next pair, and that's a super-symbol and so on. So you're doing the Huffman coding, but on pairs of symbols. So you can go through the same kind of construction. If you assume an iid source, then the probability of a paired super-symbol is easy to compute. It's just the product of the probabilities of the individual ones because they're independently emitted. And then the entropy of the resulting source here turns out to be twice the entropy of the source here because these are independent emissions, so the entropies will just add. So you can do the Huffman construction again. And what you discover is the same kind of thing except this is now the inequality, right? And the reason is-- well, here L is still the expected length per symbol. But you're doing pairs now, so the expected length for the pair is 2L. Right? The lower bound is the entropy of the source. That's 2H. The upper bound is the entropy of that source plus 1. So you can construct a code of that type. You can do it with Shannon's construction or Huffman's. And now see what you've managed to do. You've got a little tighter squeeze on the expected length. So we've gone from H plus 1 to H plus 1/2 with this construction. If you took triples, this would just change to 1 over 3. If you took K-tuples, you'd get 1 over K. So if you encode larger and larger blocks, you can squeeze the expected length down to essentially what the entropy bound tells you. Now, Huffman-- you've spent time in recitation. I just thought I would quickly run through an example so that you have this fresh in your minds. So we start off with a set of symbols. This is kind of weak, but I hope you can see it. A set of symbols, A through D in this case, with probabilities associated with them.
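Before the board walkthrough that follows, here is the whole construction in compact code-- an illustrative sketch, not the course's code-- which computes the entropy, runs the repeated merge-two-smallest step with a heap, and checks the H <= L < H + 1 bound. The walkthrough below does exactly the same thing by hand.

    # Entropy and Huffman coding for a small source. Illustrative sketch.
    import heapq
    from math import log2

    probs = {'A': 0.5, 'B': 0.25, 'C': 0.125, 'D': 0.125}
    H = sum(p * log2(1 / p) for p in probs.values())   # entropy of the source

    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)    # two smallest probabilities...
        p2, _, c2 = heapq.heappop(heap)    # ...get merged into a paired symbol
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    codes = heap[0][2]

    L = sum(probs[s] * len(c) for s, c in codes.items())   # expected length
    print(codes)    # one valid answer: {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
    print(H, L)     # 1.75 and 1.75 -- they coincide for these dyadic probabilities

As the lecture notes, ties can be broken differently and still yield a Huffman code with the same minimum expected length.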
The Huffman process is to first sort these symbols in descending order of probability. So that's what I really start with. You take the two smallest ones and lump them together to get a paired symbol, rearrange, reorder. And then you do the same thing again. You take the two, combine them, reorder. Take the two smallest ones, combine them, reorder. And that's what you have for your reduction phase. And then you start to trace back. So when you trace back, you can pick the upper one to be 0, the lower one to be 1. And then every time you get a bifurcation, as you go back, you'll pick the upper one to be 0 and the lower one to be 1. And you start to build up your code word, right? So this one traces back. There's no bifurcation. This traces back. The 0 becomes 0001. And you go all the way like that. OK? So trace back-- let's see. Oh, was there a-- yeah. So the 1 here becomes a 1 0 and a 1 1. And then at the next step, you're all the way back with the Huffman code. Right? So that's the Huffman code for that set of symbols. It's a Huffman code. I shouldn't say the Huffman code because, if you notice, at various stages we had probabilities that were identical, like over here and over here and over here. And we could have chosen how to order these things and then how to do the subsequent grouping. And all of those will give you Huffman codes with the same minimum expected length. All right. All right, I want to give you another way of thinking about entropy and why it enters into coding. And here's the basic idea. All right, so we're still thinking about the source emitting independent symbols. It's an iid source. And we've got a very long string of emissions. So we've got a very long string of symbols emitted, maybe S1 at the first time, S17 here, S2 here, and so on. And the question is, in a very long string of symbols, how many times do you expect to see symbol S1? How many times do you expect to see a symbol S2 and so on? Well, if you actually work it out, it turns out that the expected number of times we see SI in the K symbols is K times the probability of seeing SI. So it's what you'd expect. All right? So the expected number of times is that. Well, but that doesn't tell you what the number of times is that you'll see in any given experiment. We know that you need to think about standard deviations as well. So what this is saying is, for instance, for symbol SI, that we expect to get that many of symbol SI. But actually, there's a distribution around it. So you'll get a little histogram here. I'm not making any attempt to draw it very carefully, but there's a distribution. You run different experiments, you're going to get different numbers of SI in that run of K. Right? So there's a distribution. And it turns out you can actually write an explicit formula for the standard deviation. This is something you'll see if you do a probability course. It's actually very simple. So that's the standard deviation. So the standard deviation goes as root K. So the interesting thing is that the standard deviation, as a fraction of the number of successes, gets smaller and smaller as K becomes larger and larger. Or another way to see that is, if I normalize this, so I'm going to do the number of successes divided by K, this histogram is going to cluster around P i. And the standard deviation now, because I've divided by K, actually ends up being the square root of P i times 1 minus P i, divided by the square root of K. All right?
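That concentration is easy to see in a quick simulation-- an illustrative sketch with made-up parameters: draw K symbols many times and compare the empirical mean and standard deviation of the count against K times p and the square root of K times p times (1 minus p).

    # Concentration of the symbol count in K iid draws. Illustrative sketch.
    import random
    from math import sqrt

    random.seed(0)
    p, K, trials = 0.3, 2500, 1000
    counts = [sum(random.random() < p for _ in range(K)) for _ in range(trials)]
    mean = sum(counts) / trials
    std = sqrt(sum((c - mean) ** 2 for c in counts) / trials)
    print(mean, 'vs K*p =', K * p)                             # about 750
    print(std, 'vs sqrt(K*p*(1-p)) =', sqrt(K * p * (1 - p)))  # about 23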
So what this says is if you get a run of K emissions of the symbol and you try and estimate the probability P i by taking the ratio of the times SI appears over the total number of runs, you'll actually get a little spread here centered on P i. But the spread actually goes down as 1 over root K. So this is really what the law of large numbers is telling us. It's telling us that if you take a very long run, you almost certainly get a number of successes close to, well, Kp i in this case. It's very tightly concentrated. All right, we don't want you to remember all these formulas. I have them on the slides. It's just there for fun. There's something else that I put on there that you can try out for fun. I don't want to talk through it, but you can use exactly this to analyze things like polling. Why is it that you can poll 2,500 people and say that I've got a margin of error of 1% as to how the election is going to turn out? Well, the answer is, actually, in exactly this. If we have time at the end, I'll come back to it. But it's easy enough that you can look at it yourself. So let's focus on what it is I wanted to show you. I picked Obama 0.55, but that was just as an illustration. [LAUGHTER] No. No political views to be imputed to that. All right, so what we're saying is you've got K emissions from this source. And with very high probability, you've got Kp1 of S1, Kp2 of S2, and so on. So this is really what you're expecting to get, provided you've tossed this a large number of times. What's the probability of getting a sequence that has Kp1 of S1, Kp2 of S2, and so on? So you've got to get S1 in Kp1 positions. What's the probability of that? And you've got to get S2 in Kp2 positions. So how do you work out those probabilities? We're invoking independence of all the emissions. So you can multiply probabilities. So what you have is S1 occurring with probability P1 to the power Kp1, because P1 is the probability with which S1 occurs, and it's happening Kp1 times. So you take it to that power, and then P2 to the Kp2, all the way up to Pn to the Kpn. OK? So this is the probability of getting a sequence like this. And what we've said is this is the only kind of sequence you're typically going to get. All the rest have very low probability of occurrence. So it must be that when I add up all these sequences, I get, essentially, probability 1. So the question then is how many such sequences are there. If a single sequence of this type has this probability, and the only sequences I'm going to get are sequences of this type effectively, and the probabilities have to sum to 1, how many sequences do I have of this type? Do you agree that it's 1 over the probability? The number of such sequences? Because the number of sequences times this individual probability has to come out to be 1. Right? The number of such sequences-- let me write this down so that you see it a little better. So the number of such typical sequences, let me call them, times the probability of any such sequence has got to be approximately 1. I say approximately because there are a few other sequences whose probabilities I would have to take account of. But this is essentially it. So the number of such sequences is 1 over this number. So the number of such sequences is P1 to the minus Kp1, times P2 to the minus Kp2, and so on. That's the number of such sequences. And essentially, these are all the sequences that I'm going to get. Well, if I take the log of this-- just visualize how the log works.
Now I've got the log of a product, so that's going to be a sum of the individual logs. I've got the log of a power of something, so the power will come down to multiply the log. This comes out to be K times H of S exactly. OK, so the log of the number of sequences is K times H of S, K times the entropy. This is log to the base 2. So the number of sequences is equal to 2 to the KH. I'm saying equal to. I should be putting approximately equal to signs everywhere, but you get the idea. So the number of typical sequences is 2 to the KH. How many binary digits does it take to count 2 to the KH things? KH, right? So what I need is-- so I just need K times H of S binary digits to count the typical sequences. So how many binary digits do I need per symbol? It's just that divided by K because I've got a string of K symbols. So I need a number of binary digits equal to the entropy. So this is a quick way of seeing that entropy is very relevant to minimal coding of sequences of outputs from a source like this. All right, now I swept a lot of math under the rug. The math that makes this rigorous exists. We don't want to have any part of it here. But for those of you who are inclined, you can look in a book on information theory. There's a nice name to it. It's called the Asymptotic Equipartition Property. OK? It's basically saying that, asymptotically, the probability partitions into equal probabilities for all these typical sequences. All right. So all that is for Huffman and its application to symbols emitted independently by a source over time. But there are limitations to this. We've been working with Huffman coding under the assumption that the probabilities are given to us. But it's typically the case that the probabilities are not known for some arbitrary source that you're trying to code for. The probabilities might change with time as the source characteristics change. So you would need to detect that and recode, if you're going to do Huffman. And then the more important point perhaps is that sources are generally not iid. The sources of interest are not really generating independent identically distributed symbols. What's perhaps more true is that-- let's see. Oh, here-- once you're done compressing your source to binary digits where each binary digit carries a bit of information, then you've got something that essentially is not correlated over time. You've managed to kind of decouple it. But at this stage, these symbols are not really independent in typical cases of interest. So one important case, of course, is just English text. You can still code it symbol by symbol, but it's a very inefficient coding. If you wanted to do it symbol by symbol, let's just ignore uppercase. You've got 26 letters plus a space. So that's 27 symbols. Well, you could certainly code that with five binary digits because that would give you 32 things to count. You can do better with a code that approaches the entropy associated with a source of this type. That would be 4.755 bits. OK, so if you ignored dependence in English text and just treated each symbol as equally likely, you'd say that that's the entropy, and you could attempt to code it with something approaching that. But actually, not all symbols are equally likely. If you look at a typical distribution of frequencies-- and we saw this with Morse already-- E is much more common than T, than A, O, I, N, and so on. So there is a distribution to this.
But you can take account of that distribution and compute the associated entropy, and you get something a little bit smaller, 4.177 instead of the 4.7-something that we had before. Because not all letters are equally likely. But this is still thinking of it symbol by symbol, not recognizing dependence over time. But English and other languages are full of context. Right? If you know the preceding part of the text, you have a very good way to guess the next letter. Nothing can be said to be certain except death and-- well, you can-- in this case, you can give me the next three letters. Right? Anyone? AUDIENCE: It's taxes. PROFESSOR: Taxes, yeah. So even though X taken in isolation has a very low probability of occurrence-- if you look at the histogram on the previous page, you see that the probability is 0.0017-- letters are not independently generated. Now, it turns out Shannon was actually one of the earliest to study this in experiments on his wife. He had her-- he presented her with bits of text from one particular book and asked her to guess the next letter and so on. And he had a 1951 paper that actually launched a lot of this, because he had developed now the tools for talking about it. His estimate was much lower than the 4-point-something. It was more in the vicinity of one bit, 1 to 1.5 bits. So there's a lot of compression possible with English text because there's this kind of a dependence here. And just to tell you what it is that we're trying to compute when we compute entropy for these long sequences of symbols, we're sort of saying what's the joint entropy of a sequence of K symbols divided by K in the limit of K going to infinity. So this is what you might call H under bar. It's not over bar because I couldn't see how to do an over bar on my PowerPoint. But it's usually an over bar in the books. But this is really the object that you would like to get your hands on. For sequential text that has context in it, this is the kind of entropy that you really would like to be working with. OK. So that brings us to an approach to coding that's really focused-- coding or compression that's really focused on sequential text. And this is the Lempel-Ziv-Welch algorithm that's in the notes. It turns out that Lempel and Ziv, or Ziv and Lempel, had two earlier papers. And then Welch improved on it in an '84 paper. And what's in blue over there is a bit of a mouthful. And each word kind of means something, so I thought I'd list it all there. Maybe I've used too many of these words-- universal lossless compression of sequential or streaming data by adaptive variable length coding. And I'll come to talk about those terms on the next slide. And it turns out that this is a very widely used compression algorithm for all sorts of files. Sometimes it's for a part of it. Sometimes it's optional. Sometimes it's combined with Huffman, but all of these things that do compression pay homage to Lempel and Ziv at least. These were also patented. Actually, Unisys owned the patent on LZW for many years. These have all expired now. But while the patents were held, it made for a lot of heartburn because there were things being done without knowledge of the existence of the patents. And then people got hit with lawsuits and so on. Jacob Ziv, again, is part of this incredible heritage from MIT of people working here in the early days of information theory. He was a graduate student here at the same time as Huffman and many other people whose names surface in all of this.
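As a quick numeric check of the 2 to the KH typical-sequence count from a little earlier (an illustrative sketch, done for a binary source): the number of length-K strings with about K times p ones is the binomial coefficient C(K, Kp), and its log base 2 comes out close to K times H.

    # Counting typical sequences for a binary source. Illustrative sketch.
    from math import comb, log2

    p, K = 0.25, 1000
    H = p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))   # about 0.811 bits
    print('K * H          =', K * H)
    print('log2 C(K, K*p) =', log2(comb(K, int(K * p))))
    # The two agree up to a lower-order (log K) correction, which is the
    # Asymptotic Equipartition Property at work.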
I was actually at an award ceremony of the IEEE, where Lempel got an award for his compression work. And people were given a whole minute for a thank you speech, a mini thank you speech. And everyone took their minute to mention this person and that and talk about the origins of the work. It's a lot to say in a minute but they managed to convey a lot. Lempel came up and said, "thank you." [LAUGHTER] It seemed kind of fitting for someone whose life is devoted to compression. [LAUGHTER] I just couldn't help but crack up there. That was-- all right. Now the interesting thing about this is that there are theoretical guarantees that, under appropriate assumptions on the source, asymptotically, this process will attain that bound. Now the thing is, the word asymptotically hides many sins. Lots of things happen at infinity that are not supposed to happen. Or lots of things happen at infinity that never happen when you're watching. So the theoretical performance perhaps is not as important as the fact that it works exceedingly well in practice. So we're going to talk a little bit about it. You've got a lab on it as well. So let me just say a little bit about what these words mean. So this is universal in the sense that it doesn't necessarily-- it doesn't need any knowledge of the particular statistics of the source that it's compressing. It's willing to try its hand at anything. OK? So it doesn't need to know the source statistics. It actually learns the source statistics in the course of implementing the algorithm. And it does that by actually building up a dictionary for strings of symbols that it discovers in the source text. So it's built around construction of a dictionary. What it then does is it compresses the text, not to things that we've seen here in Huffman, but to actually dictionary entries. So it's sort of like Morse's original idea, which was communicate the address in the dictionary rather than communicating the word itself or some compressed version of the word. So it compresses the text to sequences of dictionary addresses, and those are the code words that it sends to the receiver. It's also a variable length compression scheme. But it's interesting that it doesn't take a fixed length of symbols to varying lengths of code words. It actually takes varying lengths of symbols to fixed length code words. So it's a little bit backwards. But it's still variable length in that sense. So the way this works is that the sender and the receiver start off with a core dictionary that they both agreed on. And for our illustrations, we might say that they've agreed on the letters A through Z, lowercase a through z. So what they have is these letters or this core dictionary stored in some register. Well, actually let me show you what it might look like. So there's the register with, let's say, an 8-bit table. This is the dictionary that you have at both ends. So you can store 256 different things. And you've both agreed on what's going to go into those slots. So somewhere-- I think it's slot 97 in one of these particular codes-- you've got the letter A. And in the next position you've got B, and so on. You can store a bunch of standard symbols. So we'll consider that all the single letter symbols are already stored in designated positions in this dictionary that's known to the sender and the receiver. So if the sender just sends 252, the receiver knows what 252 refers to because they've got that core dictionary that they've agreed on.
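In code, the agreed-on core dictionary might be set up like this-- an illustrative sketch (real implementations pick their own table size and address width): single characters sit at their usual codes, so 'a' lands at 97, and the slots above 255 are left free for the multi-character strings learned on the fly.

    # The core dictionary both sides agree on before anything is sent.
    string_to_code = {chr(i): i for i in range(256)}   # sender's view
    code_to_string = {i: chr(i) for i in range(256)}   # receiver's view
    next_code = 256                # first free slot for learned strings
    print(string_to_code['a'])     # -> 97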
Some of the text here, by the way, is stuff I've said already. So I'll actually go back. And then what happens is that the source starts to sequentially scan the text that's arriving or that it's looking at, puts new strings that it's found into new locations in this table, and then communicates the address to the receiver. The magic of this-- and I mean it's fiendishly clever, very simple, but very clever-- is that the receiver can build up its dictionary in tandem with the transmitter building up the dictionary. It's just a one-step delay. So one step later, the receiver has figured out what that dictionary entry is. So the transmitter or the source is building up the dictionary, looking at strings in the input sequence, communicating the addresses of the appropriate strings to the receiver, and the receiver is building up a dictionary in parallel. Now I think the easiest way to do this-- there's discussion in the text. There are also code fragments. But I think the easiest way for me to try and do this is to actually just show you how it works on a particular sequence. And you may not get all the details all at once. I do have a little animation that I need to tweak a bit, and I'll-- well, it's not an animation, but a set of slides that'll help you understand, actually, this particular example. So I'll have that posted as well. But for now, let's just work through this and see what it looks like. And I hope I don't trip over myself in the process. I hope you'll be forgiving. And I need these two blackboards to do it. OK. And I need some colored chalk. So what I'm going to have over here is the source. And over here is the receiver. And the source wants to send a message that I'll put here-- A-B-C. This is going to look incredibly boring. But the algorithm does different things at different stages, so that keeps it interesting. And let's see 1, 2, 3, 4, 5. And then we hit a special case somewhere near the end here that is worth sorting out. Because otherwise, the fragment of the code that you see doesn't make sense. Gee, can you believe that I want to start this again here? Sorry. Let's start here. I want at least six replications of ABC. I want you to get comfortable also so you can settle into this. OK, here we go. All right. The receiver has no idea that this is the sequence. The source and the receiver both have A through Z sitting in their dictionaries at designated locations. So the source will first see the letter A and do nothing because A is in its dictionary. It doesn't want to do anything yet. Then it looks at-- it pulls in B. So now it's looking at AB. AB is not in its dictionary because it's a string of two symbols. So now it knows it needs to make a dictionary entry. I'm going to indicate dictionary entry with this. So the source is going to make a dictionary entry of AB. So what this means is somewhere in that register, in the next position from the agreed-on table, it sticks in this. And then what it transmits to the receiver is not this, but the code for A. OK? So it enters the longer fragment here as a new dictionary word and sends the address for the piece that the receiver sees. So what does the receiver get? The receiver sees A coming in and says, OK, that's the symbol A. I'm all set. All right? Now what happens is that the source pulls in the next letter. It's done with the A, so you can essentially forget about that. It pulls in the next letter.
Looks to see if it's got B-C in its dictionary. It doesn't have BC because it only has single letter entries, and it has AB. So it's got to put in BC. So it's going to put in an entry for BC. And then what it's going to transmit is the B. The receiver gets the B-- oh, sorry, the dictionary address for B. And so it knows that's the letter B. And now it enters AB in its dictionary, OK, in the next location. So you see, with a one-step delay, the AB that was in the dictionary here has ended up in the dictionary of the receiver. OK, we're done with this. We now pull in the next letter here. That's A. We haven't seen CA in our dictionary. So we make an entry for CA, ship out C. C comes here. I should say that this was done with the A. The C comes here, and the receiver knows to make an entry for BC. So with a one-step delay it's got it. OK, we're done with this. We pull in the next letter, AB. That's in our dictionary. So we keep going, all right? So this algorithm doesn't look to ship out the dictionary address every time it sees a sequence that it recognizes. If it's got this already in its dictionary, it keeps going to try and learn a new word. So it's already got AB there, so it keeps going and it pulls in C. And now that's a new word. So it's got ABC as a new entry. It ships out AB-- the address for AB, rather. The receiver gets the address for AB, which is in its dictionary. It puts the AB down there. It takes the first letter of the string that came in and appends it to the last one that it had there and gives you the CA. So you see, it's keeping up but with a one-step delay. Let's keep going. So the AB is done with. We pull in A. We've got CA. We pull in the B. We don't have CAB, so let's enter that as well. By the time we've done this example, by the way, I'm hoping you'll know Lempel-Ziv. So bear with me. All right, dictionary entry-- and now what does it send out to the receiver? AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry. AUDIENCE: C2. PROFESSOR: CA-- the address for CA, right? The address for CA. So the address for CA comes in. It decodes the CA. And so let's see. We're done with these pieces, but this one has to build up its new dictionary entry. And so what it's got is the AB sitting from before, and it pulls in the first letter. Instead of wrapping to the next board, let me start winding up again-- winding upwards. OK, so that's the new entry there, at the receiver-- one step delayed from here. OK, I pull in the C. I have BC. I keep going. I pull in the A. I don't see that. So I need BCA. I ship out the address for BC. So I'm done with these. I get the address for BC here. I decode and get BC. I combine the first letter of the new fragment with what was sitting here. So I get CAB as my dictionary entry. And I keep going. All right, it's very systematic. I'm going to keep going because there's a special case that will trip you up if you don't get to it. And we need to proceed a couple more here. OK, I pull in the B. I've got AB. I pull in the C. I've got ABC. I pull in the A. I don't have ABCA. So I enter that in my dictionary. And then I ship out ABC. OK, so you're always building a new word, entering it in your dictionary, and then the part that's already known you're shipping out, and then hanging onto the end of this to start building the new fragment. ABC arrives here. I had the BC from before. I pull in the first letter of that, and I get BCA as my new entry, which is this one. OK. Now we pull in the AB-- I mean, we pull in the B. We have AB. We pull in the C.
We have ABC. We pull in the A, we have ABCA, so we pull in the B. We ship out ABCA-- A-B-C-A, right? And now we're done with all those guys. And here comes ABCA. And I go to my dictionary, and I don't have ABCA-- big hiccup. So the reason that happened is that I'm discovering I need to send ABCA on the very next step after entering it in my dictionary on the transmitter side. And so the receiver hasn't yet had a chance to catch up. Now if you analyze this, it turns out that whenever this happens, the sequence involved has its last character equal to its first character. So looking at this, the dictionary here is waiting to build up. It's got the ABC here, and it's waiting to pull in the first letter from the sequence-- the sequence associated with this dictionary entry. It doesn't have that dictionary entry. So it can't pull in the A like it was doing all along. But if you analyze the cases under which this happens, it turns out that whenever you don't have the entry in your dictionary, the missing letter that you want to pull into your dictionary is the same as the first one in the string that's waiting to be built up. So it completes it with an A, and it's all set. Now it says ABCA, and it continues. So this happens under very particular conditions. It's a special case. If you actually look at the code that's in the notes, you'll see that while the encoding is straightforward, it's really remarkable that a short fragment like this can do the decoding. Let's see here. I don't want to do this. I did another example. Let me just say what's on this before I dispense with it. Sorry. OK. So look at what's happened. In terms of the number of things we've sent, we've only sent these addresses. And there are fewer of them than there were symbols in the original. So that's where the compression comes in. And as you get the longer strings, the benefit is higher. Actually, I'm going to pass over this and just tell you, when you look through the code fragment for decoding, this is the special case that we talked about. If the code is not in your dictionary, then do such and such. So that's the explanation. All right. And that's described in the slides. We'll put that on. I just wanted to end with a couple of things. One is actually-- LZW is a good example of something that you see in other contexts as well, where you're faced with transmitting data and you decide instead that you'll transmit the model, or your best model, for what generates that data. That can often be a much more efficient way to do things. And in fact, when you speak into your cell phone, you're not transmitting a raw speech waveform. There's actually a very sophisticated code there that's modeling your speech as the output of an autoregressive filter. And then it sends the filter tap weights to the receiver. So this kind of thing arises again and again. Sending the model and the little information you need to run the model at the receiving end can be much more efficient than sending the data. The other thing is, everything we've talked about has been lossless compression-- Huffman and LZW. You can completely recover what was compressed. But there's a whole world of lossy compression, which is very important. And we'll find ways to sneak in discussion of that as well. All right, thank you.
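As a recap of the Lempel-Ziv discussion, here is a minimal Python sketch of the encoder and decoder just walked through, on the same ABC-pattern input. The three-letter alphabet seeding and the dictionary layout are illustrative choices, not the code fragment from the notes; the decoder's special case is exactly the hiccup described above.

```python
# Minimal LZW sketch: the encoder ships addresses of known words, the decoder
# rebuilds the dictionary one step behind the transmitter.
def lzw_encode(s, alphabet="ABC"):
    table = {ch: i for i, ch in enumerate(alphabet)}   # single-letter entries
    out, cur = [], ""
    for ch in s:
        if cur + ch in table:
            cur += ch                      # known word: keep trying to grow it
        else:
            out.append(table[cur])         # ship the address of the known part
            table[cur + ch] = len(table)   # enter the new word
            cur = ch                       # its last letter starts the next one
    if cur:
        out.append(table[cur])
    return out

def lzw_decode(codes, alphabet="ABC"):
    table = {i: ch for i, ch in enumerate(alphabet)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                              # the hiccup: address not yet known;
            entry = prev + prev[0]         # the word is prev plus its first letter
        out.append(entry)
        table[len(table)] = prev + entry[0]    # one-step-delayed entry
        prev = entry
    return "".join(out)

codes = lzw_encode("ABCABCABCABCABCA")
print(codes)                               # 8 addresses for 16 symbols
assert lzw_decode(codes) == "ABCABCABCABCABCA"   # last code hits the special case
```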
MIT 6.02 Introduction to EECS II: Digital Communication Systems, Fall 2012. Lecture 16: More on Modulation/Demodulation
GEORGE VERGHESE: I want to actually spend a little time, not too much, talking about discrete-time Fourier transforms versus discrete-time Fourier series. You don't have any basis for comparison, but I think the way we've told the spectral content story this term is quite a bit simpler than in previous terms. But that leaves you puzzling over chunks of the notes and practice problems that refer to the discrete-time Fourier series and to periodic signals and so on. So I just wanted to give you a little insight into that, and then we'll go on to talk about modulation and demodulation some more. OK, so we've seen that our interest is generally in signals of finite duration because practical computation has to deal with that. And so we've got signals of this form-- 0 outside of some window, and really without loss of generality, I can take it to be some window from 0 to L minus 1. If the signal shifts in time, we know what to do with the Fourier transform. Can you hear me all right, by the way, at the back? Yeah, OK. All right, so if we've got non-zero values only over a finite range, then the computation of the discrete-time Fourier transform boils down to a simple finite computation. Now, what we'll typically do is give ourselves a little more flexibility. Since the signal is 0 outside of this interval anyway, we might sometimes allow ourselves to think of the signal as being longer, but still with zeros out here. So you might come all the way up to some P minus 1. And what we're saying is, this is the window of interest. Everywhere outside of this window, the signal is 0. Now, the signal can be 0 at various points inside here as well, but what we're saying is, outside of this interval, the signal is 0. Therefore, I only need to compute this from 0 to P minus 1, all right? And the nice thing is, it turns out that you can actually recover the time-domain signal from the samples of the DTFT through the formula on the right side. So what we're doing is we're actually computing the DTFT just at isolated points on the axis between minus pi and pi, just P, capital P, points. Or you can think of them as points on the unit circle that correspond to each of those exponentials that appear in the Fourier transform definition. And we then recover the time-domain signal just from those samples, OK? And really what's driving this is the fact that the signal is 0 outside of a finite window. OK. We'll also typically-- and if you look in the books, you'll see this as well-- this notation often gets simplified, so X of omega sub k gets simplified to just X sub k. It's the k-th spectral coefficient. All right, so all that is good. And we have this nice algorithm for computing things, which is the fast Fourier transform. So we talked about how that significantly reduces computation. Now, there are properties of these formulas that you can explore, and I have some listed here. I'm not going to go through them. They're essentially the same properties we've seen for the DTFT. I want to focus more on this formula for reconstruction of the time signal from the spectral coefficients. By the way, in a previous writing of this formula, I had written the upper limit as P over 2.
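In symbols, reconstructing the pair of formulas on the slide from the surrounding discussion (any P consecutive values of k give the same result, for instance minus P over 2 through P over 2 minus 1):

$$X(\Omega_k) = \sum_{n=0}^{P-1} x[n]\, e^{-j \Omega_k n}, \qquad \Omega_k = \frac{2\pi k}{P},$$

$$x[n] = \frac{1}{P} \sum_{k=0}^{P-1} X(\Omega_k)\, e^{\, j \Omega_k n}, \qquad 0 \le n \le P-1.$$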
It's actually P over 2 minus 1, so I'll fix that in the earlier slides. OK, so what you're guaranteed is that if you apply that formula, you will recover every signal value in this window of length capital P. But what happens outside of that window? Well, if you look at this expression, is the right-hand side here periodic? You should suspect that it is because of the e to the j something n that's there, right? If you look at the definition of omega sub k and look at each of these terms, it turns out that each of these will repeat periodically with period 2 pi. Sorry, with period-- let's see. I've said it badly. This whole term will repeat when n increases by capital P, all right? So let me write it down. And why is that? Well, omega sub k is 2 pi k over P. So if you increase time by capital P, you're going to increase the exponent by an integer multiple of 2 pi, and you've got the same exponential back again. And you can do this for any integer multiple of capital P. So what that tells you is that the expression on the right-hand side is actually going to repeat periodically outside of this interval. So it's fine to use this formula to recover the values in this window, but if you start to evaluate this formula outside of that window, you're going to start getting this whole thing repeated periodically, so you're going to get-- at this point you'll get-- and so on, OK? So the formula doesn't know what to do except to replicate periodically. It's up to you to know that this formula is no good outside of this window. All right? There's another way to think of it, though, which is that this formula gives you a nice, compact representation for a periodic signal. So if you started off with a periodic signal, here's a way to represent it as just a sum of capital P exponentials, and that's what a Fourier series is. So you've seen in 18.03 or other places, in continuous time I imagine, that if you had a periodic signal you could represent it with a Fourier series. This is actually a Fourier series for this periodic signal. But if you know that your action of interest is all in this finite interval within one period, then you can actually use the Fourier series just to study what goes on in that one interval without worrying about what's outside. And that's really what we've done this term, is we've kind of ignored periodic signals. We've said all the attention is in a finite interval. Within that interval, we have this Fourier representation. It's easily computed by the FFT, and everything works nicely. So just to give a concrete illustration of how we end up applying this in a particular situation that should be familiar, if I had an input going into an LTI system producing an output, and if the input was non-zero only from 0 to, let's say, some n sub x, and if the unit sample response of the system was non-zero only from 0 to n sub h, is there a particular interval of time that you can guarantee for me will contain all the non-zero values of the output? I want you to find for me an interval outside of which the output is guaranteed to be 0. Anybody? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Yeah, good. There are many ways to think of this. One is to say, well, the input value at time 0 fires off a unit sample response that goes from 0 to n sub h, and then the input value at time 1 fires off a unit sample response that starts one time step later, and so on. So each of these fires off a unit sample response.
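A quick numerical check of both points just made-- the reconstruction recovers the window 0 to P minus 1 exactly, and evaluated outside it, it simply repeats with period P. This is a sketch with arbitrary signal values, not code from the notes:

```python
import numpy as np

L, P = 4, 8
x = np.zeros(P)
x[:L] = [1.0, 2.0, -1.0, 0.5]                    # nonzero only on 0..L-1

Xk = np.fft.fft(x)                               # DTFT samples at 2*pi*k/P

def recon(n):                                    # the reconstruction formula
    k = np.arange(P)
    return (Xk * np.exp(2j * np.pi * k * n / P)).sum().real / P

print([round(recon(n), 6) for n in range(P)])    # recovers x on the window
print(round(recon(3), 6), round(recon(3 + P), 6))  # equal: period P outside
```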
Well, you've got inputs extending from 0 to n x, and so you're going to have an output that extends from 0 to n x plus n h, all right? So you're guaranteed that all the action of interest happens in this finite interval. And given that that's the case, you can actually-- whoops, what happened? You can actually do this kind of spectral representation, use the FFT, and all of that. You're going to just work on a finite interval, 0 to P minus 1, defined by that or greater than that, OK? So this is actually one of the most frequent uses of the FFT. It's to study systems where all the action happens in a finite window and you know a priori what the length of that window is, and you can then do all your computations there. And you never look outside that window because you've already guaranteed that everything of interest happens there. But when you read the notes, you'll find it's essentially the same story, but when you talk about Fourier series you're actually talking about the whole signal, the periodic signal, all right? One bit of notation, also, as you're reading the notes, just to go back a second here. We've been working entirely in terms of these samples of the DTFT. When you're thinking of Fourier series, when you're thinking of this as a Fourier series, it's typical to write X omega k over P as just the Fourier coefficient, A sub k. So you'll see in the notes an A sub k. That's just a normalized version of the Fourier transform sample, OK? All right. That's as much as I wanted to say on this, so let's get back to talking about modulation and demodulation. If you have questions on what I talked about, you can bring them up in recitation. All right, so just to review where we are. We've got some signal, x of n at baseband. Baseband just means that its frequency content is centered around zero. You've not done any modulation or shifting yet. You've been allotted some part of the frequency axis to do your transmission in because someone's told you, perhaps, that the medium that you're going to use can only transmit in that range, or the FCC has decreed that you're only going to use that region. So you want to send that signal somehow in another frequency band. So modulation was a process by which we converted up to some carrier frequency, and then demodulation was what you did with the receiver to get back down. So just to look at that in a little more detail. This is the modulation process we talked about last time. You've got a time-domain signal, your information signal. You multiply it by the cosine to get an amplitude-modulated transmitted signal. So t of n is the signal that you transmit, OK? There are other names for this. This process of multiplying a signal by a cosine at a particular frequency is referred to as heterodyning. That's a term from the earliest days of Amplitude Modulation, I think invented by Fessenden, who also invented AM, and of course it's specifically amplitude modulation for us. All right, so just to think spectrally, we had a simplified version of this picture last time, but let's first assume that this signal has some spectrum, which is shown by a cartoon here. I'm assuming a real signal. So we know that the spectrum has a real part that's even and an imaginary part that's odd, and that's what's shown for you here on this figure, OK? So we're going to track the spectrum of the signal by tracking the real and imaginary parts separately because the spectrum is in general a complex function of frequency. 
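Here is a minimal sketch of that use of the FFT, with arbitrary signal values: with x nonzero on 0 to n_x and h on 0 to n_h, pick P at least n_x plus n_h plus 1, and the frequency-domain computation matches direct convolution with no wraparound.

```python
import numpy as np

x = np.array([1.0, 0.5, -2.0])                   # nx = 2
h = np.array([1.0, 1.0])                         # nh = 1
P = len(x) + len(h) - 1                          # = nx + nh + 1 = 4

y_fft = np.real(np.fft.ifft(np.fft.fft(x, P) * np.fft.fft(h, P)))
y_direct = np.convolve(x, h)

assert np.allclose(y_fft, y_direct)
print(y_fft)                                     # all the action in 0..nx+nh
```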
We've seen last time what happens when you multiply by the cosine. You take the spectrum, and you replicate it at the locations of the carrier. So if your carrier frequency is omega c, here's your frequency band, going from minus pi somewhere there to plus pi somewhere here. You've got a plus and minus omega c, the carrier frequency. So what happens when you modulate is you take the spectrum and you plunk it down on plus omega c and minus omega c, and you scale by 1/2, all right? So if the real part had amplitude a before, it now has amplitude a over 2, and the imaginary part, similarly. I haven't drawn these to scale, but hopefully the labels are clear enough. OK, so the modulation is now simple when you're thinking of what it does in the frequency domain. Now, it is simple, but this picture is a little deceptive, perhaps, because I made an implicit assumption here. Otherwise, the picture would be a bit messier. What am I relying on here to get this simple picture? Yeah? Sorry? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Centered? What's centered at zero? Oh, the spectrum here? OK, yeah, the spectrum centered here gives me a simple picture, yeah. AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: OK, exactly. You see, when I drew this here, you can still recognize the triangles that came from over there. But if the baseband signal had a frequency content that extended way over, then the replication that I have here would actually leak into the replication that I have there, and I'd get a more complicated picture, all right? So if you want the simple picture, you actually have to limit the frequency content of your baseband signal, OK? You can see here, if the signal only extends to omega c on either side, then I'm OK. The two replications will not smear into each other, all right? So we need a limit on the frequency content. Now, the specific limit that you have depends on the application. We'll see later, when you do frequency-division multiplexing, where you're trying to put many different signals in the same general frequency band, that the restrictions might be different. But the basic idea is this: you want any replication of your signal, if you're going to extract it later on downstream somewhere-- you want the replication to not be corrupted by images of it somewhere else, or images of some other signal. So actually, the example that I showed you last time wasn't perfect in that regard, right? Remember, this was the spectrum of our typical baseband signal. We had 256 samples, like this, and then 0's. And we looked at the spectral content. It was given by a sinc-like function, and this is the spectral content magnitude after modulation, and therefore it's the two replicas. I'd modulated this onto a 1,000 Hertz carrier. So this is what we saw. And you can see here that there's funny stuff going on in here because the tails of the two replications are merging with each other, OK? So it's not perfectly symmetrical around here. And actually, these sinc-like functions decay very slowly, so even though it won't be visible to your eye, there's a considerable amount of this that's actually due to the replica out here, OK? So this case doesn't quite satisfy that band-limited condition. If you shape your pulse a little bit more carefully-- for instance, if you had more rounded edges-- then you can pull in the frequency content, and you might do a better approximation to keeping the replicas separate. Or you might use a higher carrier frequency.
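Before continuing, a quick numerical check of the replicate-and-scale picture on the P spectral samples. This is a sketch with an arbitrary pulse, and the carrier is chosen at an exact bin frequency, Omega_c equal to 2 pi k_c over P, so the shifted replicas line up exactly:

```python
import numpy as np

P, kc = 256, 64
n = np.arange(P)
x = np.where(n < 32, 1.0, 0.0)                   # baseband pulse
t = x * np.cos(2 * np.pi * kc * n / P)           # modulated signal

X, T = np.fft.fft(x), np.fft.fft(t)
k = np.arange(P)
T_pred = 0.5 * (X[(k - kc) % P] + X[(k + kc) % P])   # replicas at +/- kc

assert np.allclose(T, T_pred)                    # scaled-by-1/2 replication
```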
A higher carrier will pull them apart and give less interference. But it's certainly an issue that you need to think about. OK. So what happens at the receiver? We already saw this briefly at the end of lecture. If what you receive is what you transmit-- in other words, if the channel doesn't distort the signal-- then extracting the x of n is easy. We said what you do is you basically do the same heterodyning again, right? You take the signal that comes in, multiply it by cosine of the carrier frequency. That's your signal after demodulation, and a little bit of algebra shows that you actually have your original signal of interest, and then something that's your original signal modulated by a cosine at twice the carrier frequency. So now there's some hope that you can actually pull these things apart. All right. So one question, of course, is what does the spectrum of this look like, and we'll look at that. And then the other question is, again, what constraint on the bandwidth of the signal that you originally sent from the transmitter is needed to recover it? So let's look at the spectrum of the received signal first. We're assuming the channel is not distorting and that we don't have noise, so what's transmitted is also what's received. So here is the spectrum of what's received. It's exactly the spectrum I showed you earlier, right? It was the baseband spectrum, but replicated at plus omega c and minus omega c. So this is what comes in off the channel, assuming no distortion. And I'm going to multiply it again by a cosine at the carrier frequency. So what is it that I have to do? I take this entire spectrum, plunk down a copy centered at plus omega c, and another copy at minus omega c. Because my demodulation, just to remind you, is multiplication by cosine omega c again. Well, we know what that does in the transform domain, so here is the picture. And the piece that we want is the center piece. So what we need to do is filter it out of what's resulted from the heterodyning. So what kind of filter and what kind of cutoff frequency would you want? Any suggestions? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Sorry, I didn't hear where that came from. Yeah? AUDIENCE: Lowpass filter? GEORGE VERGHESE: Lowpass filter. So, for instance, an ideal lowpass filter would be great, right? If you had a filter with a frequency-domain characteristic that was perfectly flat in some region and then cut off, let me say, at some frequency omega 0, so something like that-- well, actually, we want a factor of 2 to compensate for the demodulation process, if we want to get exactly the same thing back. So this would be in the frequency domain. Ideal lowpass filter. And we know how to get approximations to this, right? Because this is not really implementable. If you wanted to implement this, what kind of unit sample response would you need? A sinc function, right, but extending infinitely in both directions. But we could truncate that sinc, and we could shift it forwards in time to get a causal approximation to this filter. And the resulting frequency response, if you compute it and plot it, won't look too different from this. If I plotted the magnitude, you'll get something that's a plausible approximation to this lowpass filter. But what cutoff frequency would you want? What omega 0 should you pick? Any suggestions? Anybody? We're trying to extract this piece. So omega c would be a pretty safe choice, right?
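To write out the demodulation algebra from a moment ago: assuming an undistorted channel, so r[n] = x[n] cos(Omega_c n),

$$d[n] = r[n]\cos(\Omega_c n) = x[n]\cos^2(\Omega_c n) = \tfrac{1}{2}x[n] + \tfrac{1}{2}x[n]\cos(2\Omega_c n).$$

The first term is the baseband piece the lowpass filter has to extract; the second sits near plus and minus 2 Omega_c and gets rejected.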
Omega c would be one that passed everything here, and would basically extract any signal that satisfied that initial constraint that we mentioned. So if your baseband signal originally extended from minus omega c to plus omega c, then a lowpass filter that extracted that would do fine without pulling in any of the replication here. So omega c is certainly fine. But if the signal that you transmitted at baseband actually had a narrower bandwidth than that, then you might just want to get away with a lowpass filter with a lower cutoff. Can you think of why you might want to do that? Is there anything that motivates you to use as small a bandwidth as possible? Yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: To limit the amount of noise, right? We've suppressed noise in this whole story. So if you're going to build a filter like this but all the interesting action is over here, well, all the rest of the filter is doing is letting other signals get in, especially noise, and then that's going to add to the output and make things more difficult. So you'd really like to get the smallest bandwidth that suffices to pass the signal part of what you're interested in but keep out the noise, all right? But if you didn't know anything about the signal and its spread, or you believed that the spectrum extended really from minus omega c to plus omega c, then you would want to make omega naught equal to the carrier frequency, right? But you've got to look at your particular situation and see what it is you're going to do. OK, so this is the picture that we have at demodulation. You're going to take the received-- well, no, sorry. This is the modulation part. Ah, no, it's not. Sorry, this is not well-drawn. That shouldn't be x sub n. That should be the received signal, OK? So the received signal comes in, gets multiplied locally by cosine, gives you the demodulated signal, and then you have the lowpass filter, so I'll change that before I post it. Simple enough? OK. Now, there are some problems that you can run into, and doing all of this in the lab you actually see that very quickly. So let me actually put this on the board here. What we said is that our demodulated signal is going to be our received signal times cosine omega c n, right? And if we assume no distortion in the channel, this is x of n cosine omega c n. But there's a bit of a problem here, which is that, even if you've been told what carrier frequency your sender is going to use, you might not know exactly what phase. It's typically the case that you don't know what the phase is on this cosine. So you know omega c, but you don't know exactly the phase, which means that your local carrier-- your local oscillator, or your local carrier multiplication here-- will end up having some offset relative to the carrier used at the transmitting end, OK? And so the question is, if we track this through, what happens through the demodulation process? So that's really what this is trying to do. So we're saying d of n is your received signal times the local cosine, but the local cosine that you're heterodyning with at the receiver doesn't know exactly what phase was used at the transmitter, so you've got to assume that there is going to be some offset. So this is actually what the multiplication is. And now you use a simple trig identity. It's the cosine of something times the cosine of something. That splits into this. And so what we're actually going to get from the heterodyning at the receiver is x of n times all of this, OK?
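As an aside before finishing the phase-offset algebra, here is a sketch in code of the basic receive chain with the kind of truncated, shifted sinc filter described above. The filter length, window, and frequencies are illustrative choices, not the lab's:

```python
import numpy as np

P = 4096
n = np.arange(P)
omega_c = 2 * np.pi * 1000 / 8000            # carrier, as in the 1 kHz example
x = np.sin(2 * np.pi * 40 / 8000 * n)        # baseband, well inside the cutoff

r = x * np.cos(omega_c * n)                  # received (undistorted channel)
d = r * np.cos(omega_c * n)                  # = x/2 + (x/2) cos(2 omega_c n)

ntaps = 201                                  # truncated sinc, odd length
mid = (ntaps - 1) // 2
m = np.arange(ntaps) - mid
h = 2 * (omega_c / np.pi) * np.sinc(omega_c * m / np.pi)  # cutoff omega_c, gain 2
h *= np.hamming(ntaps)                       # taper the truncation

y = np.convolve(d, h)[mid:mid + P]           # compensate the (ntaps-1)/2 delay
print(np.max(np.abs(y[ntaps:P - ntaps] - x[ntaps:P - ntaps])))  # small error
```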
So what we're going to get is-- I should write it down here. We're going to get 0.5 x of n. And then there are two pieces here. There is the cosine phi, and then there is the cosine 2 omega c n minus phi. OK? When you don't have any phase error, the cosine phi term is 1, but now it's reduced from that. The rest of the process is the same. You're going to do some filtering to get rid of this piece, the double frequency piece, and you're going to pull out just what you're interested in. Except now it's no longer x of n itself. It's x of n multiplied by this cosine. So can you see that this could lead you into trouble? What's the worst case here? Sorry, worst case-- yeah? AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Yeah, if phi is pi over 2, then cosine phi is 0, and you get nothing, all right? So if you're unlucky in the offset between your local sinusoid and the sinusoid that was used at the transmitter, you could end up with nothing, OK? You can also get the negative of what was sent, and so on, so you can go through the whole set of possibilities there. So the case of a phase error of pi over 2 corresponds to looking at a signal that was transmitted on a cosine and multiplying it by a sine, OK? And you can think through that in the spectral setting as well. Maybe you'll do some of this in recitation, or maybe you already have. But when you multiply by a sine in the spectral domain-- so if you've got your received signal, r of n, and now you're multiplying it by sine omega c n, right? Sine omega c n, well, that's 1 over 2j times e to the j omega c n minus e to the minus j omega c n, right? So in the spectral domain, what happens? Well, you've got r of n multiplied by 1 over 2j times this first exponential. In the spectral domain, that does a shifting and a multiplying by 1 over 2j, and then you've got this term doing the same kind of thing. So you're going to have a shift of the spectrum of r of n in the frequency domain and a scaling by 1 over 2j. So if you think through what the shifting and scaling does, you see that it's a little bit more of a complicated picture than what you had over here. Well, the real part gets replicated around the 2 omega c region, but flipped over. The imaginary part gets carried over intact. And then the replications around minus 2-- sorry, around minus omega c, that is-- the imaginary part gets flipped over, and the real part gets carried over directly. Except what was real before becomes imaginary now. What was imaginary before becomes real now. So you can track through all of that. And it just comes from applying the standard DTFT results to the spectrum of the product of r of n and this, OK? But the interesting thing is now, the two replications, when you sum them up, will leave you with nothing at 0 because this piece here will cancel out exactly with that piece there. So if you think through in the spectral domain what's going on, you'll understand exactly that if you've put your signal on the cosine and you demodulate with a sine, you're going to get nothing in that lowpass region, OK? So that's just the same result but seen spectrally. All right, so that's uncertainty between the phase of the transmitter and the phase of the receiver. Here's another thing that has a similar effect, which is an unknown delay on the channel, OK? So at the transmitting end, you've got your baseband signal multiplying the carrier. This is what's transmitted. But then you have a time delay, let's say D samples, capital D samples.
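Before going on with the delay case, for reference, the identity in play is cos A cos B = (1/2)[cos(A - B) + cos(A + B)], which with A = Omega_c n and B = Omega_c n - phi gives

$$x[n]\cos(\Omega_c n)\cos(\Omega_c n - \phi) = \tfrac{1}{2}x[n]\bigl[\cos\phi + \cos(2\Omega_c n - \phi)\bigr],$$

so after lowpass filtering you keep (1/2) x[n] cos(phi), which vanishes at phi equal to pi over 2-- the cosine-sent, sine-demodulated case just described.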
So then what's received is actually t of n minus D, and that's what's going to get multiplied by the local carrier. And I'm assuming for now that you have the phase right, so we can bring them both together later. That is, I'm assuming now there's no phase offset locally, but there's an unknown delay on the channel. And you can see it's going to be the same kind of thing. You've got a cosine times a cosine, and the arguments are slightly different from each other. And you use the same trig identity, and what you find is the output of this process is not just the input delayed, which is what you would like to get ideally-- you aren't going to compensate for the delay with a causal filter-- it's also going to be scaled. And it's going to be scaled by an unknown amount that depends on that delay, all right? So it's the same kind of thing that happens. So the question is, how do we get around this? And here's one idea that works well, and which you're actually exploring in the lab. Which is to use both the sine and the cosine, OK? So use both the sine and the cosine to demodulate. If you go completely bad on one channel because you've got the phase completely wrong with the cosine, you're going to do all right on the sine channel. If you do completely bad on the sine channel because you've got the phase wrong, then you're going to do all right on the cosine channel. So at least one of them will work, and more typically both of them will work a little bit, and what you'll then do is combine the two outputs. OK? So you're going to have the signal coming in. There's a cosine multiplication and the sine multiplication, and then the lowpass filtering. We refer to this as the in-phase component, assuming that you were modulating on a cosine, and this is referred to as the quadrature component. So there's in-phase and quadrature. "Quadrature" just means at right angles. So these are the I and the Q components. And if you work out what these are, assuming now that there's both a time delay and a phase offset, you can see that the in-phase component will be the signal that you want, but multiplied by cosine phi. The quadrature component will be the signal that you want, but multiplied by sine phi. And from there, it's not so hard to imagine that you could actually get back to the signal of interest. And here's one way to do it that works fine if you've got on-off signaling. So what you would do is, here's the I. Here's the Q. And I've just represented it graphically here. This is typical to do. So here is the I component. Here is the Q component. And you could take the root sum of squares to basically get rid of that sine phi and cosine phi term, right? So what that's going to give you is the absolute value of x of n minus D. So you can certainly get back the absolute value of what was used to modulate the carrier, and that may be all you need. If you have on-off signaling, that's all you need. If your modulating signal never goes negative, its absolute value is the same as the signal, so this is fine. So what you will discover is that you get some signal out there, and you're looking at its length. When the length is non-zero, you say a 1 was sent. When the length is 0, you say a 0 was sent. In the presence of noise, of course, it won't be exactly at the origin. There might be some cloud of points there. And similarly for the 1 level. What if you were interested in the polarity, though? So suppose it mattered to you whether the signal was positive or negative.
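Here is a sketch of that I/Q processing, before the polarity question is picked up. The crude moving-average lowpass and the particular offset are illustrative choices, not the lab code:

```python
import numpy as np

n = np.arange(8000)
omega_c = 2 * np.pi * 1000 / 8000
phi = 1.2                                        # unknown phase at the receiver

x = 0.5 * (1 + np.sign(np.sin(2 * np.pi * 5 / 8000 * n)))   # on-off baseband
r = x * np.cos(omega_c * n - phi)                # received signal

def lowpass(v, width=49):                        # moving average as a crude LPF
    return np.convolve(v, np.ones(width) / width, mode="same")

I = 2 * lowpass(r * np.cos(omega_c * n))         # ~ x[n] cos(phi)
Q = 2 * lowpass(r * np.sin(omega_c * n))         # ~ x[n] sin(phi)
env = np.sqrt(I**2 + Q**2)                       # ~ |x[n]|, independent of phi

print(np.round(env[2000:2005], 2))               # near 1 where a 1 was sent
```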
Well, you could then just plot the point and not take the magnitude. So you'll get something that looks a bit more like this, OK? So what you'll have is, when a 1 is sent, perhaps you'll get that value in the absence of noise. When a minus 1 is sent-- sorry, when a 0 is sent, corresponding to minus 1-- this is what you'd get. This is-- sorry, I should have said that-- this is assuming bipolar signaling, right? Bipolar signaling is the case where you're interested in the sign of the signal. You use plus 1 to send a 1. You use minus 1 to send a 0, OK? So you get some diagram like this. The only problem here is, if you've got uncertain phase and delay, you actually don't know which of these two points corresponds to the plus 1 and which corresponds to the minus 1. So there's that additional ambiguity that needs to somehow be resolved, and there are different procedures you might use. You could, for instance, have some preamble with a sign that's agreed on and use that as a basis for figuring out which is a plus and which is a minus. And there are other ways of doing it as well, such as what's called differential coding, where basically it's not where the point is, but whether it flips over to the other side or not, that signals a bit. And so what you could do is, to transmit a 1, you'll step the phase by pi, and that can be detected, and to transmit a 0, you don't change the phase in the next bit slot. So if from one bit slot to the next the dot stays there, you know you've just received a 0. If from one bit slot to the next it flips over to the other side, you know you've just received a 1, OK? So even with the ambiguity, if you change the way you code at the sending end, you can actually compensate for this, as in the sketch below. OK, now playing this game with sines and cosines can actually also be done at the transmitting end, and we haven't explored that in class, but it's something that you could think about. So we've been talking about taking the samples and multiplying them onto a cosine carrier. You could have another bitstream whose samples you multiply onto a sine carrier. And you can just add them together and send them over the channel. At the receiving end, you multiply by the cosine and then filter, lowpass filter-- that'll pull out only the first stream in the ideal case. Multiply by a sine and filter, you'll get exactly the second stream. So you can simultaneously send two streams on a given carrier using this scheme, this method, OK? So depending on how you make out in the lab in problem set 6-- I don't know how many simultaneous carriers you're getting, but whatever you end up with, you can actually try now to transmit twice as much on each carrier by using this kind of a scheme. Could be fun. All right, so this kind of bipolar signaling is what's called phase-shift keying-- I didn't explain that really, did I? We've said it before, but-- we've talked about bipolar or phase-shift keying. All that we mean is, if you get a signal with voltage plus 1 and minus 1 for your bit 1 and bit 0, by the time you modulate, what you're going to end up doing is sending a burst of carrier here with the plus 1. And then when you come to the minus 1 region, you're going to multiply that carrier by minus 1, so you're going to suddenly step the phase, right? So amplitude modulation with an amplifier that switches between these levels can also be thought of as phase shifting. So you're keying between a phase of 0 degrees and a phase of 180.
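A minimal sketch of that differential scheme on the symbol signs, showing that even a worst-case polarity flip on the channel doesn't matter; the bit values are arbitrary:

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 1])

sym = [1.0]                                   # agreed-on reference symbol
for b in bits:
    sym.append(-sym[-1] if b else sym[-1])    # step the phase by pi to send a 1
sym = np.array(sym)

rx = -sym                                     # worst case: channel inverts all
decoded = (rx[1:] * rx[:-1] < 0).astype(int)  # a flip between slots means a 1

assert np.array_equal(decoded, bits)          # the ambiguity drops out
```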
So this kind of scheme is used all over the place. And I actually have a slide that lists a whole bunch of schemes that you're familiar with, that you see every day in all sorts of literature-- you know, 802, and Bluetooth, and Zigbee, and so on. In all of these standards, there's some piece of it, or some domain or some regime, in which what's going on is some variant of what we've learned here. And they get fancier and more sophisticated, but you really have the key ideas here. OK, let's now talk about putting multiple signals on a given piece of the spectrum, OK? This is exactly the situation you have in your lab now. You've got the speaker that can transmit in a certain band, and you're trying to put multiple simultaneous signals on it by using different carriers. So this is what's called Frequency-Division Multiplexing, or FDM. And the idea is very simple. You've got three signals here in this illustration: the blue signal, red, and green. Pick a carrier frequency for each of them. Do the modulation, and then just add them on the channel. If you've got a linear medium, then the signals will superpose, so what's received is just the sum of these, and now you can do the same kind of thing. And what we're relying on here is, again, the heterodyning principle. Whoops, sorry. OK, so if you've got frequencies omega red, omega blue, omega green in the signal that you're receiving, and you multiply this with some local sinusoid of frequency omega 0, where will your various spectra be centered in the result? So the way to think of it is, all the sums and differences of frequencies here will now appear. So you get omega 0 plus omega r. You get omega 0 minus omega r, and similarly for all of these. OK, it's the same thing that you saw with a single transmitted signal, except now it's a more elaborate constellation. It's actually this that's being transmitted. So at the receiving end, you'll pick a particular frequency to multiply the incoming signal by. The result will have pieces of the spectrum centered at each of these. So if you want to center one of these in your lowpass filter, how should you pick the local oscillator? If you want to tune in a channel, what is it that you want to do? You want to get one of these center frequencies to sit right in the window of your lowpass filter. So what you'll end up doing is pick your local oscillator frequency to be the carrier frequency of the station you're interested in, or the signal you're interested in, OK? So it's the same idea. You have a lowpass filter, and you're using heterodyning to shift the piece of the spectrum of interest into the passband of the lowpass filter. All right? Now, what about the bandwidth of the lowpass filter? What should it be? So now it depends on how closely spaced your carriers are, right? So for instance, if I ended up heterodyning such that my blue signal came into the window of interest-- and I've got the red spectrum sitting somewhere here and the other piece sitting somewhere here, OK? I've shifted things so that this is at 0. What's this frequency now? Omega r minus omega-- what was it? Blue, right? So I've basically shifted these frequencies. So this is at 0, sitting in my lowpass filter. Use a different color. And I want to reject everything else. So how should I pick that lowpass filter? Well, presumably, you want the cutoff to lie between these two frequencies. So you want half the distance to the nearest carrier, right? Half the distance to the nearest carrier frequency.
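Here is a sketch of that whole FDM chain, with made-up tones and carriers 1 kHz apart, so the lowpass cutoff is set to 500 Hz, half the spacing:

```python
import numpy as np

fs = 16000.0
n = np.arange(16000)
carriers = [2000.0, 3000.0, 4000.0]              # 1 kHz apart
tones = [50.0, 80.0, 110.0]                      # one baseband tone per station

channel = sum(np.cos(2 * np.pi * f0 / fs * n) * np.cos(2 * np.pi * fc / fs * n)
              for f0, fc in zip(tones, carriers))

def lowpass(v, cutoff_hz, ntaps=401):            # windowed-sinc lowpass
    m = np.arange(ntaps) - (ntaps - 1) // 2
    w0 = 2 * np.pi * cutoff_hz / fs
    h = (w0 / np.pi) * np.sinc(w0 * m / np.pi) * np.hamming(ntaps)
    return np.convolve(v, h, mode="same")

lo = np.cos(2 * np.pi * carriers[1] / fs * n)    # tune to the 3 kHz station
tuned = 2 * lowpass(channel * lo, 500.0)         # ~ the 80 Hz tone alone
print(np.round(tuned[4000:4004], 2))
```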
OK, so you can think through these. And notice how all our thinking has been in the spectral domain. Thinking in the frequency domain clarifies this whole thing. You really would not have been able to do what you're doing thinking entirely in the time domain. Now, all of this comes to us, really, from-- let's see. I had that already? Yeah. All of this comes to us from a rich legacy in AM radio. We're not using this for transmission of analog signals by amplitude modulation, but it's the same principle. So these principles were actually studied from the early 1900s. And the AM radio that we see around us now is actually set up exactly to do the kind of thing we're talking about. So you've got some frequency spectrum that the FCC is allowing you to use. Different stations are given different carrier frequencies that they can operate on. They're also instructed on what bandwidth they can occupy. So basically, the carriers are 10 kilohertz apart, the way that the stations are assigned. So if you're transmitting from your station, you'd better lowpass filter what you're sending out to 5 kilohertz before you transmit it, because if you don't, you're going to interfere with a nearby station-- assuming there is a station in the same geographic area that's been assigned to a neighboring carrier frequency, all right? So all of these issues come in. Another thing that actually is-- what did I do here? I think I mashed together two slides. But the other thing is, it turns out that for AM radio, at nighttime, because of the way radio propagates, the signal can travel much further. So these stations are asked to reduce their signal strength, the carrier strength, at nighttime so that they're not interfering with nearby stations. The stations that they would not interfere with during the day they could interfere with at night because propagation characteristics turn out to change. So all of this business of your signal not interfering with your neighbor's carrier or your neighbor's portion of the spectrum-- all of that ends up being important. OK, I think we've probably said as much as we want to say about the signals part of this class. One of the things about 6.02 is that those of you who master it come out knowing the subject better than any of us that teach it, because there's none of us that's able to teach the course right through start to finish. Well, that's not entirely true. Harry knows how to do it. Chris Terman knows how to do it. The recitation instructors hang in there for the whole term, but it's very hard for one person to do that. So I'm done. I'm going to be sitting there from the next lecture onwards, so thank you all for your attention, and thank you.
MIT 6.02 Introduction to EECS II: Digital Communication Systems, Fall 2012. Lecture 1: Overview, Information, and Entropy
GEORGE VERGHESE: The course is called Digital Communication Systems. So I wanted to say a bit about what that means. And the easiest way to do that is to contrast it with analog. What's analog communication? Well, in analog communication, you're typically focused on communicating some kind of a waveform. So you've got some continuous waveform, typically, an x of t, maybe the voltage picked up at a microphone at the source, and you want to get it across to a receiver. And it's under this umbrella that you have things like amplitude modulation, frequency modulation, and so on. These are all schemes aimed at transmitting a continuous waveform of this type. So in amplitude modulation, for instance, what you'll do is you'll take a sinusoidal carrier. The carrier carries the information about the analog waveform, and basically, it's a high-frequency sinusoid whose amplitude is varied in proportion to the signal. I haven't drawn it too well. It's supposed to be constant frequency and just the amplitude varying. So this is something of the type x of t cosine 2 pi fc t, for instance. It's a sinusoid of a fixed carrier frequency with the amplitude varying slowly. In FM, what you do is you have a fixed-amplitude waveform, but you vary the frequency. So what you might do is have high frequency in this part, and then when the signal goes low, the frequency gets lower. And then it gets higher where the signal is high. So there's a modulation of the frequency, but the amplitude stays fixed. The good thing about this is you can be transmitting at full power all the time, and the information is coded onto frequency, whereas this can tend to be more susceptible to noise. But the focus is on an analog waveform and transmitting that. Now, in digital communication the focus changes. So in digital, we think in terms of sources with messages. So we have messages. There's a source of some kind that puts out a stream of symbols. So at Time 1, there's some symbol emitted. At Time 2, there's some other symbol, symbol, symbol. And these are all heading to the receiver. So already we're thinking of a clocked kind of system. We're thinking of symbols being transmitted at a particular rate. We're thinking of these discrete objects rather than continuous waveforms. And the focus is then on getting a message across as opposed to getting a waveform across with high fidelity. And that turns out to actually be a big shift in perspective. These symbols, then, will often get coded onto-- for instance, if the symbols originally were, let's say, an A, B, C, D-- coding the grades in a class-- you might want, when you're transmitting them, to adapt those symbols to what your channel is able to take. And maybe your channel is one that's able to distinguish between two states, but maybe not between four states, so you might want to code these onto a channel that-- well, onto strings of 0's and 1's, so that you can impress this on a channel that can respond to just two states. So you might have a coding step that takes the original symbols and puts out a stream of 0's and 1's. And then you've got the task of decoding the message. The channel might corrupt these streams, and that's another thing that you have to deal with.
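A minimal sketch of that coding step: four source symbols (the grades) mapped onto strings of 0's and 1's so a two-state channel can carry them. The fixed two-bit assignment here is an arbitrary illustrative choice:

```python
CODE = {"A": "00", "B": "01", "C": "10", "D": "11"}
DECODE = {v: k for k, v in CODE.items()}

def encode(grades):
    return "".join(CODE[g] for g in grades)

def decode(bits):
    # read the stream back two bits at a time
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

msg = "ABCD"
sent = encode(msg)          # "00011011"
assert decode(sent) == msg  # recoverable, if the channel doesn't corrupt it
print(sent)
```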
So what's made digital explode is the fact that it's really well matched to computation, memory, storage-- all the stuff that's advancing rapidly. In the world of analog, you're talking about analog electronics, which is also advancing greatly but doesn't have the same flexibility. Here, you can do all sorts of things with digital and with the computation that's available, and that's growing in power all the time to do more and more fancy things. So digital communication is really most of what you see around you. So when you talk on the phone, or do computer-to-computer communication, or browse the web, and so on, it's really digital communication that you're talking about-- with one little caveat, I guess. When you get down to the details of how you get a 0 and 1 across on a channel, you're back in the analog physical world. And you tend to be doing things that are much closer to what you worry about on analog channels. And we'll see that in this course. So for most of what we talk about, we'll be working at the level of the digital abstraction here. But when we come to talking about transmission on a link, and modulation, and demodulation, and the like, we're back in the analog world. And you'll actually get a good feel for some of this in the course of our digital communication. So to give you a sense of how the course is structured, we'll spend some time, first of all, talking about information-- information in a message and how you measure it, how you code it up. So this is sort of the bits piece of the course. And then we'll talk about how to get these messages across single channels. So this is a single link: source at one end, receiver at the other end. And we'll focus on how you get the data across. And that brings us to the analog world and to the world of signals, so we'll spend time on that. And that's sort of the second third of the course. And then the last third of the course, which Harry will actually be lecturing, focuses on what happens when you have interconnected networks. So you've got multiple links, so you might want to communicate from some source here to a receiver that's way across on the network, going through multiple links and multiple nodes. And there are all sorts of issues there. And there, we're thinking in terms of a packet kind of abstraction. It's packets that we ship around the network, with associated logic, and mechanisms for failure detection in the network, and coping with all of that. So there are these three sort of chunks to the course, and you'll see that in the notes as well. So we'll start off with the bits piece. That's more or less Quiz one. Then we'll get to the signals piece. That's more or less Quiz two. And we'll get to the packets piece, and that's more or less Quiz three. And these will be relatively modular. So you get a chance to make a fresh start on each of them. But you'll find us reaching back to build on ideas developed earlier in the course. Now, as I think about where digital communication originated, it actually turns out to be largely due to the person who painted this painting. This is called "The Gallery of the Louvre," painted in around 1830 by an American painter. He was actually born in Charlestown, close to here, studied at Yale, and made enough of a name for himself that he had commissions to paint portraits and the like. He was actually called to Washington, DC to paint a portrait of the Marquis de Lafayette.
While he was there, he got a message from his father saying that his wife in New Haven was convalescing. He abandoned the painting he was doing and left for New Haven as soon as he could, but by the time he got there, he found that she had actually died and been buried. And that sort of fortified him for what he decided was his life's work, which was to find better ways to communicate, faster ways to communicate. He didn't want to have to depend on horse riders to carry messages or ships across the ocean. He's actually painted himself into the middle of that painting. These were some friends that he made in Paris. It's actually the author James Fenimore Cooper, of The Last of the Mohicans fame. He was actually hoping to sell his painting to Fenimore Cooper, but things didn't work out that way. In any case, this is a huge painting. It's about six feet by nine feet. He wrapped it up to bring it back to the States. It wasn't quite finished. This was about 1831 or so. And on the boat, he met this person who had a little electromagnet that he was playing with. And they had various discussions, and he got the idea for a telegraph. Anyone with a guess as to the name? Morse. Now we think of Morse as the Morse code guy, but it turns out that he actually did hugely more than the code. So that's Samuel Morse-- looks pretty imposing. He didn't just come up with the code, he actually invented the whole system. Now, he didn't work in a vacuum. There were people doing related things in different places, but his was the first practical, essentially single-wire system. If you look at his patent documents, he's got all the little pieces that it takes to make the system. A key piece was actually the relay. So working with a colleague back in New York, he figured out that with a little battery, you could close an electromagnet-- you could power an electromagnet at some distance. But you couldn't have that wire be too long. So what he arranged was a relay, where that electromagnet then pulls another piece of metal, which then closes another switch on a separate circuit, so you can then start to propagate the signal over very large distances. And that was really a key part of his invention. Morse code-- there's actually some discussion as to whether he invented it or it was actually his assistant Vail, but it's called "Morse code" anyway. The other staggering thing about this story is how soon after the invention-- I mean, his patent was, let's see, 1840, very early in the days of the Patent Office, as you can see from the numbers assigned to the patent-- about 15 years later, there were people raising money to lay cable across the Atlantic to carry telegrams. So can you imagine, partly, the bravery of these people? I mean, it's hard enough to think of laying cable across Boston Harbor. And they were prepared to design this cable, load it on a ship, and lay it across the entire Atlantic. They made an attempt in 1857, and another in 1858 that actually turned out to work for about three weeks. That was long enough for Queen Victoria to congratulate President Buchanan, except it took almost all of a day to get the 98 words across from one side to the other. And the reason is, when you put a little pulse on one end of a very long cable, it distorts like mad by the time it gets to the other end.
So you can barely detect it-- if you put a sharp change in at one end and you've got a long cable, and if it's a poorly designed cable, it takes a long time to detect the rise at the other end, if you detect it at all. It turns out the person at the American end was the person who would later become Lord Kelvin. He was called plain old William Thomson at that point. He had designed a very sensitive way to measure these changes in voltage at the ends of cables. But the person at the British end was actually a surgeon, a self-taught electrical engineer, who was convinced that the problem was there was not enough voltage on the cable. So he kept cranking up the voltage. When he got to 2,000 volts, the cable failed. And so there had been celebrations in the street, and there had been fireworks, and all of that. And then people got very angry, and thought this was a scam, and a way to raise money, and all of that. Despite all of the negative press, a year later here was this man again, with enough funding from governments and private sources to make another attempt at the cable. Anyway, it took a while. It took a good nine years to finally lay a good cable. They'd gone out about 1,200 miles with a cable in 1865 before it broke. They had to start again in 1866. They managed to lay an entire cable, and then they came back and found the broken end of the 1865 cable, and picked it up, and continued it. So in 1866, they managed to get two cables working. And now it was a lot faster-- eight words a minute. It was digital communication. It's got all the ingredients of what we see in digital communication today. And then a little while later, there was a transcontinental line, which marked the end of, essentially, the Pony Express trying to carry mail across the continent. Now much more was going to happen on telegraph lines. There was a transpacific line in 1902, so that meant at that point, you could encircle the globe with telegraph. So it was really a transformative technology. And it was a digital technology, because all you were trying to figure out at the other end was whether something was a dot or a dash. It was just basically two states that you were trying to distinguish. Those are his Patent Office documents. They're actually interesting to read, but let's just see here-- "Be it known that I, the undersigned, Samuel F. B. Morse, have invented a new and useful machine and system of signs for transmitting intelligence between distant points by the means of a new application and effect of electromagnetism." And then he goes on to describe the equipment and the code itself. This is just a map to show you the kind of distance that they had to lay that first cable over. Morse code you've all seen. It's gone through some evolution. Actually, Morse originally thought of just a code for numbers, and then he imagined a dictionary at the two ends, and you would just send the number for the word in the dictionary, and someone would look it up at the other end. But then with Vail, they developed this scheme. You notice the most frequently used letter has the shortest symbol here. It's just a dot. And then if you go to an A, it's a dot, dash, and so on. The T, I think, is a dash. Yeah. So the choice of symbols sort of matches the expected symbol frequencies in English text. You want the more frequently used letters to have the shorter symbols, because there are going to be many of them, and you don't want to be sending long code words for them. But this was Morse code. Here's another way to represent it.
So going to the left is a dot, going to the right is a dash. So a single dot brings you to an E. A dot, dot brings you to an I; dot, dot, dot to an S. Dash, dot brings you to an N, and so on. So you can display this code on a graph. One thing you see right away from this display, and it was clear from the code itself, is you're not going to be able to get away with just two symbols. Because if you're trying to get to an A on this path, you hit an E on the way. And you need something to tell you that you aren't done yet. So there is a third symbol, and that's the space. So Morse code has a dot, a dash, and a space. It's really a three-symbol alphabet, and the space is critical. If you want to have a code where you can deduce instantly that you've hit the letter that the sender intends, you need all the letters to be at the ends here, at the leaves of the tree. If you have all the code words at the leaves of the tree, right at the ends, then you're not going to encounter any other code words along the way. So you just keep going down the tree, dot to the left, dash to the right, till you hit the code word at the end, and then you're done. But in this kind of arrangement, you need a third symbol, actually, to demarcate the different words. So this made Morse a very celebrated man. He got patents in various places. He got medals from all over the world, including from the Sultan of Turkey. I believe this one is a Diamond Medal from the Sultan of Turkey. He was celebrated on postage stamps, and all deservedly so. He really made a huge difference to communication, and set digital communication on the current path. So I've got to skip forward very quickly now to bring me to the part of the story I want to continue with. So we're going to hopscotch over a whole bunch of, again, transformative inventions. There was a telephone in '76, again with Boston connections. It was really thought of as speech telegraphy at that time, so it wasn't the telephone yet. Bell's patent is titled "Improvement in Telegraphy." There was wireless telegraphy, which was Marconi sending signals from Europe to, actually, Cape Cod, but it was not voice. It was Morse code, basically-- dots and dashes. And then here came analog communication. So this was exactly what I talked about, AM and FM, and then later video images, and so on. So there's a lot of this going on during this period. A big player in the theory was a company-- this was actually Bell Labs. Bell Labs was really full of people who made a huge difference to the development of all this. In fact, I had some names listed on a previous slide. Let me see-- I passed over them without mention. But in the development of the telegraph, I've mentioned Lord Kelvin already. He did a lot to model transmission lines and to show how to design better ones. Design of magnets and the invention of the relay-- that was actually Joseph Henry, a professor at Columbia, after whom the unit of inductance is named. That's the Henry. And various other people. So this technology really was a very fertile kind of ground for people to develop things in. And if you take other courses in the department, these are names you will encounter all over the place-- Nyquist, Bode, and so on. But the one I want to focus on is Claude Shannon, who's sort of the patron saint of this subject, I would say. Shannon did his Master's degree here at MIT. It's been called one of the most influential Master's theses ever, because he developed Boolean algebra as a way to design logic circuits.
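Back to the Morse tree for a moment: the prefix problem described earlier is easy to demonstrate. A sketch with just a small subset of the code, where the space as a third symbol is what demarcates letters:

```python
# E's code word is a prefix of A's, so dots and dashes alone are ambiguous.
MORSE = {"E": ".", "T": "-", "I": "..", "A": ".-", "N": "-.", "S": "..."}
DECODE = {v: k for k, v in MORSE.items()}

def decode(msg):
    # spaces demarcate letters, resolving the prefix ambiguity
    return "".join(DECODE[word] for word in msg.split(" "))

print(decode(". -"))   # "ET": the space says E is complete before T starts
print(decode(".-"))    # "A": the same marks with no space are a single letter
```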
The logic circuits he was talking about were relay circuits of the time, but this was very quickly picked up, and quoted, and applied. And then he moved on to something else for his PhD-- an algebra for theoretical genetics. And I don't know the extent to which that's been influential in genetics. But then he joined Bell Labs just about at the start of the war years. There was a lot of work on cryptography during that time, initially classified, but then a declassified version was published. During that time, he also had interaction with Alan Turing, who was working on cryptography in England, but had been sent over to Bell Labs to share ideas. And then in 1948, a groundbreaking paper that really is the basis for information theory today. So it was this that developed a mathematical basis for digital communications. And the impact has been just incredible since then. So that's what we want to talk about a little bit. Now I have here a checklist that I don't want you to look at now, but the theory that Shannon developed is actually a mathematical theory. It's a probabilistic theory. And if you're going to be doing calculations with probability, you need to know some basics. What I put down there is a checklist. We don't have a probability prerequisite for this course. We assume you've seen some in high school, some in 6.01, some elsewhere. By the way, I should say that all these slides are going to be on the web, or maybe they're on the web already, so you don't have to scramble to copy all this. We'll put the lecture slides up on the web. We may not have them exactly before lecture, because I'm often working right to the bell, but we'll have them after the lecture. So take down whatever notes you want, but you don't have to scramble to get every word here. By the way, the other thing I should say is that your contract in this course is not with whatever material is on the web or what you find from past terms and so on. It's really with us in lecture and in recitation. So we urge you to come to lecture even though all of this will be posted, because there's other learning that happens, and you have a chance to bring up questions, and hear other people's questions, and so on. I'm not going to read through that. But I want to have a little picture that you can carry away in your mind for what we think of when we think of a probabilistic model. So we've got a universe of possibilities. We've got outcomes. These are what are called elementary outcomes. Think of it as rolling a die, for instance, and I get one of six numbers. So each of those is an outcome. So here's the elementary outcome. I could number them s1 to s n, and they don't have to be finite in number. It could be an infinite number, a continuum-- we'll see examples of that later. But if you're thinking of a source emitting symbols to construct a message, then at every instant of time, the source is picking one of these symbols with some probability. So that's the kind of model we're thinking of. So here are the elementary outcomes-- s1, s13, and so on. You've got events, and events are just collections of outcomes. So events are sets. So this is the event or set a, just a collection of elementary outcomes. I say that the event has occurred if the outcome of the experiment is one of the dots in here. If the dot is out here, if this is what you got when you ran the experiment, the event didn't occur. So an event is just a set, a subset of these outcomes. We say "the event has occurred" if the outcome that actually occurs is sitting inside that set. And then we can talk about intersections of events.
We say that if this event is a and this is b, the event a and b corresponds to outcomes that live in both sets. So if I roll a die, and I get a number that is even and a prime, that tells me what that number is. So if this is the event of getting an even number on rolling a die, and this is the event of getting a prime number on rolling a die, what number do I get when both events have occurred? It has to be 2-- the only even prime. So I can identify different events, and then I assign probabilities to them. So I can talk about the probability of an event. And then you can combine probabilities in useful ways. So let's see, there's a lot on here, because I wanted, in principle, to fit it all on one slide that you could carry around with you. Probabilities live between 0 and 1. The probability of the universal set is 1, meaning when you do the experiment, something happens. So it's guaranteed that something happens, and therefore u always happens. So the probability of u is 1. And then the probability of a or b happening is the probability of a plus the probability of b, if a and b have no intersection, if they're mutually exclusive. So we say that two events are mutually exclusive-- actually, let me draw a c over here-- a and c in this picture are mutually exclusive, because there's no outcome that's common to the two events. So if one event occurs, you know the other one didn't occur. And so if I now ask what's the probability of a or c occurring, it's the probability of a plus the probability of c. You'll be doing this all the time in this course. You'll be adding probabilities, but you've got to think-- am I looking at mutually exclusive events? If you've got mutually exclusive events, then the probability of one or the other happening is the sum of the individual probabilities. If they're not mutually exclusive, then there's a little correction you have to make. The probability of a or b happening is the probability of a plus the probability of b minus the probability of both happening. All of this is quite intuitive. Another notion that's important is independence. So we've seen that mutual exclusivity allows you to add probabilities. Independence allows you to multiply probabilities. So we say that a set of events-- a, b, c, d, e, for instance-- are mutually independent if the probability of a and b and c and d happening is the product of the individual ones. And similarly for any subcollection-- so you're going to call a collection of events independent if the joint probability of their happening, in any combination, factors into the product of the individual probabilities. And again, this is a computation you'll be doing all the time in different settings, but you've got to think to yourself-- am I applying it to things that are independent? Because if not, then it's not clear you can do this factorization. We'll come later to talk about conditional probabilities. But the probability of a given that b has occurred-- we can actually sort of see it here-- is the probability of a and b over the probability of b. Given that b has occurred, you know that you're somewhere in here. And what's the probability that a has occurred, given that you're somewhere in here? It's the probability associated with that intersection, as a fraction of the probability of b. One last thing-- expectation.
We talk about the expected value of a random variable as being basically the average value it takes over a typical experiment, let's say. And the way you compute that is by computing the average weighted by the associated probabilities. And we'll see an example of that. I didn't feel right just jumping into Shannon's definition of information without saying a little bit about how you set up a probabilistic model. But with all that said, here's what Shannon had as the core of his story, building on earlier work by other people. If you're thinking of a source that's putting out symbols, and the symbols can be s1 up to s n, the information in being told that the symbol s i was emitted is defined as log to the base 2 of 1 over the probability p i. So what you're trying to come up with is actually a measure of surprise. Maybe "surprise" is a better word than "information." "Information" is a very loaded word. But what you're trying to measure here is how probable is the thing that I am just seeing. If it's a highly improbable event, I gain a lot of information by being told that it's occurred. If it's a high probability event, I don't get much information by being told that it's occurred. So you want something that depends reciprocally on probability. The log is useful because it allows the information given to you by two independent events to be the sum of the information in each of them. And the calculation is just this-- it says that if a and b are independent events, then the information I get on being told that both of them occur is log to the base 2 of 1 over p a p b. But that then just becomes the sum of the individual ones. So the advantage of having a log in that definition is that it makes information additive over independent events-- I should perhaps have written one more line here. Here's the information in being told that both events have occurred. Because they're independent, that joint probability factors into the product of the individual ones, which then turns into the sum of these two logarithms. So here's the information in being told a and b. Here's the information in being told that just a occurred, and here's the information in being told that just b occurred. So the log allows things to be additive over independent events. Now, the base 2 was a matter of choice. Hartley chose base 10, Shannon chose base 2. And he called the resulting unit the "bit." So when you measure information according to this formula, with the log taken to the base 2, you call the resulting number the number of "bits" of information in that revelation-- in being told that that's the output. Now for this lecture, and probably only this lecture, I'm going to try and maintain a distinction between the bit as a unit of information and our everyday use of the word "bit" to mean a binary digit. It's unfortunate that they both have the same name, because they actually refer to slightly different things. A binary digit is just a 0 or 1 sitting in a register in your electronics, whereas this is a unit of measurement. And the two are not necessarily the same thing. So I'll try and catch myself and say "binary digit" when I mean something that can be a 0 or 1, and "bit" when I'm talking about a measure of information. But here, for instance, is a case where the two coincide. If I'm tossing a fair coin, so there's probability 1/2 that it comes up heads, 1/2 that it comes up tails, then log to the base 2 of 1 over 1/2 gives me 1. So there's one bit of information in being told what the outcome is on the toss of a fair coin.
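Here is a minimal sketch of that definition in Python-- mine, not from the lecture, and the example probabilities are assumptions-- just to make the additivity concrete:

```python
import math

def info_bits(p):
    """Shannon's measure: the information, in bits, in being told that
    an outcome of probability p occurred."""
    return math.log2(1 / p)

# Additivity over independent events: being told that two independent
# events both occurred conveys the sum of their individual informations.
p_a, p_b = 1/4, 1/8                       # assumed example probabilities
assert math.isclose(info_bits(p_a * p_b),
                    info_bits(p_a) + info_bits(p_b))   # 5 = 2 + 3 bits

print(info_bits(1/2))                     # 1.0 -- a fair coin toss is one bit
```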
And that sort of aligns with our notion of a binary digit as being something that can be either 0 or 1. We don't usually associate probabilities when we use "binary digit," but with "bit," we do. So Shannon has a measure of information. And there are examples we can talk about there in the notes, so I won't go through them. And I think I've said this already, so I'll pass through that and get to his second important notion, which is the notion of entropy. The entropy is the expected information from a source. So what we have is expected information from a source, or from the output of an experiment. If you're thinking of a source emitting symbols, this source can emit symbols s1 all the way up to s n, with probabilities p1 up to p n, let's say. And the sum of those probabilities is going to be 1. If I tell you that s1 was emitted by the source, I've given you information log 2 of 1 over p1. But if I ask you, before you see anything coming out of the source, "What information do you expect to get when I run the experiment, when I produce a symbol?" then you've got to actually average this quantity over all possible symbols that you might get, weighted by the probability with which you're going to see each symbol. So this is exactly what I defined earlier as an expected value. So the entropy of the source is the expected value of the information you get when you're told the output of the source. And so if the emission is s1, then the information is this, and that happens with probability p1. If the emission is s2, that carries this information, and that happens with this probability, and so on. So this is the entropy. Shannon is borrowing here from ideas developed in thermodynamics and statistical physics. People like Gibbs at Yale in 1900 already had notions of this type. His innovation is in actually applying this to communications, and he has several constructs beyond this. We'll come to some of them later. But up to this point, he's making a connection with what they do in statistical physics, except they're usually not thinking in terms of information. They're thinking in terms of uncertainty. And they're not thinking of sources emitting symbols. So this is the entropy. So for instance, if you've got a case where you have capital N symbols and they're all equally likely, then the probability of any one of them is 1 over N. So what is the entropy? Well, it's going to be the summation from i equals 1 to N of 1 over N times log 2 of 1 over 1 over N. So what does that end up being? The log 2 of N comes out, and then I've got the summation of 1 over N, N times, which is 1. So the result is log 2 of N. So if I've got N equally likely symbols, then the entropy, the expected information from being told what the outcome is, is log 2 of N. It turns out that this is the best possible case, in the sense of maximum uncertainty. If you're looking for a source that's maximally uncertain, that's going to surprise you the most when it emits a symbol, it's a source in which all the probabilities of symbols are equal. Symbols equally likely-- that's when you're going to be surprised the most. Now you can see this in a particular example here. Let's look at the case of capital N equals 2. So we're just talking about a coin toss. I toss a coin. I get heads with probability p, some p. I get tails with probability 1 minus p.
Instead of saying "heads" or "tails," I could make it look a little more numerical. I could say C equals 1 for a head, C equals 0 for a tail. That's sort of coding the output of the coin toss. And now I can evaluate the entropy for any value of p you give me. So if you've got a fair coin with p of 0.5, I evaluate the entropy. And I find, indeed, that it's one bit. So the average information conveyed to me by telling me the output of the toss of a fair coin is one bit of information. But if the coin is heavily biased, then the average information, or the expected information, can be a lot less. This turns out to have a very tight connection to this idea of coding. So let's actually take an extreme example. I've taken the case now where you've got a terribly biased coin. It's not p equals 0.5, it's p equals 1 over 1,024. I picked 1,024 because log to the base 2 of that is easy. So it's a very small probability of getting a head. In fact, if you were to run 1,024 trials-- the law of large numbers, which I haven't put on that one sheet, but you probably believe this-- if I had a coin that had a 1 in 1,024 probability of coming up heads, and I threw the coin 1,024 times, I'm more likely to get heads exactly once than anything else. And actually, over a very long stretch, that's just about exactly the fraction of heads that you get. That's the law of large numbers. In that case, what is the entropy? So I've got p times log 2 of 1 over p, plus 1 minus p times log 2 of 1 over 1 minus p. I'm just evaluating that parabolic-looking function. It's not quite a parabola. And I see that I've got just 0.0112 bits of information per trial. So unlike the case of a fair coin-- remember, in the case of a fair coin, p equals 0.5, I have an entropy of one bit. That's the average information revealed by a single toss. Now I'm down to much less. I'm down to 0.0112 bits per trial. And the reason is that this coin is almost certainly going to come up tails, because the probability of heads is so small. So for almost every trial, you'll tell me, "Oh, it came up tails." And there's no surprise in that. There's no information. There's just the occasional heads in that pile. And when you tell me that it came up heads, I'll be surprised. I get a lot of information. But not when I average it out over all experiments. It's actually low average information there. So if you wanted to tell me the results of a series of coin tosses with this coin-- you toss it 1,024 times, and you want to tell me what the result of that set of coin tosses is-- it would seem to be very inefficient to give me 1,024 0's and 1's, saying it was 0, 0, 0, 0, all the way along here. Let me say it this way-- here's one way to code it that would tell me what you got in 1,024 trials. You could say, well, it was tails, tails, tails, tails, tails, tails, tails, oops, heads, tails, tails, tails. So you could give me those 1,024 binary digits, with a 1 to tell me exactly where you got the heads. It seems a very inefficient use of binary digits. A binary digit can actually reveal a bit of information, and here you are using 1,024 binary digits to reveal much less information. In fact, let's see-- 0.0112 bits per toss times 1,024 is really all the information there is. And you shouldn't be using 1,024 binary digits to convey that information. If you're sending it over a transmission line, it's a very inefficient use. So can you think of a way to communicate the outcome of this result with something that's much more efficient? Yeah.
AUDIENCE: [INAUDIBLE] GEORGE VERGHESE: Yeah, since you're expecting that in 1,024 tosses there'll typically be just a single one, just encode the position where that one occurs. How many binary digits does it take to do that? 10, right? 1,024-- you've got to tell me, is it in position 1, 2, 3, 4? You've just got to be able to count up to 1,024. So if you send me 10 binary digits to tell me where that 1 is, you'll have revealed what the outcome of the sequence of experiments is. So 10 binary digits over 1,024 trials-- here's the average usage of binary digits, binary digits per trial, if I use your scheme. And that's much better. That's much closer to the actual bits per outcome. And somebody had a question on that? AUDIENCE: Yeah. Is it actually less than [INAUDIBLE]? GEORGE VERGHESE: It better not be. And part of it might be that I've rounded this here. Is it a small rounding difference? Did you actually compute something there? AUDIENCE: 0.0097. GEORGE VERGHESE: Sorry. AUDIENCE: 0.0097 [INAUDIBLE]. GEORGE VERGHESE: Oh, is it not 0.0112? OK, good, I'm glad somebody computed that. How did I get that? Sorry. AUDIENCE: This is also only because a possibility of 1 [INAUDIBLE]. GEORGE VERGHESE: Oh, I see. What you're saying is that this was-- right, you're saying this is 0.99 something. OK, I'm just saying that we're in the ballpark if we try to code just for the single 1. But there will be cases in my experiments where there might be two of these, and then I've got to use a more elaborate coding. I'll use a longer code word. Those are less likely events, so I've got to factor in all those probabilities. Yeah, good. I'm glad you caught that. I don't want to get too sunk in this, because I just want to convey the idea. The idea is that the Shannon entropy actually sets a lower limit on the average length of a code word. And so when you're trying to do design of codes, you're actually trying to find codes that will get you close to the Shannon entropy limit. So what I want to just briefly mention, and you'll follow up in recitation, is something called Huffman coding, which you might apply to a situation like this. So you're coding, let's say, grades to send to the registrar. A's occur with probability 1/3, B's with 1/2, and so on. You want a coding whose expected length will come close to the Shannon entropy. So the question is, what's the Shannon entropy? I hope I haven't jumped over too many slides. I have jumped over too many slides. Let's go back and find the Shannon entropy here. For that particular case, if we compute the entropy, we get 1.626 bits. If you are communicating four possible grades for 1,000 students to the registrar, one way to do it would be to use two binary digits per grade. You can cover all four grades, and send 2,000 binary digits to the registrar. The entropy says that you've got 1.626 bits per grade on average. So for 1,000 grades, you should be able to get something closer to 1,626 binary digits. So can you communicate a set of 1,000 grades occurring with these probabilities with a code whose expected length is closer to the 1,626? That's the task of designing a variable-length code. Now, it turns out that this task was set by Professor Fano-- who was a professor here, now retired, but still comes to our weekly lunches-- as a term paper in the course he taught on information theory, actually just three years after Shannon's paper appeared. He posed the problem of designing a variable-length code whose expected length came as close as possible to the Shannon limit.
Fano offered the option of doing a final exam if you didn't do a term paper. Huffman struggled with the problem almost to the end, and was about to give up on it, when he came up with an idea that turns out to be the optimal variable-length coding scheme for this scenario. So what he does is, just to very quickly finish with that, he takes the two lowest probability events, and groups them together to make a single event that is C or D, with probability 1/6. Then in the resulting reduced set, he looks at the two lowest probability events, and combines them to make a meta-event with a probability that's the sum of the individual ones, and so on. So he chases this procedure up-- take the two lowest probability events, combine them into a single one with a probability that's the sum of the individual ones, look in the resulting reduced set for the two lowest probability events, and so on. Build up a tree. The resulting tree then reveals the Huffman code. The Huffman code is guaranteed to have an expected length that satisfies this constraint, but it actually has an upper bound, too. It's within entropy plus 1 on the upper side. We'll talk next time about how to improve on this, but in recitation tomorrow, you'll get practice, at a little more leisurely pace than I set here, with constructing Huffman codes.
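As a concrete companion to that description, here is a minimal Python sketch of the construction-- my own, not from the lecture. The C and D probabilities of 1/12 each are an assumption for the "and so on" above, chosen to be consistent with the quoted 1.626-bit entropy:

```python
import heapq, itertools, math

def huffman(probs):
    """Build a Huffman code: repeatedly merge the two least probable nodes."""
    counter = itertools.count()   # tie-breaker so heap tuples compare cleanly
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, code1 = heapq.heappop(heap)   # two lowest-probability nodes
        p2, _, code2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in code1.items()}
        merged.update({s: "1" + c for s, c in code2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

grades = {"A": 1/3, "B": 1/2, "C": 1/12, "D": 1/12}   # assumed distribution
code = huffman(grades)
entropy = sum(p * math.log2(1/p) for p in grades.values())
expected_len = sum(grades[s] * len(code[s]) for s in grades)
print(code)                   # e.g. {'B': '0', 'A': '11', 'C': '100', 'D': '101'}
print(round(entropy, 3))      # 1.626 bits per grade
print(round(expected_len, 3)) # 1.667 binary digits per grade
```

The exact 0/1 assignment depends on tie-breaking, but the code word lengths don't: the expected length comes out around 1.667 binary digits per grade-- above the 1.626-bit entropy, as it must be, but well under the 2 binary digits of the fixed-length scheme, and within the entropy-plus-1 bound.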
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
6_Convolutional_codes.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I wanted to tell you a little bit about the use of digital communication schemes in the space program. And part of that is it wasn't just the use. A lot of coding theory was developed for use in this program. So in the early days, there was no error control coding. So they had very slow transmission rates and tried to compensate for not having error control coding by taking a long time to send a bit over. But in later years, with the Mariner and Viking probes, they started to use error control codes. And linear block codes are what we're talking about. These would be typical parameters for such a code. So we know how to read this. This is 32 bits per block. Six data bits. And a minimum Hamming distance of 16. A particular kind of code called a biorthogonal code, or Hadamard code, which had specific characteristics and specific symmetries that actually helped with the decoding. So for instance, on Mariner 9, in 1971-- this went into Mars orbit. And the code was used to encode the picture transmissions. So each data word was six bits, to encode 64 gray levels in a picture. It turned out that, because of transmission issues, the safe number of bits for a block was 30 bits. And after that, you had to do a little bit of realigning or tweaking. So you could send 30 bits at a time safely. And so a choice of n in that vicinity was a natural one. One thing you could have thought to do would be to take the six bits and repeat them five times in the 30 bit window. And that would be a repetition code. It turned out that, with this particular Hadamard code, you could actually get a comparable data rate-- let's see, what would the data rate be? It would be k over n, right? 6 over 32-- but with much better error correction properties. So let's see. How many errors could you correct in this code per block? Somebody? Seven. Yeah. Because you've got a minimum Hamming distance of 16. So you want d minus 1 over 2, the floor of that. So you could correct up to seven errors per block. And this code was actually used on space probes right into the 80s. And as I mentioned, this particular code has various symmetries that allow something called the Fast Fourier Transform to be used in the decoding. And so, that's really what drove this. As you read about these probes, it's actually staggering how much they did with so little. Let's see. This thing went almost half a billion miles. It had an onboard computer with a memory of 512 words. So you can imagine the kind of engineering that went into organizing all of this. The transmitters-- and this is typical of these space probes-- you don't have a lot of energy generated from your solar panels necessarily. So 20 watt transmitters. So these have to transmit over this kind of distance the data that you want to send, in the presence of noise and various other errors. OK? So quite an engineering feat. Now the kinds of pictures that you would get? Well, these are pretty amazing actually, considering what the probes had to do. So over its lifetime, it sent over 7,000 images. Mariner 9 is still orbiting Mars, from what I understand. It's not sending back anything now. It stopped sending back transmissions a year or two after this.
But it's still in orbit, until it slows down enough to crash in. OK. So as I said, you're typically talking about low power. 20 watts. WMBR-- what's a typical radio station power on a college campus? They advertise something on the order of 700 watts for their transmitter. So we're talking about doing a lot with a little here. A lot of the art is in the antenna. So you have an antenna that directs this power very sharply towards the intended receiver. But the more sharply you try to direct that, the bigger of a control problem you have. Because you've got to point that antenna all that more carefully. So all of these are coupled issues. And then at the receiver end, you've got very high quality amplifiers and signal processing. But the data coding and error correction schemes are a key part of that. And it turns out that, as you got more ambitious with these transmissions, you had to go to more complicated codes. And these are the codes we're going to talk about today, what are called convolutional codes. We'll talk about the coding today. And then we'll talk about the decoding, with what's called the Viterbi algorithm, next lecture. So this has been used extensively from the late 1970s onwards. More recently, you have codes that are actually combinations of convolutional codes, what are called Turbo codes. And another family of codes, low density parity check codes, which were developed in Bob Gallager's PhD thesis here. Bob Gallager is on our faculty. But convolutional codes were really a workhorse of the whole system. OK. So an example is now Cassini, which is in orbit around Saturn. It's actively sending pictures. This, if I read the website correctly, is a picture from August 29. And I saw other pictures posted from June and July. So this is a picture of one of Saturn's moons. And you can see the rings and the shadows of the rings and so on. This is actually recreated in natural color from multiple images. This, I guess, is part picture and part artist's rendition. But that shows you what Cassini looks like. There's only one of them out there. I don't think there's something else out there to photograph Cassini. So the kind of code that's used is a convolutional code. We'll learn what these parameters mean, how they enter into the definition of the code. And here is a typical rate. You're talking about something on the order of 83,000 bits per second. Sorry, not the code rate. This is the data rate. OK? So the messages are coming in at this rate, and you're sending six times this amount per second. OK? So convolutional codes. And again, I keep coming back to MIT names. Peter Elias was on our faculty here. He was a department head for a while. And in a short paper in 1955, he invented the idea of convolutional codes. So the idea here is not to divide up your data into blocks, but to actually work on the streaming data. And as the data goes past, you generate parity bits at a regular rate. And what you transmit in most typical schemes are just the parity bits. You don't send the message bits. So this would be a non-systematic code, if you like-- there's no part of the transmission where you're directly observing the message bits. Now, you will actually generate and send multiple parity bits. So you'll have a message sequence, x zero, x one, and so on. And from this, you derive parity bits. And you do that using the standard sorts of equations we've seen with block codes.
Each parity bit here-- for instance, parity bit zero at time n-- will be some linear combination of message bits. But it's the message bits as they're streaming by. So you might have, for instance, this as your choice for parity bit number zero. OK. And then parity bit number one could be some other combination here-- x n plus x n minus 2, for instance. OK. So it's a linear combination of some set of message bits, just the way we've been generating parity bits all along. The plus here, of course-- we're talking about binary messages, so this is addition in GF(2). So it's exclusive OR, modulo 2 addition. And you can imagine a whole bunch of such parity bits. So in general, you would have r such parity bits, computed off some set of message bits and transmitted instead of the message bits. So for each message bit coming in, you might actually be sending out r parity bits. So what you do is just send these out in sequence. You'd send out the P zero value and the P1 value at time n, then recompute at time n plus 1, and keep going. All right? Well, actually I have them here. I didn't see that. So all this happens on a sliding window. This happens for a particular choice of n. And then it happens for the next choice of n, and the next choice of n. So you're doing this on the fly with a streaming sequence. So let me just put up an equation that explains why this is called a convolutional code. It turns out that with expressions of this type, where you take a data stream coming in and generate new data streams of this form, the operation that's being carried out is something referred to as convolution. So in general, what is P zero of n? It's some weighted combination of x at the current time, x one time step back, x two time steps back-- in general, k different values involved. So what I have is P zero of n being a summation from j equals zero up to k minus 1 of G zero of j times x of n minus j. All right? So the G's are just some set of numbers-- zero or one, just as these bits are. But this is the general form. This particular kind of combination is referred to as a convolution operation on the input stream. And we'll see much more of this when we come later to modeling channels, the physical channels. We'll talk about convolution type models. So here, it's not so important that you must have this expression. We'll have plenty of opportunity to work with expressions like this. This is just for you to know that an expression of this type, wherever you see a summation with indices in this form, is referred to as a convolution. OK? So it's a convolution of the message stream with some set of weights. AUDIENCE: Professor? PROFESSOR: Yeah? AUDIENCE: What does G stand for? PROFESSOR: The G is just a set of weights here. So in this particular case, for parity expression zero, G zero of zero would be 1, G zero of 1 would be 1, and G zero of 2 would be 1. AUDIENCE: OK. PROFESSOR: It's just a set of weights. So yeah. This expression is a bit of overkill for the kind of use we're making of it. But it's just to explain the origin of the name. It turns out later, when we use it for channel modeling, the x's will not just be zeros or ones. They could take arbitrary real values. And the G's could take arbitrary real values. So we'll be working with much more elaborate versions of this. OK? The number k is referred to as the constraint length. And it's the maximum number of message bits involved when you look over all your parity expressions.
So in this particular instance, k would be equal to 3. Right? It's the maximum window of data that you're using in a non-trivial way to generate the bits. So here, you are using a window of up to 3 to generate this. Well, in this case also, you're using a window of three message bits. It happens that you're ignoring the one in the center. But the constraint length is the length of message that you're actually looking at. OK. So in some sense, if you want to think of it this way, the number of parity expressions that you use-- well, that's straightforward. That's just telling you how much redundancy you're willing to put in. Whereas the constraint length is telling you how deeply you're folding that redundancy into the message. So the bigger the constraint length, the more message bits are involved in generating a parity bit. And so the more you're scrambling up the message and spreading it over a large section of what's transmitted. And so you might expect that you get better error correction properties with larger constraint lengths. OK? OK. This is not saying anything new. So how do we come to actually transmitting? Well, we generate the parity bits. And then, as I said, you send all the parity bits associated with your computation at time zero, then all the parity bits associated with the computation at time 1, time 2, and so on. So in the case of the code used on the Cassini probe, that's a rate 1/6 code. It's actually computing six parity expressions. So it's transmitting six parity bits for each message bit that comes in. What happens at the next time instant is that you shift everything up by one, and do the whole thing again. OK? Now, you can actually-- and I'll have this up on the slides-- you can actually crank through the equations. But it's not the most illuminating way to think of things. It's much easier to think of it visually, through a block diagram of this type, using the idea of what's called a shift register. So what is a shift register? You may have encountered it in other places. We think of a shift register as basically a box that can remember something. OK? That's the register part of it. A register is something that remembers a number. You've got some input stream that comes in and some output stream emerging. At any time, this stores a particular number, which is available at the output. So whatever is stored in the register is available at the output. The shift part of the description is that whatever is at the input will get shifted in at the next clock cycle, or the next time instant. OK? So the input gets shifted in at the next clock cycle. Whatever is in here is remembered for that one clock cycle and is available at the output. Right? So if I have a sequence x n being fed in, for n equals zero, one, two, three, and so on-- if I'm seeing x n here at the input at time n, what must have gone in at the previous time? It's x n minus 1. So what's sitting here is x n minus 1. Right? And x n minus 1 is available to me at the output. The next clock cycle, the next input comes along. The x n goes in here. And the whole thing shifts. All right? Now what you have up there is a cascade of shift registers. So keep in mind the operation that I described-- if this is x n, if I'm looking at this at time n, with x n sitting here, what must be in this shift register is the input of the previous time. So that's x n minus 1. These are shown adjacent.
What we really mean is that one shift register is feeding into the next one. They're just shown as adjacent. But what must be sitting here then is x n minus 2. And if I read off something from here, what I'm looking at is x n minus 1-- namely, what's sitting in the register. What I'm looking at here is x n minus 2. All right. So do you see how this is working now? This is actually the same example that I had written up earlier for the computation of the parity bits. Maybe I have the equations. Let's see if I can display them for you. No, I can't. OK. So what's P zero of n? P zero of n is x n, that's connecting from here, plus x n minus 1 plus x n minus 2. Again, by the way, in this diagram, what I showed as an arrow coming from the output of the shift register is just a shorthand that shows the arrow coming out from the body of the register. It's the same thing we're talking about. OK? So P zero of n is the sum of these three message bits. So we're talking constraint length three here. And what about P1 of n? It's x n plus x n minus 2, with nothing of x n minus 1. All right? So imagine this being the picture for every n. You start off at time zero and keep going. Right? We refer to the state of the shift registers as the pair of numbers that we find in here. So if we're talking about x's that can be zeros or ones, the shift register combination here can be in one of four states. Right? Zero, zero. Zero, 1. 1, zero. Or 1, 1. So four states. So here's a four state shift register into which we're feeding in the stream. And what gets put out on the channel are these parity bits, interleaved. That clear enough? OK. Nothing I haven't said here. Right? So let's actually work through an example step by step. This is clear enough, but let's just see it concretely. Let's assume that I'm starting out with the shift registers in the zero state. And now I've got this message sequence coming in that I want to send out. OK? So the sequence is 1, zero, 1, 1. So the first bit that appears here is the 1. And I've got to generate P zero and P1. Well, P zero is the exclusive OR of these three things. So it's 1. P1 is the exclusive OR of the first and the last. So it's again 1. So that defines P zero and P1 at time n equals zero. The same way at the next time instant, the next message input bit comes in. So we had 1, zero, 1, 1. We took care of the 1 here. Now comes a zero. We do the same thing. So the exclusive OR of all three of them appears here. That's the 1. The exclusive OR of the first and the last appears there. And that's the zero. So you can see how things are getting folded together, because the input that was here before is now sitting in here, and plays a role in the generation of the parity bits for the next step. In fact, the word convolve means to fold together. And this is what it's actually trying to capture. You're folding together these two sets of numbers-- the weights on the taps here and the input sequence. And then the next two cases, similarly. OK. And that's what gets sent out at the bottom. So this is the transmitted sequence. So it's 11100001. Right? That's all there is to it. The implementation with the shift register is very easy. And so, this is actually a very straightforward thing to implement. Now there's another viewpoint that's also very useful here. Another way to look at what's going on, which is thinking in terms of the state of the register and how you move between the states. I guess, how many here are in 6.004?
Are those the ones with smiles on their faces? OK. You see a lot of this there, I imagine. OK. So how do I read a diagram like this? I've got a circle for each state that the shift register can be in. So the shift register can be in zero, zero. Zero, 1. 1, zero. 1, 1. Right? Each of these arcs represents a transition from one state to another. So let me ask you this. What does it take-- if I'm in the zero, zero state with my shift register-- so what you've got in the picture is your shift register sitting there with zero, zero-- what does it take for me to get to the 1, zero state? What must my input have been to get to the 1, zero state? Imagine how these shift registers operate. Right? If I'm going to get from zero, zero to 1, zero, I must have fed in a 1 at the previous time instant. So it takes an input of 1 to go from zero, zero to 1, zero. That's the number that we write before the slash. That's our labeling convention for the arcs. We put the input that it takes to make that transition. And then after the slash, we put the parity bits that are emitted. So what we've got with the 1, 1 is the parity bits that are emitted when you've got input 1 sitting here, zero, zero here, and you're using the parity computation that I had before. Let's see here. So P zero is going to be x n plus x n minus 1 plus x n minus 2. So that gives you 1. And what about P1? P1 is x n plus x n minus 2. So that gives you another 1. OK? So if you're in the zero, zero state and you get an input of 1, you're going to transition to 1, zero. And you're going to emit 1, 1. OK. So the state diagram captures all that. And similarly, all the way around. So I haven't checked each of these. But I hope there are no mistakes in it. By the way, if you're in zero, zero, there's no way to get to zero, 1. Right? So you don't see any arc from zero, zero to zero, 1. If you're in 1, zero, you can get to 1, 1. Or you can get to zero, 1, depending on what you feed in. OK? So it's very straightforward, then, to actually build out this diagram. Why don't we do a little bit more on here? OK. So if I'm abstracting from the shift register picture to something that's more like the state picture, I'm going to say, here are my four states. I've just drawn it a little differently than I have in the upper picture. Instead of circles with the states in them, I prefer to think of them this way. So what we said is, if you get an input of 1, you emit 1, 1. And you'll get to that state. What does it take to get to this state? Somebody? Can I have a hand and a loud voice? Yeah? AUDIENCE: Input zero. PROFESSOR: OK. And then I guess you've got to go back to this to think about what's happening. So I want you to think of a zero sitting at the input here. So what would the parity bits be? The first parity bit will be the exclusive OR of the zero, 1, and zero. So it's going to give you a 1. Right? And then the next parity bit is going to be the exclusive OR of what's here and there. So that's going to be a zero. I hope that matches with what I have upstairs. We're talking about going from 1, zero to zero, 1. It takes a zero input to do that. And what you emit is 1, zero. Right? So you can fill in all of these. This is the state transition diagram. OK. Let's see.
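To tie the shift register view and the state view together, here is a minimal Python sketch of this encoder-- my own, not from the lecture-- which reproduces the transmitted sequence for the message 1, zero, 1, 1 from the example above:

```python
def conv_encode(msg):
    """Rate-1/2 convolutional encoder with constraint length k = 3.
    p0[n] = x[n] + x[n-1] + x[n-2],  p1[n] = x[n] + x[n-2]  (mod 2)."""
    x1 = x2 = 0                      # shift registers start in state (0, 0)
    out = []
    for x in msg:
        p0 = (x + x1 + x2) % 2       # first parity bit
        p1 = (x + x2) % 2            # second parity bit
        out += [p0, p1]              # send the parity bits, not the message bits
        x2, x1 = x1, x               # shift: x[n-1] -> x[n-2], x[n] -> x[n-1]
    return out

print(conv_encode([1, 0, 1, 1]))     # [1, 1, 1, 0, 0, 0, 0, 1] -> 11100001
```

The pair (x1, x2) in the sketch is exactly the state in the diagram: the last two message bits.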
We say that if you've got a constraint length of k-- k equals three here, for instance-- you've got 2 to the k minus 1 states. Well, that's because, in that constraint length, one of the bits involved is the input bit. That's not sitting in the shift registers. So you've got k minus 1 bits left over. So your shift register is k minus 1 stages long. And so you've got 2 to the k minus 1 states. All right. So you could imagine generalizing this to more complicated sorts of situations. Let's see. Just going back to the Cassini example, if you let me jump back a bit, there was a k there. What was it? k of 15. OK. So for Cassini, you're using one input bit and 14 more bits in your register. OK? So you've got 2 to the 14 possible states there. So in these codes, you're actually using very large constraint lengths. OK. All right. I want to go from the state machine view to another view now-- so this is the state machine view-- to something called the trellis view. This, by the way, was a way of looking at things that was developed by someone else who was on our faculty, David Forney. In fact, if you visit his home, you'll see his garden. There's a nice trellis around it. And you'll see why when we draw this. OK. So what's the trellis view? The trellis view says take the state machine, but unfold it in time, so that your transitions over time are not all happening on the one picture. At every time step, you draw the picture again and look to see where you get to. So let's do this. This is the one I want to be most careful with, and where I'll introduce a few notational conventions so that our later life is simplified. OK. So we've got state zero, zero. State zero, 1. State 1, zero. And state 1, 1. OK. Except this is going to be the picture that I have at time-- let's say-- at time n equals zero. At time n equals 1, well, I've got the same shift registers. I'm going to draw this picture again. The easiest way to learn this is to just follow through one example. So please keep your attention here, and you'll have it sorted out. And then you won't have to worry about it again. It's the same thing as with LZW. All right. So it looks kind of detailed. Maybe tedious. But it's actually very simple. Just hang in there and follow through one example. OK. So what does this say? At time n equals zero, I'm in zero, zero. Suppose I get the input zero. What state do I transition to? Here. Right? So if I have an input of zero, I'm going to transition here. So this is with an input of zero. And what are my parity bits going to be? Both zeros. Right? What about if I get an input of 1? Where do I transition to? Well, we've already seen that here. If I get an input of 1, I'm going to transition to here. And what am I going to emit? Well, we've already calculated that. We're going to emit a 1, 1. Right? Let's do it for one more case. We're in zero, 1. What states can I transition to? I could go to zero, zero. And I would do that if my input was zero. Right? And what would my parity bits be? Well, that's another case for us to look at. If our input is zero and we're in state zero, 1, what would the parity bits be? That depends on the specific parity expressions you chose, of course. For ours, 1, 1? Do you agree? And if I get an input of 1 instead, where do I go? I'm going to go to 1, zero, which is here. OK? So if I had an input of 1, I would go to 1, zero.
My parity bits would be-- what would they be? Can I have a hand and a voice? Yeah. Zero, zero. Right. OK. So it's that simple. That's all you have to do. Fill out this picture, and you're seeing what the state machine translates to at the next time instant. We're not using anything more than is in the state transition-- sorry-- in the state machine diagram. But we're unfolding things in time, which is actually very helpful. Now, here's a simplification we'll make in drawing this. Because I've arranged the states in natural binary counting order-- zero zero, zero 1, 1 zero, and 1 1-- it's always the case that the upper arrow that emanates from a state corresponds to an input of zero, and the lower corresponds to an input of 1. OK? So I don't really need that first thing before the slash. I'm just going to dispense with it. So of the two choices that you have when you come out of a box, if you're going up, the input is zero. And if you're going down, the input is 1. So I'm just going to label that as zero, zero. OK? I'm going to label this as 1, 1. And I guess I've forgotten already what some of these are. But you can see what the whole picture starts to look like. OK? So let me actually-- I'm not going to do these in detail. But let's just see how the next stage would differ, if at all. When I come to n equals 2, well, it's the same story all over again. So whatever pattern of arrows I had coming out of here, I have the same pattern at the next stage with the same labels. Right? Because there's nothing different. So if you'll allow me, let me actually fill out a few of these. And you'll get practice drawing one of these in recitation, maybe for another example. So I can keep going with these. Let's see here. This is going to be 1, 1. If I haven't found two arrows coming out of each box, then I'm not done. Oh, this is wrong. Right? Thank you. From zero, 1, I can go to 1, zero. OK. So there are two arrows coming out of each box-- the upper arrow corresponds to the input having been zero, the lower arrow to the input having been 1. And there are two arrows going into each box as well, corresponding to whether the bit that's going to get dropped off the end is a zero or a 1. All right? So there's a real symmetry to this. I'll draw one more stage, just a little bit, to make a point here. OK. So you can keep going. So how do you generate a code word from a trellis diagram? You're starting in some state. Typically, it's the all zero state. In fact, what you'll usually do is have a header for your message stream which is all zeros. So you force the shift register to be in the zero state once the real message bits come in. And then you move from here. So you're typically starting here. And then you navigate, depending on whether you've got a zero or a 1. So if the first message bit is a 1, you're going to go down here. If the next message bit is a zero, you're going to go up here. If the next message bit is a zero, you're going to go up there. And the code word that you emit is going to be, in that case, 1, 1, 1, zero, 1, 1, and then all zeros, assuming you're staying at zero from then on. Right? So depending on what the message sequence is, you can actually go through the trellis-- it's infinitely long, or as long as your message sequence is-- and figure out what the code word is that's emitted. So this is actually just a graphical way of displaying code words. So the set of code words that I get, does that correspond to a linear code?
Let's assume that, somewhere downstream, all these things come back down to zero, zero. OK? So I'm only considering a finite window of things. It's not going to go on forever. So suppose I'm going to end my input messages with zeros at the end and come back down to that state. So my messages will always start with zero, zero to force the register to the zero, zero state, and they'll end with zero, zero. OK? The set of possible code words is the set of parity bits I emit along the way as I navigate through the trellis. Does the set of code words constitute a linear code? That's the question. Maybe not obvious. Right? The way you answer that is actually by thinking back to this setting. So one particular code word would correspond to a particular input sequence that generated it-- a particular message sequence. Another code word would correspond to another message sequence. And the question is, is there a message sequence that would generate the sum of these two code words? And actually, it turns out that the answer is yes, because these parity relationships are based on a nice linear operation. OK? So it turns out that the set of code words that you generate constitutes a linear code. So if you were going to think of a minimum Hamming distance for this code, what would you want to be thinking of? I don't know if I've actually drawn this correctly right now. Has anyone spotted any errors along the way? Or do I have it right? Seems to be OK. How would you look for a minimum Hamming distance in the set of code words generated over this window? AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry? I didn't hear where that came from. Yeah. Can you speak up? The minimum number of ones in a non-zero code word. Right? So it would be the weight of the minimum weight code word you'd find among all the non-zero code words. So you would have to find a path starting here. All my code words are going to start here and end there. You have to find a path through this that picks up the minimum number of ones-- a path that's different from the all zeros path. OK? Find a path through there which has the minimum number of ones in the code word. So what would that be? In this particular case, maybe this path, highlighted in another color. This is not a proof. This is just a suggestion that this might be it. And you would have to explore all the other paths. But what would be the minimum weight along this one? Let's see. I've got 1, 1 there. 1, zero here. 1, 1 there. So I would get a weight of 5 on this path. Now the question is whether you could find another path with a smaller number of ones attached to the code word. And I think if you work this out in detail on [INAUDIBLE], you'll find that you're actually stuck with 5. OK? Now it turns out that the interpretation of this number is not quite as straightforward as the interpretation of the minimum Hamming distance in block codes. And the reason is that this is actually a more complicated kind of picture, because it continues on with this structure. So we don't actually call it the minimum Hamming distance. We call it the free distance. So I'm just trying to evoke this. So the minimum weight code word you find among the non-zero code words will indeed be a code word of weight 5. But the interpretation of that number may not be as direct and simple as in the case of the Hamming distance. But it's close. OK?
So what it really tells you is that, over a data length that is maybe not much longer than this, the code words that you have are separated by at least this distance. So you might expect that you could correct two bit errors over data lengths that correspond to code words that are somewhat longer than this, perhaps. OK? So that's all very hand-wavy. But that's all we're going to do with the notion of free distance. So this is more complicated to deal with than a block code. But the free distance actually has that kind of intuition. It has the intuition of minimum Hamming distance locally, over this window of data. Even if this went on for thousands of bits, if you got a burst of errors in this stretch that had up to two errors, you could correct them. Now, we haven't talked about decoding. We're going to talk about that the next time. OK. So that answers this piece. Now let me say one thing about decoding, just to set us up for next time. If I didn't have any noise in my channel, it actually turns out that decoding is pretty trivial. How is that? If I gave you the sequence of parity bits, can you think of a way that you could recover the input sequence? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Good. Yeah. You see, if I add these two, I get x n minus 1 equals P zero of n plus P1 of n. So if you give me the parity bit stream, I can reconstruct for you exactly the input, with a one step delay. That's pretty good. I'm happy with that. If it's taken minutes for the signal to reach me from Saturn, I'm happy with a one step delay here in decoding. All right? So in the absence of noise, the inversion is simple-- the inversion meaning deducing the input message bits from the output, the parity bits. And this is a theme you'll see in many other settings. If there's no noise, inversion is easy. You can look at the output of a system and figure out what the input was, if you know exactly how the system was creating the output from the input. But in the presence of noise, you've got a problem. Because you see, if you have these parity bits corrupted at some rate-- every few bits, you've got errors-- well, your reconstructed message is going to have that same kind of error rate. OK? So in the presence of noise, it's really an unsatisfactory way to do it. So this doesn't work. We'll be looking at something more careful next time. OK? Well actually, since I have you here, let me put up the spot quiz. We haven't quite hit the mark. So can you answer these for me? What's the constraint length of this code? Anyone? Who hasn't answered? Yeah? 4. Right? Because you've got x n and x n minus 1, x n minus 2, x n minus 3. That's the largest window over which you're picking things. What about the code rate? 1 over 3? Right? Because for every message bit, you're generating three parity bits. You're going to shift out those three parity bits before you do the shifting on the shift register. So the code rate is 1 over 3. The coefficients of the generators, of course, you can read up there. What about the number of states in the state machine here? What? 8. Right? Because the constraint length is 4, but one of those bits is the input. So you've got three bits that you're storing in memory. 2 to the 3 is 8. So the number of states in the machine is 8. OK? So a more complicated picture than this one, but the same principle. All right. We'll continue next time.
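As a footnote to that last decoding observation, here is a quick check in Python-- my own sketch, not from the lecture. For the rate-1/2 code above, adding the two parity bits in GF(2) cancels x n and x n minus 2, leaving x n minus 1:

```python
def invert_noiseless(pairs):
    """Recover the message (delayed one step) from noiseless parity pairs:
    p0[n] + p1[n] = (x[n]+x[n-1]+x[n-2]) + (x[n]+x[n-2]) = x[n-1]  (mod 2)."""
    return [(p0 + p1) % 2 for p0, p1 in pairs]

# The encoded 1, 0, 1, 1 from the earlier example: 11 10 00 01.
print(invert_noiseless([(1, 1), (1, 0), (0, 0), (0, 1)]))
# [0, 1, 0, 1] -- a leading 0 from the initial register state, then the
# message coming back with a one step delay; useless once the channel adds noise.
```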
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
18_MAC_protocols.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So the goal for today is to talk about a particular kind of communication network called a shared medium network. And there are many, many examples of shared media communication networks that exist today. Now, these are, generally speaking, networks that don't span the globe or even span the country, with one exception, and that exception is the satellite network shown here. The model here is that you have a set of ground stations. This is a picture from an actual internet service provider in the middle of the Pacific. I mean, I'd love to go there on assignment. And the way this network works is that, in order to communicate between these islands, they have satellite ground stations that can receive and transmit data. And there's a satellite up in the sky. So you communicate between two of these islands by transmitting data from one of the ground stations up to the satellite and down to the other. And the way this system works is not to divide frequencies amongst the different ground stations. And the reason they don't do that is that they don't know how often each of these ground stations is going to communicate. So you don't divide up the frequencies upfront between the ground stations. So all of the ground stations sending to the satellite do so on the same frequency. And the satellite downlink is on a different frequency. So what could happen if two ground stations communicate to the satellite at the same time is that the satellite may not be able to pull the two different transmissions apart, because they're on the same frequency. Now, they could use frequency division multiplexing. That's a perfectly reasonable solution. But they don't do it because if one ground station has data to send and the other doesn't, you end up wasting frequencies and not getting as high a throughput. So that's one example of a shared medium network. This here is a picture of something called ethernet, which was one of the most successful networking technologies developed. It was invented in the early 1970s. And the idea here is you have a shared bus, a shared wire, and many stations connect to it. And if two stations transmit at the same time, the two transmissions could collide and you don't actually receive the data. And what you would like to do is to figure out a way by which these different stations or nodes on the ethernet can somehow manage to communicate by collaboratively figuring out how to transmit on the medium. And the idea is to make it so that only one node transmits at any point in time. Other examples of shared media are wireless networks. 802.11 is an example of a shared communication medium. If a bunch of us communicate and share that access point, we're all running on the same channel. And in 802.11, or Wi-Fi, there are a bunch of different channels available. But any given access point tends to have a cell, which is some area around it in which, in order to communicate with that access point, you use the same channel. So we need a way to take the shared network and allocate access to this medium amongst these different nodes.
And if you look at an entire building, there's a big plan in place for how the different access points communicate, potentially on different channels or the same channel. And there's an entire process by which this building's network is laid out and the campus network is laid out and so forth. And cellular networks-- you know, Verizon, AT&T, Sprint and T-Mobile and all these guys have the same problem that they have to solve. So these are very interesting questions which boil down to how you take a shared medium-- whether it be a wire like an ethernet, or radio, which is over the air, or it could be acoustic-- how do you take all of that and have these different nodes that are communicating with each other on that medium somehow make it so that they can all share the medium without colliding with each other? And that's the problem that's solved by these media access or MAC protocols. Now, all of the stuff that I'm going to tell you is based on chapter 15, which is on MAC protocols. And there's a little more in it than we'll be able to cover in lecture and recitation. But it's hard for me to keep straight what we cover and what we don't. So you're sort of responsible for everything in that chapter. We'll cover most of the issues. There's a bunch of details in the chapter that are worth understanding in order for you to really get your understanding to be clear. So I want to caveat that upfront. We lost a lecture because of the hurricane. And I just can't keep straight what I tell you and what I don't. So everything in that chapter is fair game. But we'll cover the most important issues. OK. So here's the abstract model. And you know, there's a shared medium. It could be a wire. It could be wireless. And you have these nodes. And the nodes have two things there that you have to worry about. The first thing here is just a detail. It's the channel interface. It's the way by which the data on that node gains access to the medium. And then each node has a set of queues-- or one queue or two queues, a transmit queue and a receive queue-- which hold packets waiting to be sent on that medium. And the basic idea is that you have all these nodes on the medium. And it may be that these nodes are all trying to communicate with a single router or a switch or a satellite. Or it may be that the nodes are directly communicating with each other, like your laptop is communicating with mine. Your phone is communicating with your laptop. And somehow they're all sharing this medium. And we're going to come up with the world's simplest model of a shared medium network, because it's simple and because it's a reasonable model of reality: at any point in time, if more than one transmission is on that shared medium, then you end up having what's called a collision. And you cannot decode the packet. So let me repeat. The model here is if there's exactly one transmission on the medium, we'll assume that it's delivered correctly. If there is no transmission on the medium at some point in time, nothing happens. The channel is sort of wasted. It's not used. If there's more than one transmission on the medium overlapping in time, neither transmission successfully gets decoded. So if there are two or three or four people hitting the channel at the same time, overlapping in time, nobody succeeds. OK. That's the abstraction. And what we want is a communication protocol or rules of engagement between the nodes that allow us to get reasonably good performance in sharing the medium.
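To pin this abstraction down, here is a minimal Python sketch of the slot outcome rule (the function name is my own, not the chapter's):

def slot_outcome(senders):
    # senders: how many nodes transmitted in this time slot
    if senders == 0:
        return "idle"       # channel wasted; nothing happens
    elif senders == 1:
        return "success"    # exactly one transmission is decoded correctly
    else:
        return "collision"  # overlapping transmissions; nobody succeeds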
Now, depending on the details of the network, the nodes that are on this shared medium may be able to hear each other perfectly. Or maybe they can't hear each other at all. Or maybe they can partially hear each other. And we're going to deal with all of these cases. But for now, just assume that you have these nodes on the medium. And for simplicity and completeness, let's take the example of the satellite network. You have a satellite up here. And you have a bunch of ground stations that are all trying to communicate up to the satellite whenever they have data to send. And just assume for now that the nodes cannot hear each other. And the satellite doesn't really know whether a node has packets waiting to be sent or not. And somehow, we're going to invent a protocol, or rules of engagement, that allows these nodes to, in a distributed way, figure out when it's OK for them to transmit data and when they should keep their mouth shut. And each of these nodes has a queue of packets waiting to be sent. Sometimes the queue may be empty. Sometimes the queue may have one or more packets. That depends on what the application is doing and how quickly that node has been able to transmit data on the channel. So a queue can either be empty, or it has one or more packets waiting to be sent, in which case we're going to use the word backlogged. So at any point in time, some subset of the nodes may have backlogged packets-- may have backlogged queues. And our goal is to come up with a way by which these nodes can transmit data on the channel. Now, there are two or three different ways of solving this problem. The first approach to solving this problem is to do frequency division multiplexing. You could just allocate different frequencies and you've solved the problem. But as I mentioned before, we don't want to do that, because if you've allocated and predefined a frequency for a node and the node is empty-- it doesn't have packets-- then you've essentially wasted bandwidth, because you've allocated a portion of the frequency to a node that isn't actually going to send any data. Somebody else who had data to send could have used it more profitably and gotten better performance. The second thing you can do is to somehow divide up time. In other words, they all run on the same frequency, and somehow you make it so that you give the illusion that each node gets access to the channel for some fraction of the time. And if you do that, it's a model of sharing called time sharing, as opposed to frequency sharing. So one approach to dealing with it is frequency sharing, which is a good idea in some cases, but not in this case, where the traffic is quite bursty. The second thing you can do is time sharing. And there are two ways of doing time sharing. One of them is called time division multiplexing, or TDM. It's also called TDMA, for Time Division Multiple Access. And we'll talk about this in recitation tomorrow, so I won't belabor it here; a one-line sketch appears below. The second approach to solving this problem, which is what we'll spend the rest of today on, is the class of protocols called contention protocols. And these protocols are really beautiful, because they're completely distributed. There's no central intelligence. It's highly distributed intelligence. Each node makes its own independent decisions as to what it should do based on very little information that it learns as it sends its packets and determines what happens to those packets.
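For contrast with the contention protocols we're about to study, the round-robin TDM idea mentioned above fits in a couple of lines (a hedged sketch; the function name is mine):

def tdm_owner(slot, n):
    # with n nodes, node (slot mod n) owns this time slot; no collisions ever,
    # but the slot is wasted whenever its owner has nothing to send
    return slot % n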
And yet, somehow the nodes are able to come up with a way of sharing the channel by essentially cooperating but yet competing with each other-- that's why these are called contention protocols-- and it ends up getting pretty good performance. There's a third kind of sharing, which we won't talk about at all in 6.02, and that's code division. The slides that are online say a little bit about it. And you can look it up on the internet, if you want. We're not going to talk about it here. So for today, my goal is to tell you about contention protocols. And largely speaking, for MAC protocols right now, in this chapter and this part of the course, we're interested in time sharing: TDMA and contention. So before I tell you about the protocols, I want to tell you a little bit about what we would like-- what makes a protocol good and what doesn't, what's bad. So if I tell you that you have a bunch of these nodes that are trying to share this medium and you would like to get good performance in this protocol to share access to the channel, by what metric or metrics would you measure this performance? How would you know it's good or bad? Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Well, the model here is if two people send at the same time, you fail. We fail. AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. So good, let's keep going with that. Now let's say that someone observes the system and watches its evolution over time. They observe whether packets succeed and packets fail and so forth. And they want to count something. What would they count to determine if a protocol is good or bad? AUDIENCE: Rate of failure? PROFESSOR: Sorry? AUDIENCE: Rate of failure? PROFESSOR: Rate of failure. I mean, let's be positive, people-- let's count the rate of success. OK, so there's a word for this rate of success. If you succeed more, it means you're able to deliver data faster, which means you get higher throughput or a higher rate. So the first metric that you have-- so these are metrics. The first metric that you have is throughput. And throughput is generally measured in bits per second or packets per second. So let's just imagine that it's packets per second, for simplicity assuming that all the packets are the same size, so you can translate into bits per second. Now, throughput by itself is a good metric. But really, we would like to evaluate protocols without worrying about whether the underlying channel can send data at 1 megabit per second or 10 megabits per second or a gigabit per second or 100 gigabits per second. I mean, we really don't want to care about what the actual throughput or the rate supportable by the channel is. So we're going to translate this throughput into a different term, which is a proxy for throughput, called the utilization. And we'll represent that by U. And the utilization of a MAC protocol-- or in fact, of any protocol over a network-- is defined as the throughput that the protocol achieves divided by the maximum rate at which it's possible to send data over that channel or over that network. So if there's a certain maximum rate at which, if everything were in your favor, you could send data, stick that in the denominator. Look at what throughput you actually get. And take the ratio. The higher the utilization, the higher the throughput. We've just normalized out by the maximum rate. And of course, by definition, we know that the utilization must be between 0 and 1, because you can't exceed the maximum.
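As a quick sketch of that definition (mine, using the numbers from the example that follows):

def utilization(throughputs, max_rate):
    # aggregate throughput divided by the maximum possible rate; always in [0, 1]
    return sum(throughputs) / max_rate

print(utilization([1, 2, 4, 1], 10))  # 0.8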
And this is an aggregate measure. So if you have, let's say, four nodes and the max-- just for example, let's say the max rate is 10 megabits per second and you have four nodes. And the throughputs that the four nodes get are, let's say, 1 megabit per second, 2 megabits per second, 4 megabits per second, and 1 megabit per second. What's the utilization? The total throughput is 8 megabits per second. The maximum is 10. So the utilization in this example is 0.8. In fact, for the same four nodes, if the throughput you got was 0, 1, 7, and 1, the utilization would also be 0.8. Can't add. Can't multiply. But can convolve. All right. Now, which of these two is better? That depends on if you're a Democrat or a Republican. But which of those two is better? If you were designing a network, which of these would you want? AUDIENCE: [INAUDIBLE] PROFESSOR: What? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, a Democrat. This is a tough question. You might say that everybody should be as equal as possible. You might say that they should get throughput in proportion to what they pay for. There are various ways of thinking about this. We're not going to worry about it-- I mean, it's actually a pretty deep set of topics that connect with economics and social justice and so on. And a lot of people do work on what it means to have fairness-- different kinds of fairness. And for those who are interested, I can point you to lots of literature. And it's still somewhat open in terms of, what is the right way to think about it? We're simple people. We'll just assume that, absent any other information, we would like things to be as equal as possible. And to do that, there are many ways to define fairness. And there's a particular one that I like because it's simple and somewhat intuitive. And it's used a lot in networking. It's called the Fairness Index. This isn't the only way to measure fairness. But this is one that we'll use. And the definition of it is that I will look at either the utilization or the throughput. It doesn't matter. Let me call that x i. In other words, x i is the throughput-- or the utilization; let me call it throughput-- achieved by node i. And if I look at this number here, the sum of the x i, squared, divided by n times the sum of the x i squared, that's my definition of fairness. Now, this looks a little daunting. But it's actually very simple. What it's saying is that I take each of the throughputs that I get, I add them all up, and I square the total. So if I were to divide the numerator and the denominator by n squared-- essentially, this is capturing for me something that's related to the ratio of the square of the mean to the second moment. So in other words, what ends up happening with a term like this-- this is a second moment kind of a definition of fairness-- this thing here looks like the square of the mean. This thing here is related to the variance. And this is capturing the ratio of those two terms. If you have a situation where you have n nodes and you end up with everybody getting equal throughput, then the fairness index is 1, because if everybody has an equal value and you just run the calculation through, you'll find that the answer is equal to 1. So if this is F, F is between 0 and 1. What's the minimum value of the Fairness Index from this formula? Well, that happens when you have n guys and the throughput looks something like this: one of them gets all the throughput and the others get 0. And if you plug that in here, what you'll find is that only one term survives, the 1 over n term, and everything else becomes 0.
And so the minimum value of the fairness is 1 over n. And this is intuitive in that it says that if you have two people and one guy hogs everything, that's a little bit less unfair than if you have five people and one guy hogs everything, because in a sense, one guy hogging everything out of five is worse than one guy hogging everything out of two. The only real reason we care about this fairness is we're going to compare different variants of a protocol along this index. And I'd like you to get a feel for what is a little fairer and what's less fair. There's nothing particularly sacred about this particular definition of fairness. And indeed, people will argue also that this is a terrible definition of fairness because it doesn't reflect how much people have paid for the resource. But those are details, because you could have weighted versions where people are weighted by how much they pay, and so forth. So you can handle some of that stuff. Is this kind of clear, these two basic definitions? So we're going to worry about throughput and fairness in protocols; a small sketch of the fairness index follows below. Any questions so far? Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Depends on whether you care about it or not. I mean, it depends on the application. So typically speaking, when I look at, for example, measuring the performance of my-- let's say I'm downloading web pages. When I measure the performance of web downloads, all header information is just pure overhead. So I only look at how long it's taken to download my content. But now I can go in and look at how well my TCP, which is the transport protocol, or IP, or whatever lower level protocol, is working, in which case, everything else is overhead. But I will include the particular headers related to TCP in my measurement. So the answer to whether something is overhead or not sort of depends on what it is you care about. So if I'm delivering video, you know, all this other stuff is overhead. And the only thing I care about is my video frame. So typically, the word throughput is by itself not meaningful. You have to say throughput of what. And here I'm talking about the throughput of a protocol, which is what you always have to say. And this is the throughput of the MAC protocol. And it does include the MAC header overhead if you have any headers. Any other question? OK. There's a third metric that's important, which is that we would like to have reasonable delays. So in general, delay is a good metric, or bounded delay is a good metric. And this matters because I can get extremely high utilization and extremely high fairness by doing something utterly dumb and naive, which is that if I have all these nodes with a lot of packets, if they're all backlogged, then what I will say is, today this node gets access to the network. Tomorrow, he gets access to the network. Day after tomorrow, he gets access to the network. If they're always backlogged, then clearly the utilization is very high, because the network is always being used profitably and there are no collisions and no idle slots, no idle time. The network is fair, because if I measure this over a month or a year or something, everybody gets equal throughput. But I've clobbered the delay, because this guy got lucky but everybody else is waiting and waiting. And in fact, even he has to wait. Once today is over, he's got to wait many days before he gets a turn. So you actually would like to have bounded delay, or at least low delay. And this is something we're going to measure.
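Here is the fairness index from above as code (a sketch of Jain's index; the hypothetical throughput vectors are mine):

def fairness(x):
    # Jain's fairness index: (sum of x_i)^2 / (n * sum of x_i^2)
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

print(fairness([2, 2, 2, 2]))  # 1.0 when everybody gets an equal share
print(fairness([8, 0, 0, 0]))  # 0.25 = 1/n when one node hogs everything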
We're not going to try to optimize it in the work we're going to talk about. OK. So that's the statement of the problem and the statement of the metrics that we wish to optimize. You have to ask me questions, because if the problem setup wasn't clear, the solution is going to be completely meaningless. So does anyone have any questions about the problem statement? It's one of these things where stating the problem is actually a little harder than the actual solution. So I'm going to tell you now one method to solve this problem. I'll get to you. And the method we're going to use to solve this problem was invented in the context of satellite networks, and then it got put into ethernet, and it's now part of Wi-Fi. So everybody uses a variant of the method that we are going to study. Yes? AUDIENCE: [INAUDIBLE] how do you measure delay? PROFESSOR: The delay is measured per packet. It's measured from when the packet showed up at the queue to when it actually successfully got received at the receiver. And then typically we'll take an average across all of the packets and report an average delay, and perhaps the standard deviation. You had a comment? All right. So the solution we're going to study-- the basic solution is something called ALOHA. ALOHA was the protocol developed by a group led by Norm Abramson, who was a professor at the University of Hawaii at the time he did this. I believe he moved from Stanford to Hawaii because he was an avid surfer. And he decided-- this was in the late '60s-- that he wanted a scheme to connect the different islands together. There were seven of these islands-- seven of these stations that he wanted to connect together. And he came up with a scheme that on the face of it should really not work. And the only good thing about it is how utterly simple it is. And the fact that it works is actually very fortunate and very useful. And the reason it works is because, in a way, nodes doing things that look completely random-- as long as the probability with which they do these things is controlled-- turn out to work pretty well. So let me first show you a picture that will define a few terms. And I'm going to come up with a version of this ALOHA protocol that ends up being a pretty popular version. And it's called slotted ALOHA. And this model that I had from before, where the nodes have queues, and when two packets run at the same time they end up colliding-- all of that remains. I'm going to add one or two more restrictions to the model. The first-- an important thing that defines slotted ALOHA, and in fact defines real implementations of this kind of protocol-- is that time is slotted. What that means is that you cannot send a packet at an arbitrary point in time onto the network. Instead, you view time as a continuous variable, and you divide up time into time slots. I mean, these slots could be any length. It doesn't matter. And the assumption we're going to make is that packets can only get transmitted at the beginning of a time slot. So these are legal. Let me-- this is a legal packet transmission. And this is a legal packet transmission. But this is not a legal packet transmission. Not allowed. And the second assumption is that every packet is an integer number of time slots. So in other words, this is legal. But this is not legal. You cannot have a packet that starts here and ends in the middle of there. All packets are an integer number of time slots. OK.
So if I have both of those assumptions, that gives me ALOHA. For slotted ALOHA, we're also going to make the additional assumption that each packet is exactly one time slot long. Later on, we'll relax this assumption. But I want to come up with the world's simplest working protocol. In other words, the only legal packets in slotted ALOHA are like that. OK. And as shown here, this is a picture of how slotted ALOHA works. You have time going on that axis. Time's divided into slots. And I have these three different nodes-- blue, red, and green. When no node sends a packet in a time slot, that time slot is said to be idle, and the channel is said to be idle in that time slot. If you have more than one node sending in a time slot, we have a collision. And in our model, none of the packets sent in that time slot gets decoded. All of them are wasted. And everything else is a success. If you have a time slot in which exactly one packet is sent, we'll assume in this model that the packet is successfully decoded, and we get to count that as a successful packet reception. So if you count and look in this picture, the utilization here is 65%, because we have 20 time slots here, and in 13 of those time slots, we were able to successfully transmit exactly one packet each. And that gives us the utilization of 65%. And the advantage of picking many of these things is you can't really check in real time if I got the numbers right or wrong. You can check that later on. I'm pretty sure-- I'm not sure of anything. It's probably correct. You should just count the number of slots in which exactly one guy sent. OK. So that's the picture here. So what I want to do now is to come up with an algorithm, with a protocol, that each of the nodes can implement that allows us to get reasonable utilization, reasonable fairness, and reasonable delay. And one way of solving this problem is to solve it in this context, under these assumptions, and then calculate the utilization of that protocol. So let me start by telling you what the protocol is. Each node independently runs a version of this protocol. And in the simplest version of the protocol, each node maintains one variable. And the variable it maintains is a probability. So each node maintains a variable. I'm going to call it p. And p is the probability with which the node will transmit a packet if it has a packet to transmit. In other words, each node has its own version of p. And the semantics of this is that if backlogged, we won't just greedily go ahead and transmit. But instead, if we're backlogged, we will transmit our packet on the channel with probability p. How do you actually do something with probability p? If I were asking you to write a program to transmit a packet with probability p, how would you actually do that? Yes? AUDIENCE: Call a human to roll a dice for you. PROFESSOR: Call a human to roll a dice. All right. Let's try to make it a little more practical. Actually, a dice has only got six sides. How will I get p out of a dice? AUDIENCE: A lot of dice. PROFESSOR: A lot of dice. OK. What if I want p equal to 10 to the minus 17? AUDIENCE: A lot of [INAUDIBLE]? PROFESSOR: Yeah, that's-- all right. Does someone have a slightly more practical solution? Yes? AUDIENCE: Could you have a random number between 0 and 1 [INAUDIBLE]? PROFESSOR: That sounds good.
So you pick a random number between 0 and 1. I mean, how do you get that? Well, that's a deep question. But I would just call random.random in Python, or whatever the thing is. And you know, how Python does it is-- there are ways to botch it. But for our purposes, we'll assume it's correct. And if the number you get is less than or equal to p, that gives you an event with probability p. That's great. OK. So suppose we did this. I'm not telling you how to pick p yet. That's magic that's going to come up a little bit later. But suppose every node had a value of p, that someone came and told it, you know what? There are n nodes in the system. Let's assume there are n backlogged nodes in the system. And somebody came and told each node that it can transmit with probability p. What is the utilization of this protocol? So if I have n backlogged nodes, each transmitting with this probability p, what is the utilization of the protocol? I want to know the utilization, which is, of course, a function of n and p. And the way you have to answer this question is, of course, you draw this picture in time and you divide time into slots. You've got some of these things where you have one packet going through, some of these things where more than one goes through, in which case, that's a collision. And I ask you, what is the utilization? How would you calculate it? Suppose you observe the running of the protocol and you find that in some time slots there's a collision and in some time slots there's nothing. And in some time slots, there's exactly one transmission. If you look at this, how would you calculate the utilization of the protocol? The utilization is the throughput over the maximum rate. What's the maximum rate of this channel? Well, the rate here is measured in packets per time slot. And I've said that each packet occupies one time slot. So the maximum rate is every time slot occupied with a packet. So the maximum rate is one packet per time slot. You cannot send faster than that. So the denominator is just one. Therefore, the utilization in this model is simply equal to the throughput, which is simply the fraction of time slots in which I have exactly one packet. Or put another way, if I look for a long enough amount of time, it's the number of time slots with exactly one packet that tells me the throughput, and therefore, tells me the utilization. So if I look for a very, very long time and I count the number of successful packets, that's going to tell me the utilization if I take the number and divide by the number of time slots. And therefore, the utilization is simply equal to the fraction of time slots in which exactly one packet is sent, which is equivalent to asking, what's the probability that in any given time slot, exactly one packet is sent? So I'm going to repeat this. The utilization of this protocol is exactly equal to the probability that in any given time slot, I have exactly one transmission. If you disagree with that statement, or if you don't understand it, you have to raise your hand now, so I can explain it again, because we're going to use this idea repeatedly-- there's pretty much guaranteed to be some question on the quiz related to this idea. And then you have to work some probability out. So does everyone understand why the utilization of the protocol is equal to the probability that in any time slot there's exactly one transmission of a packet?
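A quick empirical check of this claim (a sketch; the names are mine): n backlogged nodes each send with probability p in every slot, and we count the fraction of slots with exactly one sender, which is the utilization in this model.

import random

def slotted_aloha_utilization(n, p, slots=100000):
    good = 0
    for _ in range(slots):
        senders = sum(1 for _ in range(n) if random.random() <= p)
        if senders == 1:
            good += 1
    return good / slots

print(slotted_aloha_utilization(10, 0.1))  # about 0.387, matching n p (1-p)^(n-1)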
The reason why that's true is it follows from this definition, because the maximum rate here is one packet per time slot. And so I want to know what the throughput is. The throughput is simply, I look over a long period of time, many time slots, and I count the number of packets I sent. So if I take the number of successful packets I sent and divide by the number of time slots, well, that's actually the definition of the probability that in any given time slot I have exactly one successful transmission. And therefore, the utilization is equal to the probability that I have exactly one transmission. So I have n backlogged nodes. Each guy sends with probability p in a time slot. What's the probability that I have exactly one transmission? AUDIENCE: [INAUDIBLE]. PROFESSOR: Well, there's certainly a p, which is success, times 1 minus p to the n minus 1, which is all the other guys keeping quiet. Is this right? It's almost right. AUDIENCE: And then times n because there's n ways to [INAUDIBLE]. PROFESSOR: Times n. There's an n choose 1, which is there are n ways to pick the winner, and therefore that's the answer. All right. So let me write this as u equals n times p times 1 minus p to the n minus 1. Suppose you knew n. Suppose n were some value-- 10, 15, whatever. Then, assuming n is constant, I could view this as a function of p. If I want to maximize this utilization-- and obviously I want to maximize it, right?-- what should p be? I'll call it p star. What's the value of p equal to p star that maximizes this utilization? Yes? AUDIENCE: 1 over n. PROFESSOR: 1 over n. Yeah, you could do this the hard way or the easy way. AUDIENCE: Differentiating [INAUDIBLE]. PROFESSOR: Great. So if you did that-- if you do u prime of p equals 0 and you solve that, you'll find that p star is 1 over n. The long way to do it is the way you describe. But the answer is intuitive, because what it really says is that, if I did have every node transmit with probability 1 over n in a time slot, the expected number of transmissions in any given time slot is n times 1 over n, which is 1, which is kind of what you would expect. You would expect that to maximize the utilization, the expected number of nodes that transmit in any given time slot is 1. And that's fortunately what you do get if you solve the equation. So p star is 1 over n. So if somebody told you the number of backlogged nodes in the network and they had the ability to program each node to set the appropriate probability, the probability you would use is 1 over n. So let's assume still this world where we know the number of backlogged nodes. And somebody came and told us the probability we should use. Do I need anything here? Let me erase this. If somebody came and told you the probability you should use, you would pick 1 over n. That's great. So now I can view u star of n, the maximum utilization, which now becomes a function of n, because, assuming you picked this value of p, you can stick p equal to 1 over n into this formula, and then you get a utilization that's purely a function of n-- because I want to draw this picture of what it looks like with n. So you start off: u star of n is equal to n times 1 over n, which is the best value of p, times 1 minus 1 over n to the n minus 1. Does that make sense?
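A small numerical check of the formula on the board (mine, not from the lecture): U(n, p) = n p (1 - p)^(n - 1), maximized at p* = 1/n, with U*(n) heading toward 1/e.

import math

def U(n, p):
    return n * p * (1 - p) ** (n - 1)

# crude sweep over p for n = 10: the maximum lands at p = 1/n = 0.1
best_p = max((k / 1000 for k in range(1, 1000)), key=lambda p: U(10, p))
print(best_p, U(10, best_p))   # 0.1 and about 0.387

for n in (2, 3, 10, 100, 10000):
    print(n, U(n, 1 / n))      # 0.5, 0.444..., 0.387..., settling toward 1/e
print(1 / math.e)              # about 0.3679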
Haven't done anything other than substitute the value of p star equal to 1 over n. And that's equal to 1 minus 1 over n to the power n minus 1. Now, one of the questions we want to ask is-- this is the best you can do in this protocol-- how good or how bad is it? So let's draw a picture of what this looks like as n becomes large. So I want to draw a picture with n on this axis and the best value of the utilization, which is u star of n, on this axis. Well, when n is 1-- actually, intuitively, when n is 1, what's the utilization? When you have one backlogged node and the protocol runs with the value of p equal to 1 over n, what's the utilization? The utilization is 1. From this formula, you've got to take the limit, and so on. But the answer is 1. So let's assume this is 1 and 2 and 3 and 4, and so on. So it's 1. What is it when n equals 2? What's the utilization when n equals 2? Well, it's 1/2 to the power 1, which is 50%. So I get a value which is 1/2. When n is 3, what happens? Well, 1 minus 1/3 squared, which is 4 over 9. As n goes bigger and bigger and bigger, what does this value become? Yeah? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, as n goes bigger and bigger and bigger, you take the limit as n goes to infinity of this thing here. n minus 1 and n are more or less the same thing, so you can get rid of that. This should be a well-known limit. If you don't know it, you take the log and you find the limit as it goes to infinity. You can expand that into a power series. And you'll find that the limit of the log is minus 1, so the value goes to 1 over e. So in fact, it goes to a value which is 1 over e when n is large, or about 37%. This is actually not bad. It's actually very good. For a protocol that did nothing sophisticated-- all it did was pick a value of this probability-- the fact that it's able to get not a zero utilization but a reasonably good utilization is a pretty strong result. And that's the basic ALOHA protocol. The basic ALOHA protocol, or a fixed probability ALOHA protocol, is somebody telling you the number of backlogged nodes and you using that information to make sure that every node sends with some probability. And the probability you would pick is 1 over n. Now, this is not actually a very practical protocol, because how do you know which nodes have backlogged packets and which nodes don't? What we would like is to come up with a protocol that somehow automatically gets us to the correct utilization. And we're going to do that by adding a method to this ALOHA protocol that will make it completely practical. And that method is called stabilization. And the purpose of stabilization is to determine at each node what the actual value of p it should use is. And that value of p is going to change with time as other nodes have traffic to send and as nodes go away. So the magic is going to be that we're somehow able to change the value of p at every node. Every node runs an algorithm which adapts its value of p over time. And if there's a lot of competition, the value of p will reduce. If there's very little competition, the value of p increases. And if we do that, we're going to be able to get pretty good utilization. And this process is called stabilization. AUDIENCE: Are we still assuming that all of the nodes are transmitting with the same probability? PROFESSOR: Nope.
The nodes will end up not transmitting with the same probability because they're each going to be making independent decisions. So the first thing we're going to do is each node is still going to have a p. But in fact, node i is going to have its own variable. So we're going to say that node i has its own variable that only it knows. And its probability is p i. So the way we're going to do the stabilization is very, very simple. We're going to say that at node i, it's going to do-- well, the only information it gets is it sends a packet. And if the packet succeeds, it knows something. If the packet doesn't succeed, it knows something. So let's say that a node sends a packet and the packet fails. And the way you know that a packet fails-- and I haven't talked about this. But the way this protocol-- all these protocols-- end up working is they send a packet and then they watch to see if the packet succeeds or not. They can get that information by an acknowledgment coming from the receiver. Or in the case of certain networks, like ethernet, when you send a packet, if you aren't able to receive your own packet on that bus, then you know that it's failed. So that's just a detail. But the assumption here is there's some feedback that tells the node whether a packet transmission succeeded or not. In general, it's with an acknowledgment that comes from the receiver. If you get an ACK, it means it succeeds. So we're going to have two rules. If you don't succeed-- in other words, there's a collision-- then you do something. And in contrast, if you succeed, you do something. So what we're going to do on a collision is, let's say that you send a packet and it didn't succeed. It collided. Yes? AUDIENCE: So with the acknowledgement, what if the acknowledgement fails? PROFESSOR: Yeah. You're out of luck, because in reality, in Wi-Fi, when an acknowledgment fails, what ends up happening is you assume the packet collided. So how people deal with this problem is typically to-- essentially to do very strong error protection on the acknowledgment. You send it at a very low bitrate. And what that really means is you're adding a lot of redundancy to that acknowledgment. So imagine you're coding the heck out of it using channel coding. That's what happens. OK. Let's say you send a packet and it collides. What should you do to the node's transmission probability? Increase it or reduce it? AUDIENCE: Reduce it. PROFESSOR: Sorry? AUDIENCE: Reduce it. PROFESSOR: Reduce it, because the assumption is it collided because presumably there's a lot of competition and you're a nice guy. And your assumption is that all the other nodes are nice people, too, which is kind of changing nowadays with all these software defined radios and the hacking that you can do on Wi-Fi cards. It's possible for your node to not actually back off. But if you're a nice node, what you're going to do is you're going to reduce the probability. And one way of doing that, a good way of doing that, is called multiplicative reduction, or multiplicative decrease. You reduce it by a factor of 2. You just halve it. And on a success, you could do a bunch of different things. But one thing you can do is you can be a little bit greedy and say, all right. I succeeded, which means there's not that much competition in the network. Maybe there's nobody else in the network, in which case, I want to keep trying to increase my transmission probability. And maybe you double it. And my notes talk about whether this is right or something else is right. 
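The two stabilization rules just described, written as code (a sketch; the function names are mine):

def on_collision(p):
    return p / 2           # multiplicative decrease: back off under contention

def on_success(p):
    return min(2 * p, 1)   # increase greedily, but a probability can't exceed 1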
But it really doesn't matter. It turns out the important rule is this rule. The protocol turns out to be not very sensitive to how you increase. You do have to increase. But it doesn't matter how you do it. Now, of course, the probabilities can't exceed 1. So we're going to actually pick the minimum of 2 times p and 1. This is our basic stabilization protocol. Now, every node has its own version of p. And so you may run with a p of 0.8 and I might be at a p of 0.1. Presumably, if I succeed, I'm going to increase. If you fail, you're going to decrease. And something happens. The question is, what happens? And that's what I want to show you. I'm going to skip all the stuff we went through. So if I run this protocol exactly as I've described on this board here-- and this is an experiment; you'll be doing all this stuff in the lab yourself. This is with 10 nodes. What you see is that you have a utilization of 0.33, which is not too far from the 1 over e utilization-- which is remarkable, that this protocol where everybody just jumped around multiplicatively changing p i like this worked out to a pretty good utilization. But there's a big problem in the protocol. And the problem in the protocol is that the fairness is pretty terrible. As it happens, it's 0.47. I was reminded that 47% of the nodes here got pretty healthy throughput; the rest were probably looking for handouts or something. But anyway, it's pretty bad. And what's going on here is that when a node is running at a high value of its probability and some other node is at a low value of the probability, if they collide, the guy with the higher value of its probability reduces it by a factor of 2, but it's still at a pretty high value, whereas if a node's transmission probability has somehow gotten beaten down to 1 over 32, it becomes 1 over 64, then 1 over 128. It's practically 0. It doesn't ever get out of that morass that it's in and start being able to successfully transmit again. And that's what's happening here. And there's a very simple solution to this problem. The way you solve this problem is you decide that nobody should get really, really poor-- you decide that there's a value of the probability, p min, below which you'll never go. So you modify the protocol to do that. And if you do that, you end up with much better performance. You end up with performance that looks like this. And you'll see this in the lab. But what you find here is something very puzzling. What you find here is that the fairness is amazing. It's 0.9999, which is really, really good fairness. But what you find is that the utilization is 0.71. That's actually too good to be true, because the best utilization you could expect is probably around 37%. And the fact is, we're getting something astonishingly good. What's happening here actually is something that's really important. It took people a few years to figure it out. It's called the capture effect. What's happening here is that some node captures the channel for quite a substantial period of time, shutting everybody else out. And then some other node captures the channel for some period of time, shutting everybody else out. So you get significant short term unfairness. Or equivalently, you get a long delay. So some nodes may end up waiting for a long time. And then they get access. And then they keep access for quite a while. And then they lose access. And then some other nodes get it.
So what's really going on here is, even though you have many, many nodes competing, at any point in time, effectively the competition is only between one or two nodes. And this is a problem called the capture effect. And the way you solve this problem is, symmetrically to the minimum value of the probability, there's a maximum value, which is p max. So you have to pick a value that's less than 1 that's the maximum probability. In other words, once a node has a transmission probability of, let's say, 0.25 or 0.3, you don't want to have it keep increasing, because what ends up happening is then it captures the channel for quite a while. And it's only upon successive collisions that it comes down to the point where other nodes can gain access to the channel. So if you put all of that together and run the experiment-- we're just running this protocol as described exactly on this board, and you can play around with this thing; there's a lot of leeway in picking these parameters. If you run this protocol, you get a utilization, in this case, of 0.41-- which for n equal to 10 is pretty much what you would get by sticking n into that formula-- and a fairness that's extremely high. And it's super cool, because this is exactly what you would want to get. If somebody magically told you the number of backlogged nodes and you theoretically calculated the optimal value of the probability, you couldn't do better than this protocol. And we managed to do it simply by having these nodes independently making these decisions, figuring out what they should do. And if you were to plot the actual evolution of the probabilities, you find that at no point in time does any node have a value of 1 over n. Even though in the experiment there's some value of the number of backlogged nodes, the nodes kind of dance around it. But they never actually stick to it. But they conspire to dance around it in a way that gives you exactly the same result as you would get if you, in fact, knew the right value. I'm going to close with one last comment. This is from a student from about, I think, two terms ago-- a guy called Chase Lambert, who took this class. And I was very gratified to hear this, because it turned out he worked-- or interned-- at a company called Quizlet, which is a startup company. And one of the things he had to do was a load generator. And he ended up using exactly these ideas to come up with a load generating scheme that was stable, because you want to be able to generate load that allows you to measure the throughput of a system. And he used this random back off idea. So it's not that you'll find these ideas useful only if you were doing this kind of networking. You'll actually find these ideas useful in other contexts. So I'm going to stop here. We'll pick up on some of these topics in recitation, and then back here on Monday.
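Putting the whole stabilized protocol together, here is a minimal simulation sketch (the parameter values are my guesses, not the lab's; exact numbers will vary with them):

import random

def stabilized_aloha(n=10, slots=200000, p_min=1/64, p_max=0.25):
    p = [p_max] * n            # each node adapts its own probability
    wins = [0] * n
    for _ in range(slots):
        senders = [i for i in range(n) if random.random() <= p[i]]
        if len(senders) == 1:                  # success: increase, capped at p_max
            i = senders[0]
            wins[i] += 1
            p[i] = min(2 * p[i], p_max)
        else:                                  # collision: halve, floored at p_min
            for i in senders:                  # (idle slots update nobody)
                p[i] = max(p[i] / 2, p_min)
    util = sum(wins) / slots
    fair = sum(wins) ** 2 / (n * sum(w * w for w in wins))
    return util, fair

print(stabilized_aloha())  # roughly 0.4 utilization with fairness near 1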
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
19_Network_routing_without_failures.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HARI BALAKRISHNAN: OK. So today, we're going to continue talking about multi-hop networks. And in particular, I'm going to talk about a fairly fundamental problem in multi-hop networks called network routing. And that's the topic for today and through about half of next week. And this week, we'll talk about network routing in networks where there are no failures. So in real networks, of course, things fail. Packets get dropped. Nodes may fail. Switches may fail. Links may fail. We'll just worry about the case when there are no failures for this week. Next week, we'll add some more complexity and talk about how we deal with failures. So the abstract problem is pretty simple. You have a set of nodes in the network. And there's some network topology. And you have sources, and you have destinations-- nodes in the network. And they wish to send packets to each other. And we're going to decompose this problem. These endpoints, which actually want to communicate with each other, as we've already talked about, do so by sending packets via switches. And the problem we're going to worry about is what happens inside the network, in these switches. The switches solve the following problem. If things work correctly and if things work well, when a switch receives a packet with a destination address specifying that that packet has to be sent to a particular named destination in the network, the switch figures out how to ship that packet. It figures out whether to send it along-- I need a better-- here. This switch, for example, figures out whether a packet should go along that link, or this link, or this link, or this link, or that link, given any packet that it receives. And every switch in the network performs this task. Now, when a switch gets a packet and the packet has a destination address in it, what the switch does is some sort of a lookup. It looks up that destination in some sort of a table. And that table maintains information about which of the links to use in sending that packet toward the destination. And this step is called forwarding. Packets are forwarded by switches. Technically, packets aren't routed. Everybody says packet routing, and I say it, too. But it's important to realize packets aren't routed. Packets are forwarded. The forwarding process is pretty straightforward at slow speeds. You get a packet. You take the destination. You look up that destination in a table. And that table tells you what to do with the packet-- which link to use. Routing is the problem that the switches solve of constructing these tables. So forwarding is simply lookup in a table. And the table is called the routing table. Sometimes it's also called the forwarding table. And there's a technical difference between them that's not that important for us. So forwarding is the process of looking up the destination in the routing table. Routing is the distributed process-- the algorithm that's used, or the protocol that's used-- to build up these routing tables. So it's the construction process: how to construct these routing tables at each switch in the network. That's done in the background. That's usually done in software. Switches at high speeds involve a fair amount of hardware.
Routing itself is usually done in software. And so a typical switch has a lot of different processing elements in it. A lot of those processing elements are high speed hardware that deals with the process of forwarding. And then you have software sitting on the side where all of the complexity is. And all that complexity deals with all sorts of complicated rules that you have to come up with in order to decide how to construct these routing tables. And we're going to talk about the world's simplest networks today. And you'll find that even that's reasonably sophisticated and complicated. So just for concreteness, an example of a topology looks like this. And for reasons that will become apparent as we talk this through, we'll model the network as a set of nodes and a set of links. And so far, there's nothing new about this. But we'll model links as having a cost. And this cost might reflect, for example, the delay or latency to send a packet along that link. The cost might reflect the real dollar cost of shipping data along links. Internet service providers might charge different amounts of money for different types of data, for example. Or the cost might just reflect your own internal preferences as to which links you might prefer, based on whatever concerns. Maybe some links are slow, and some links are fast. Maybe there are higher speed links and lower speed links that cost more or less, et cetera. So these costs are abstract numbers. And we'll just assume that we're interested in finding minimum cost paths between senders and receivers. So every switch solves the problem of finding the minimum cost path to a destination, where the cost along a path is simply the sum of the costs of the links on that path. This is just standard shortest path routing. We'll use the term shortest path even though we don't literally mean the smallest number of hops, but the minimum cost path. And where that distinction is important between the number of hops and the cost, et cetera, we'll clarify. So the routing table looks something like this. In fact, it looks like this. There's a destination. Every switch has this. So this is an example of a routing table at node B. Node B maintains all the destinations in the network. And we're dealing with small networks so far in our class. So we'll just assume that every node in the network has a unique name or a unique address, and that the routing tables contain an entry for every destination in the network. So the routing table has three columns. It's a database with three columns. And every switch in the network or a node-- I'm going to use the words node and switch interchangeably-- every switch or node in the network has one of these if the network is working correctly. So if the routing protocol does its job, every node comes up with its own version of this table. There is a destination; there is the link you need to use, that is, the next hop-- if you get a packet for this destination, which link would you use-- and a cost. So in this table, if node B or switch B received a packet destined for destination A, it would use link L1. Each link is named locally. So B would have its own L0, L2, and L1. You know, your own computer has a bunch of links. I don't know what the command is in Windows, but on all other sensible platforms, if you run ifconfig, you can get it. I think it's called ipconfig on Windows. But you can actually see a list of links. So switches have many, many, many links. And so you'll find, for example, for destination A you use link L1.
The cost is 18. In fact, you'll see here that, when B receives a packet for A, it doesn't use the direct link, because the direct link has a cost of 19, whereas link L1 has a cost of 11. And B believes that the path toward A through that link has a cost of 11 plus 7, which is 18-- that's what it thinks. And, therefore, it would use that link. OK? It's very simple. Now, routing is the process by which these different nodes talk to each other and build up these tables. When I say a route to a destination, I mean this: the route to a destination at a switch or at a node is the link that that node would use to send packets to the destination. This is important technically for us. Because, otherwise, we will get all tangled up getting confused between routes and paths. OK. For the purposes of our discussion, a route is the next hop link-- the link that you're going to use to get to the destination. The path is a sequence of links. So I don't want people telling me that the route from B to E is whatever, is BCE. I'd like people to tell me that the route at node B to destination E is L1. Or if you wish to be clear, you can say, it's link L1, which takes it to next hop C. The path from B to E might be BCE, OK? All right, so that's what the routing table structure looks like. And we're interested in minimum cost paths to go from places to places. So normally, traditionally, we'd sit around and try to now dive into protocols. But I think here what we're going to do is, since our notes are all so nicely written up, I actually don't have to tell you everything that's there. So what we're actually going to do is something we call the routing game, which is an experiment in social networking. What it means is that each of you, or some of you, will act as nodes and start computing routes to different destinations. OK, so that's what this game is going to be. So what I have in my hand is 40 slips of paper. And I hope there are 40 people here-- 40 slips of paper. And each of these slips of paper has some information that looks like this. It says: you are node X, and you're connected to nodes Y, Z, W, et cetera, OK? And there's a set of rules that I will go through in a minute. First of all, these were all in perfect order. So I need to shuffle them a few times, carefully, without losing them. I think Persi Diaconis, the famous probabilist, said that you've got to do this seven times. Maybe this is close to seven. OK. So what I'm going to do is I'm going to pass this around. There's no bag here. The bag's cumbersome. So what I'd like you to do-- actually, I'll put it in the envelope and pass it around. Just kind of close your eyes or something. Don't look at the number. And just pick one up. And don't look at it until I tell you, OK? So we'll start here and just pass it around. I'd like 40 different people to have them. You don't have to feel compelled to take it, but just pass it around. And try to do it quickly, because I'd like that to happen faster than the time it's going to take for us to compute these routes. So why don't you just pick one sheet of paper up and just pass it around, please? OK, so I picked 7 minutes. It sometimes works a little faster than 7. The best that the class has done is about 5 minutes. Last time, they did it very quickly, but they got it wrong. So I don't think that counts. But I've been told that they've tried this experiment at Berkeley. It's usually taken more than 7 minutes. And I figure MIT students are smarter.
So let's see if you can do it faster than 7 minutes. Now, your job is to find a path from a source node to a destination node as quickly as possible. As a bonus, if you actually find the path with the minimum number of hops, that's even better, but we'll take any path. Now, there are some rules here. You may not get up and run around and try to do things that a normal network switch wouldn't do -- you have to stay in one place. And you're not allowed to pass your sheet of paper to other people, because that sheet has information about you: it tells you who your neighbors are, the numbers of your neighbors. So you can't send your piece of paper to other nodes; that's not allowed. And don't let people copy what's on your sheet. Now, here are some things you can do. You can read your slip. You can ask your friends for advice. You can shout to other participants -- we're allowing you to yell and scream, but do so in a way that's somewhat civilized. And you can wish you hadn't picked up a slip, or whatever. But try to act generally reasonably -- this class is recorded, so what you say might be heard. Now, if you got a slip, there are some ground rules. You can't cheat. We're not dealing with security here, where a node that's 17 and connected to 27 and 29 tells people that it's 14. You don't want that -- it's hard enough to do this when everyone tells the truth. So don't cheat. There's probably a variant of this we could come up with where some fraction of the nodes are adversarial, and one could see whether this stuff even works -- but right now, don't cheat. If you've got a slip, you have to really try to participate in whatever protocol you come up with. And this experiment has no human-subjects approval. OK, so who has the envelope? Is it empty? Oh my goodness, it's 7 minutes already. Let's move it around. When the envelope gets empty, we can start, so we should try to move it along. Is everyone clear on the rules? What you're trying to do is find a path between the source node, which was numbered 1, and the destination node, which was numbered 40. The source and the destination each know who they are. And then we're just going to wait -- this is the easiest lecture to prepare for, because I just have to keep quiet for a few minutes. OK. And if things don't work out, whatever it is, I'm assuming you'll come up with some variant of a reasonable routing protocol -- and odds are, it'll be a variant of one of the ones we're going to study. The gentleman here suggested something that was a reasonable idea, which is: you pick one neighbor and go through it; if you get stuck in a loop, you come back to where you started, and then you go through the next one. That principle could work, but you've got to remember a lot of state, and that tends to lead to mistakes. And in fact, this was an easy network, where almost every node had one, two, or three neighbors -- there were a few with more, but most had a small number. What you guys were going after was a sort of better plan, where everybody yells out their neighbors, and then they yell out their neighbors' neighbors, and so forth. That particular protocol has a nice name: it's called link-state routing, where you broadcast and flood your own neighbor information, OK?
That's the second of the two protocols we're going to study -- what you were going after was link-state routing. Now, the first protocol we're going to study has a different name: it's called distance-vector routing. And these are the two routing protocols we're going to study. Almost all routing protocols in practice are variants of either a vector protocol or one of these link-state protocols. There are lots of variants -- there are hundreds of routing protocols -- but these cover the main concepts. So let me tell you how distance-vector protocols, or how these vector protocols, work. They're a little different from the approach you were going after in solving this problem. Suppose you had gone after an approach where you started at the destination, at 40, and 40 simply said, "I'm 40." And then the neighbors next to 40 said, "I'm connected to 40. To get to 40, come through me, and the cost is whatever the cost of my link to 40 is." Initially, nodes only know about themselves. But now you have a destination, and a set of nodes trying to get to that destination -- let's call these nodes n1, n2, and n3. Initially, the only thing D says is, "I am D," and it says that to its neighbors along these links. This saying has a name: it's an advertisement. Now, when anyone hears about destination D, it can do the same thing to its neighbors. So let's say these are n4 and n5 here, and let's imagine the cost of this link is 6. D says, "I'm D, and the cost to get to me is 0, because I'm D." n1 hears that. And it says, "I'm n1" -- that's true -- "but to get to D, come through me, and the cost is 6." So it now puts out advertisements along these links saying D:6, D:6, D:6, where 6 is the cost of its link to D. Now, let's imagine that these other links have costs. Say this link has a cost of 8, this link has a cost of 2, and this link has a cost of 9 -- sorry, I should draw this better; let's put a 9 here. So the link costs are 8, 9, and 2. And let's say for a minute that the cost of this link is 10. Now, n1 advertises to everybody saying, "To get to D, the cost is 6." Each of those guys, when it gets this information, now knows that it has a route to destination D. n4 knows that, if it used this link to get to D, the cost would be 6 plus the cost of that link, so it's 14. So it would have a routing table entry saying D has cost 14, along with the link it should use. Similarly, n5 would take destination D and say that the cost to D is 9 plus the advertised 6, so D is 15. And n2, which previously had a route to D -- this direct link at cost 10 -- would look at the new advertisement coming in from n1 saying the cost is 6, add the cost of the link, which is 2, and now have a different way to get to D whose cost is 8. It compares that cost against the cost it previously held to destination D. And because we're interested in minimum-cost routing, and 8 is smaller than 10, n2 would throw out its old route and replace it with a route to D going along this link, with a cost of 8. And this process just continues as it goes along.
So you might end up in a situation quite easily, as we just went through here, where, let's say, n3 were connected to n5. If this link had a cost of 1 and this link had a cost of 4, what would eventually happen is that n5, which ended up with a cost of 15 going the other way, would, when it hears an advertisement from this node, replace that route with one of cost 4 and use this link as its route to destination D. This process continues until everybody has some route to D; and if it continues a little longer, everybody ends up with a minimum-cost route to D. This step -- where a node evaluates an advertisement it hears about a destination against its current route, comparing the cost of the advertisement plus the cost of the link along which the advertisement came against the cost of the route it already holds, and replacing its route if the new cost is smaller -- that algorithm is called the Bellman-Ford algorithm. You might have seen centralized implementations of shortest-path routing using this algorithm. And if you have, it actually turns out to be a little less efficient than another algorithm we'll study, called Dijkstra's, if you're doing it centralized. But it's very, very elegant for distributed computation, because the routes to different destinations are being computed in a completely distributed way. These nodes far away have no idea what the network topology looks like -- they couldn't even reconstruct it. The best they can do is figure out which link to use. The only information they have is what they hear from their neighbors. And yet they're able to find the answer, because all a node has to do is listen to all of its neighbors and, among them, pick the neighbor whose advertised cost plus the cost of the link to that neighbor is minimum. So the computation being done is very simple and very elegant: node i takes the minimum, over all of its neighbors j, of the link cost L_ij plus j's advertised cost to destination D. You take that minimum cost, and the link to the corresponding neighbor is the route you use. Now, this algorithm gets more complicated, and trickier to argue correct, when there are failures. But today, there are no failures.
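Here is a minimal sketch of that per-node computation -- the function and argument names are my own illustration, not the lecture's:

    import math

    def bellman_ford_update(link_cost, advertised_cost):
        """One distance-vector update at node i, for one destination D.

        link_cost:       dict mapping neighbor j -> cost of the link i-j
        advertised_cost: dict mapping neighbor j -> j's advertised cost to D
                         (math.inf if j has advertised no route to D)
        Returns (best_neighbor, best_cost): the route i installs for D.
        """
        best_neighbor, best_cost = None, math.inf
        for j, c_ij in link_cost.items():
            cost_via_j = c_ij + advertised_cost.get(j, math.inf)
            if cost_via_j < best_cost:          # strictly better: replace
                best_neighbor, best_cost = j, cost_via_j
        return best_neighbor, best_cost

    # With the numbers from the example: n2 has a direct link to D of cost 10,
    # and a link of cost 2 to n1, which advertises a cost of 6 to D.
    print(bellman_ford_update({"D": 10, "n1": 2}, {"D": 0, "n1": 6}))
    # -> ('n1', 8): the old cost-10 route is replaced by the cost-8 one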
There's another wrinkle in the algorithm, which is the question of the conditions under which a node changes its route to a destination. I went through one rule: if the current route to the destination doesn't exist -- in which case its cost is taken to be infinity -- or if the cost of the advertisement plus the cost of the link along which the advertisement came is smaller than the current cost to the destination, then you replace the route. But there's actually one other condition under which you should replace the route to the destination. Yeah.
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: If it's equal? Well, technically, you don't have to replace it -- you still have a good path, right? You could replace it, but it doesn't matter. There might be a case where you have to replace the route when, in fact, the cost increases. You have a current cost and a current route to the destination, you hear an advertisement, you add the link cost, and you find a bigger number -- and you might sometimes have to replace it. Yes.
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: Right. What could be going on is that the cost along my current route changed: I previously told you that my cost to the destination is 17, and now I've changed my mind and tell you it's actually 19. I could change my mind for a variety of reasons, usually having to do with failure -- so I guess this is a bit of a cheating question, because I told you there are no failures today. But it could be that the cost of a link changed because it became more expensive, or something like that. That's the case you have to worry about, where the cost of an advertisement increases. If that cost increases along your current route -- you thought the cost to the destination was 17, but it turns out to be 24 -- that's when you have to change your entry in the routing table, changing the cost associated with it. But otherwise, that's basically the algorithm. It's summarized in this chart here, and I've gone through pretty much all of it. Does anyone have any questions? No questions? Before I get to how long it takes, the reason this is called a vector protocol is that I showed you the picture for one destination, but in fact each switch or node does this for all destinations. So the general form of a distance-vector advertisement looks like this: destination 1, colon, cost 1; destination 2, colon, cost 2; destination 3, colon, cost 3; and so forth, for all the destinations to which you have a cost. Initially, when you start, you don't know about any of the other nodes. If you have a route to some destination, there's some cost associated with it; if you know about a destination but have no route to it, the cost is infinity. It'll turn out next week that the value of infinity in this network has to be pretty small -- that's because this algorithm is not the world's best algorithm for big networks, and I'll explain why infinity has to be a small number -- but theoretically we can assume it's infinite. So the reason it's called a vector protocol is that the advertisements are a vector of destination:cost tuples. You send these tuples around, and the advertisement is a vector of them -- hence, a vector protocol. It really should be called a cost-vector protocol. But initially they ran this thing with all the links having a cost of 1, so they were minimizing distance, and the name stuck. If we wanted to be perfectly precise, we'd say cost vector -- but then no one else in the world would understand what you meant, so we say distance vector. OK, any questions?
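As a sketch of what a node actually sends -- again, the representation is mine, not the lecture's -- the advertisement is just the destination and cost columns of the routing table, with the next-hop column kept private:

    def make_advertisement(routing_table):
        """Build a distance-vector advertisement from a routing table.

        routing_table: dict destination -> (next-hop link, cost).
        Returns the vector of (destination, cost) tuples to send out.
        """
        return [(dest, cost) for dest, (link, cost) in routing_table.items()]

    print(make_advertisement({"A": ("L1", 18), "D": ("L0", 7)}))
    # -> [('A', 18), ('D', 7)]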
All right -- for some destination, how long does it take before every node in the network has a route to that destination? By "how long," I mean: how many advertisement cycles do you have to go through? Let's say the way our world works is that, initially, every node advertises only itself. Then, at the next time step, every node advertises the routes it knows about, and it does that periodically. So every t seconds, a node sends out an advertisement containing this vector of tuples for all the destinations it knows about. Let me explain the protocol once more, and then I'll ask the question. The protocol is very simple: every t seconds, the node looks at two columns in its routing table -- the destination column and the cost column -- takes that information out, and sends it as the distance-vector advertisement. Now, focus on one destination D in the network. How long does it take before every node has some route to that destination? Yes.
AUDIENCE: [INAUDIBLE] up to the number of [INAUDIBLE].
HARI BALAKRISHNAN: Up to the number of edges in my network?
AUDIENCE: Yeah, because if you're [INAUDIBLE].
HARI BALAKRISHNAN: All right. So let's say you have a network that looks like this. How long does it take before every node has a route to destination D?
AUDIENCE: So in that case, it only takes one--
HARI BALAKRISHNAN: Therefore--
AUDIENCE: [INAUDIBLE] worst case everything is in a line.
HARI BALAKRISHNAN: Well, I'm asking for an answer that holds for all networks, not for--
AUDIENCE: Oh, OK. What do you call the worst case, like how fast [INAUDIBLE].
HARI BALAKRISHNAN: Sure, in the absolute worst case, it is true that it'll always take time smaller than the number of edges in the network. But you can come up with a much better bound. So let's try. Yes, sir.
AUDIENCE: What if you did the number of nodes [INAUDIBLE] longest length chain?
HARI BALAKRISHNAN: Longest-length chain -- that's not completely true. Longest what kind of chain? As a counterexample, say I have this network. The longest chain is 1, 2, 3, 4, 5, 6, but you just told me that in one shot you get the answer. So you have to clarify what you meant a little bit. You're almost right.
[INTERPOSING VOICES]
AUDIENCE: In this case, it might actually be 6, right? Because you want to find that path of the top one and the bottom one?
HARI BALAKRISHNAN: I said find a path, find a route -- not the best route.
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: How long does it take to find a route in a network? You almost said it. The longest path -- but it's not quite the longest path. It's the longest something path. Yes?
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: The longest shortest path -- that is, the longest over all paths with a minimum number of hops from one place to another. That's also called the diameter of the network. OK? All right, that's the time it takes, multiplied by t. And I might be off by 1 -- you know, it's that minus 1. Actually, it's not; it is the number of hops along the longest shortest path. Now, how long does it take for all of the nodes to find the minimum-cost path to some destination D?
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: What?
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: t times -- no, it's true that you can find it within that, but that's too long.
So I'm going to come back to this question -- it's probably answered in chapter 18, I think, and we'll come back to it next time. It'll become a little clearer then, but you should think about it; there's a nice, succinct answer. And generally speaking, in every quiz there's some variant of this question -- not explicit, but some story that requires it. Yes, you have an answer?
AUDIENCE: No, I have a question.
HARI BALAKRISHNAN: Oh, you do. OK, I'll come back to that question next time. But yes?
AUDIENCE: Can you go over again what you mean by the longest shortest path?
HARI BALAKRISHNAN: Yeah, I can go over that. Let's say you have D here. You look at this destination D, and you look at, from every node, the path with the smallest number of hops -- number of hops, not cost -- to get to that destination. For this guy, it's 1, 2, 3; for this guy, it's 1, 2, 3; et cetera. Whatever that biggest number is, multiplied by t, is the answer. That's how long it takes before every node hears some path. But it's not quite the right answer for the best path. Because, as you saw in this example, it took one time step before n2 got a route to the destination, but two time steps before it got its best route. And the reason has to do with the length of the minimum-cost path: in this case, the minimum-cost path is 2 plus 6, which is 8, and it took 2 hops -- different from the length of the shortest-hop path, which was one hop. So if I look at this picture, n2 hears about some route to the destination in the first advertisement, but finding its best route requires waiting until we find this 2-plus-6 path, which took 2 hops. So it's a little longer, but not enormously long. In the worst case it could be quite long, but it depends on the number of hops along the minimum-cost path -- take the largest such hop count over all the nodes, and you'll have the answer to how long it takes before every node finds the minimum-cost path to the destination. OK, is that clear? Any questions about distance vector? Crystal clear? OK. We'll see when the lab comes around whether it is crystal. Everybody really, really does well in these labs, and it's a lot of fun hacking this stuff up. You'll implement both protocols, and you'll look at all sorts of failures -- it'll sometimes seem miraculous that it actually works even when you didn't consider some failure case. OK. Now I'm going to talk about link-state routing. This is the routing protocol you were sort of attempting to come up with, and it's a radically different approach from the vector protocols. In a vector protocol, everybody advertises, for each destination, a tuple of the destination and the cost. In a link-state protocol, we don't do that, and the protocol does not compute routes in a distributed way. In a link-state protocol, every node just says: I am node 17, and I'm connected to 16, 45, and 44; the cost of my link to 16 is 7; the cost of my link to 45 is something; and the cost of my link to this other neighbor is something else. So every node advertises its immediate neighbors and the link cost to each of those neighbors. OK.
In addition, each of these link-state advertisements carries a sequence number. The sequence number starts at some initial value -- say, zero -- and every time a node sends a link-state advertisement, which it does periodically, every t seconds, it increments the sequence number by one. Now, here's the key step: if I receive a link-state advertisement from you, I send it on to my neighbors, and my neighbors send it to their neighbors, and so on. So it's a very nice flooding protocol. Every node originates its own link-state advertisement, and every node that receives one processes it and then turns around and ships it to its neighbors, and they do the same, and so forth. So you're telling your neighbors who you're connected to; she's telling her neighbors who she's connected to; we all do that, all in parallel, all at the same time, and all our neighbors are rebroadcasting it. Eventually, every node is going to get one or more copies of every link-state advertisement, which means every node can construct an entire map of the network -- this entire graph. And once it constructs that graph, every node can run some shortest-path algorithm to compute paths over it. This is very different from the previous protocol. In vector routing protocols, the nodes have no idea what the topology of the network is -- all they know is what their neighbors tell them, and they trust it. Here, under the basic model, every node has complete knowledge of the overall network topology. Let me show this by example, and it will become completely clear. Imagine this is what the network looks like, and look at node F. Node F originates its initial link-state advertisement -- and with every advertisement, it increments the sequence number by 1. It says: I'm connected to node G with a cost of 8, and to node C with a cost of 2. And it spits that out to its neighbors. Each of those neighbors turns around and does the same thing: you rebroadcast a link-state advertisement along the links you're connected to, and they rebroadcast it, and so forth. Eventually B gets it, and it broadcasts it too, though that was completely useless to do -- well, not completely; if packets are lost, it's pretty useful. Anyway, when this flooding completes, which takes some number of steps, every node has at least one copy of this link-state advertisement -- if no packets are lost, a bunch of copies; if packets can get lost, as long as the loss rates aren't enormous, every node might have one copy. And every node originates its own link-state advertisement, and therefore they all end up with a map of the network.
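A sketch of that flooding step -- the names, the LSA layout, and the link interface here are illustrative assumptions, and the "is this new?" test leans on the sequence number, whose exact role the lecture takes up next:

    from collections import namedtuple

    # An LSA carries: who originated it, its sequence number, and the
    # origin's neighbors with link costs, e.g. {"G": 8, "C": 2} for node F.
    LSA = namedtuple("LSA", ["origin", "seqno", "neighbors"])

    highest_seen = {}   # origin -> highest sequence number accepted so far
    topology = {}       # our map of the network: origin -> {neighbor: cost}

    def on_lsa(lsa, arrival_link, outgoing_links):
        """Process one received LSA and flood it onward if it is new."""
        if lsa.seqno <= highest_seen.get(lsa.origin, -1):
            return                          # old news: drop, don't re-flood
        highest_seen[lsa.origin] = lsa.seqno
        topology[lsa.origin] = dict(lsa.neighbors)
        for link in outgoing_links:
            if link is not arrival_link:    # common refinement: don't echo the
                link.send(lsa)              # LSA back where it came from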
By the way, why do we have the sequence number in the link-state advertisement? Yes?
AUDIENCE: I don't know -- if F broadcasts its advertisement, and something changes and it broadcasts a new advertisement, by the time the two advertisements get to B, you don't know which one gets there first. So you want to [INAUDIBLE].
HARI BALAKRISHNAN: That is one of the two reasons why you have it -- actually the second reason, and it's a valid one. But the main reason is this: if C gets a link-state advertisement originating from F with sequence number 17, then eventually C is also going to hear D rebroadcasting that same advertisement. Because F sends it this way, but F also sends it that way -- it goes up here, comes down here, and then D rebroadcasts it. C needs a way of telling whether this link-state advertisement is new or old, right? And the way it tells is that it considers an advertisement new if its sequence number is bigger than the last sequence number it received from that origin. So now every node has a map of the network, and we run the integration step, where we take this map and find shortest paths over it. I need a show of hands: how many people know how Dijkstra's algorithm works from a class? How many don't? All right -- it's described very well in the notes, we'll talk about it in recitation tomorrow, and we'll come back to it on Monday, but I'm going to show it by example now, because you kind of need it for the lab. Here's how it works. Imagine we want to find paths from A to all of the other nodes in the network. Initially, A doesn't know paths to anyone except itself -- but what A knows is this map of the network, and what it's trying to do is find routes to all the other destinations. The way it does that is to build up routes to the different destinations in non-decreasing order of the cost of the minimum-cost path to each destination. So initially it looks at this map, and it sees that it's connected to C with a cost of 6 and to B with a cost of 6. What it does is say: among all the nodes out there, I'm going to pull in the one with the minimum-cost path. The cost to all the other guys is considered to be infinity, and it has costs of 6 and 6, so it pulls in one of them -- between C and B it doesn't matter which, so, without loss of generality, say it picks C. And it records that, at A, the route to C is this link. Then it looks at the neighbors connected to A and C -- in fact, it only has to look at the new node it pulled in -- and adjusts the cost of the minimum-cost path to each destination connected to that node. In this case, it brings the cost to D down from infinity to 13, because it knows it can get to C in 6, and 6 plus 7 is 13. Similarly, it does that for E at 10, and for F at 8. So now it has costs of 6, 6 -- that first 6 is already in -- 13, 10, 8, and infinity. Now it has to decide which node to pull in next: the one with the minimum cost among the costs so far, which is this node over here. So it pulls that in -- if my wireless works; there we go -- and adjusts the route to it to be that green link. Then it looks at the neighbors of B and adjusts the shortest-path costs: that 13 now becomes 11, because 6 plus 5 going through B is shorter than 6 plus 7 going through C. And now it repeats: it pulls in the minimum.
The minimum, in this case, is 8, so it pulls in F and makes that the route to F. Now, the route to F is not that link -- in the routing table, the route to F is in fact this link. It knows that because F was reached through C, and therefore the route to F is equal to the route to its parent, which is C. So in its routing table entry, it makes the route to F be that link, which is exactly the link to C. That's the subtlety you have to keep in mind when you implement this stuff in the lab. It then goes ahead and adjusts that to 16. It now pulls in the minimum, which will be 10, and adjusts the costs of the nodes connected to it, so D changes to 10. Then it pulls in the minimum again -- in this case, D, with a cost of 10. That's the link to use: the route to D is the same as the route to E, and the route to E is the same as the route to C, which is that link. And now you finally conclude the algorithm by pulling in that last node. So I'm going to stop here. That was Dijkstra's algorithm, and those were the two routing protocols. We'll pick it up in recitation tomorrow.
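The integration step just walked through -- repeatedly pull in the unfinished node with the smallest known cost, then relax its neighbors' costs -- is Dijkstra's algorithm. Here is a compact sketch; the heap-based structure is my own choice of implementation, not the lecture's:

    import heapq

    def dijkstra(graph, source):
        """graph: {node: {neighbor: link cost}}. Returns {node: (cost, first_hop)}.

        first_hop is the neighbor of `source` on the minimum-cost path -- the
        routing-table entry -- inherited from the parent node, exactly the
        "route to F is the route to its parent C" step described above.
        """
        done = {}                              # node -> (cost, first_hop)
        frontier = [(0, source, None)]         # (cost so far, node, first hop)
        while frontier:
            cost, node, hop = heapq.heappop(frontier)
            if node in done:
                continue                       # already pulled in at lower cost
            done[node] = (cost, hop)
            for nbr, c in graph[node].items():
                if nbr not in done:
                    # a neighbor of the source uses itself as the first hop;
                    # everyone else inherits the first hop of its parent
                    heapq.heappush(frontier,
                                   (cost + c, nbr, nbr if node == source else hop))
        return done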
HARI BALAKRISHNAN: So, good afternoon. Continuing our story about networks: what we've seen so far is that you have computers you're trying to connect so they can communicate, and we use a network with switches, arranged in some topology, to find paths between them. So we looked at the routing problem, and we looked at two different routing protocols to solve it. Earlier, we talked about the idea of a packet-switched network, and the fact that there are queues in packet-switched networks: when traffic comes in too fast and the queues overflow, packets may get dropped. We also looked before at links that have errors on them -- and if your coding scheme isn't able to correct those errors, packets may get lost. And when we looked at shared-media networks with MAC protocols, depending on the MAC protocol you use, you may have collisions, which also means packets may get lost. So what you have is a packet-switched network with the property of being what's called a best-effort network. What "best effort" means is that the network has a few properties you have to cope with. The first property of a best-effort network is that packets may get lost. The second issue that arises in a best-effort network is that packets have delays, and the delays are variable -- in particular, the queuing delays that happen in switches are variable. The third issue is that packets may get reordered. Each packet is treated independently by the network, so you might have a stream of packets you want to send -- say, belonging to a video stream or a file -- and the sender sends them in sequential order, but the packets may take different paths through the network. In fact, there may be switches that, for whatever reason, don't treat packets in first-in, first-out order; they may reorder packets. But more generally, packets may take different paths because the routing protocol might decide to change the paths on you, and so packets may get reordered. And the fourth issue in a best-effort network is that packets may get duplicated -- the same packet may show up multiple times even though it was sent only once, for a bunch of reasons. One is that there could be problems in the implementation of the switches or the nodes that cause packets to get duplicated. But it could also be that you have a link with a high packet loss rate, or a shared medium with a MAC protocol that has collisions, and so there's a retransmission protocol at the lowest layer that tries to resend the packet a few times over that link or medium. Sometimes, multiple copies of the same packet get through -- and we'll understand better why that happens today. So packets may get duplicated. Now, in a way, a packet-switched network like the internet is great, because it's very easy to build. And the reason it's easy to build, in some sense, is that about the only property the design of the network provides is to tell the endpoints: oh, I might get your packet through.
There are no guarantees on anything. As long as there's some non-zero probability of getting a packet through from one endpoint to the other, that's pretty much all it takes to declare that you have a conforming best-effort network. So it's easy to build. But of course, it means you have all these issues to deal with if you actually want to run applications. As an example, say you're trying to download a web page -- a set of pictures and text on a page. What you would like is an abstraction, some scheme you can implement between the endpoints, such that an application sends a bunch of bytes or packets -- sends a message -- and at the receiving side, you get those bytes reliably. So that's what we're going to understand today. Today and next week, we're going to look at how to implement a protocol that provides reliable data transport. And the ideas we're going to look at are probably the ideas in the world's most popular computer program -- most popular in that it runs in the most places. It's a protocol called TCP, which stands for the Transmission Control Protocol. Now, we're not going through all the gory details of TCP; we're going to look at a simplified version -- maybe it's TCP-lite -- but it'll cover the main idea of how you can achieve reliability. This particular program runs on pretty much every computer, every phone, and every little device on the internet today, so it's really, really popular. In fact, we're going to start with an even simpler reliable data transport protocol, one that isn't used between endpoints the way TCP is: a simple version of it runs in every 802.11 device -- your laptop, phone, and access points. So we'll study that protocol first. In the context of end-to-end, between-endpoints reliable data transport, the problem is the following. You have some network here -- a best-effort network with those properties -- and you have an application at each end, running on the endpoints. What the system we're studying provides is an abstraction where you run software at each end, and all of that software sits on your end node. Let's call one side the sender and the other the receiver. The application writes some data in at the sender. The network is a best-effort network. And there's some protocol between these two pieces of software that makes sure that, no matter what the network does, what goes up to the application here is exactly the data that was written by the application there, in exactly the same order in which it was written. So it provides reliable and in-order delivery of data: every piece of data that's written shows up, exactly once, in exactly the same order, at the receiver. These two ends constitute the transport layer, and they [INAUDIBLE] at the endpoints, OK? The application writes in here; the transport protocol delivers up to the application stuff that's reliable and in order.
And in particular, it provides what you can think of as "exactly once" semantics: anything that's sent is delivered exactly once to the receiver, and it's delivered in order. Now, that's the abstraction TCP provides, and that's the abstraction of our 6.02 protocol: reliable, in-order, exactly-once delivery. There are other combinations you could have. There are protocols that provide reliability but not ordering -- I'll give you all the data you sent, but it may show up in a different order, and it's your problem to fix it. Or you might have a protocol that provides ordering but not reliability. If I'm doing real-time video conferencing -- say, Skype -- Skype would probably want a protocol that's in order but not reliable. Because if I speak, you'd like to get those things into the Skype application in order; but it's not really required that it be reliable, because if a message shows up, say, more than 100 or 200 milliseconds after I spoke it, it's going to distort the conversation -- it's not going to be intelligible to you. And the human brain is wonderful at dealing with some clippiness in the voice: occasional packets get lost, and it's not the end of the world. So there are lots of interesting applications where in-order is useful but perfect reliability isn't. And there are applications where reliability is useful but ordering isn't -- BitTorrent would be an example. There, eventually you want all of that movie you're trying to get, but who cares what order the pieces come in? You're not going to start watching until the whole file is assembled. And so the protocol BitTorrent uses, in effect -- it's a complicated protocol, not point-to-point, but in effect -- provides reliability without worrying about ordering. So there are lots of combinations. The combination we care about is reliable and in order, essentially giving you the illusion of a circuit between the two endpoints -- a wire between the two endpoints, OK? So is the abstraction clear? Everyone understands what we're trying to solve? In between, just think of it as an adversary, some network in the middle: you send packets, and the thing is throwing packets away; every once in a while, just for the heck of it, it decides to delay a packet for a long time; and every once in a while, it decides to send packets along different paths. Your job is to deal with all of that and design the sending and receiving sides so that stuff still shows up reliably and in the same order in which it was sent. So we're going to try to solve this problem, first with a protocol that has a nice name: stop and wait. It's a very simple idea, and it's a protocol that works -- it's slow, but the good news is it's correct; it gives the semantics we want. Then we'll try to improve its performance. It's a very simple idea -- just think about this for three minutes, and you'll come up with something that looks like it. You take the message you want to send -- file, stream, video, whatever it is -- and break it up into packets. So far, there's nothing new here. The first main idea is that we're going to number every packet with a sequence number. That's what's shown here as "Data 1," "Data 2," "Data 3," and so forth. So we're going to use a sequence number on every packet.
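That packetization step might look like this -- a minimal sketch, where the size parameter and numbering-from-1 are my own choices for illustration:

    def packetize(message: bytes, size: int):
        """Break a message into (sequence number, payload) packets.

        Packets are numbered 1, 2, 3, ...; each carries up to `size` bytes.
        """
        return [(i // size + 1, message[i:i + size])
                for i in range(0, len(message), size)]

    print(packetize(b"hello world", 4))
    # -> [(1, b'hell'), (2, b'o wo'), (3, b'rld')]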
Now, again, there are many ways to implement sequence numbers. The simplest and most conceptually clean is that every packet has a sequence number that increments by 1 for each subsequent packet sent. You might start the numbering at 0 or 1 or whatever -- the sender and the receiver just have to agree. Now, in reality, TCP is a little more complicated: TCP numbers the bytes, using the byte offset in the stream. So if you send a packet of 25 bytes, and the next packet is 200 bytes, the first packet is going to have a sequence number of, let's say, 0, and the second packet is going to have a sequence number of 25, because what's numbered is the starting byte offset. But these are details that, to first order, we don't have to worry about. The important point is that there is a sequence number, and the sequence number is a unique identifier for the packet. In other words, if I later send a packet with the same sequence number, I have to guarantee that the material inside the packet is the same as it was before. We'll never use a sequence number for some other set of bytes; we'll only use it again, for the same set of bytes, if we ever retransmit a packet. When the receiver gets a packet with a certain sequence number, it does what the post office does if you send registered post -- it turns around and sends an acknowledgment. And to let the sender know which packet is being acknowledged, it sticks in the sequence number of the packet being acknowledged. So you send Data 1, you get ACK 1; Data 2, you get ACK 2; and everything is wonderful. Easy, easy protocol. So what happens when a data packet is lost? The sender is not going to get an acknowledgment. And after some period of time, called the timeout, the sender decides to retry: it resends the packet, and if that works, it gets an acknowledgment. When it gets that acknowledgment, that's when it sends the next packet. So the property of the stop-and-wait protocol is that you send a packet only after you get an acknowledgment: you send packet k plus 1 only after you get an acknowledgment for packet k. If you don't get an acknowledgment for packet k, you wait for a period of time called a timeout, and after that timeout elapses, you retransmit the packet you thought was lost. OK, simple. Now, is this protocol reliable? When I ask that question, you should assume that the network may drop and reorder packets and do whatever it does -- but that any packet, data or acknowledgment, sent on the network has a non-zero probability of reaching the other side. Because if the probability of packet loss is 1, no one can help us. So: is this protocol reliable? OK. Is this protocol in order? Well, I haven't actually described what the receiver does, so I should tell you: when the receiver gets a packet, it delivers it to the application. And it'll turn out that, with that rule, this protocol is not necessarily in order the way I described it -- and I'll come back to why. But so far, it looks like the protocol is in order.
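Before we get to the receiver, here is the sender's side of the loop just described, as a minimal sketch. The `channel` object -- with `send`, and `recv_ack(timeout)` returning the acknowledged sequence number or None when the timeout expires -- is an assumed interface, not anything from the lecture:

    def stop_and_wait_send(channel, payloads, rto):
        """Send payloads as packets 1, 2, 3, ...; send k+1 only after ACK k."""
        for seq, data in enumerate(payloads, start=1):
            while True:
                channel.send((seq, data))            # same seq => same bytes
                ack = channel.recv_ack(timeout=rto)  # None if the RTO expires
                if ack == seq:
                    break   # got the ACK we were waiting for; move on
                # otherwise: timeout, or a stale ACK -- retransmit packet seq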
When the receiver gets a data packet, it delivers it up to the application. So is the protocol potentially not in order? It's not actually in order. We'll get back to why. You have a question?
AUDIENCE: Yeah, I was just wondering: if the receiver gets a data packet and then tries to send an acknowledgment back, and the acknowledgment gets lost, I guess the sender will resend the data. Does the receiver then compare, to figure it out?
HARI BALAKRISHNAN: Right -- I haven't specified that, and you're one step ahead; you're at the next picture here. What happens in this case? You get a duplicate packet. And it's precisely for this reason that this protocol is not actually -- it's kind of in order, but "in order" means you deliver packets in the same order in which they were sent. Given the description I've given of this protocol, it does not provide exactly-once semantics; it provides at-least-once semantics. In other words, every packet is delivered at least once to the application, whereas what you would like is to deliver every packet exactly once to the application, in order. So what would you have to do in the software you write at the receiver's transport to take the same idea and make it a reliable, in-order, exactly-once protocol? Yes?
AUDIENCE: Look up whether you received that sequence number.
HARI BALAKRISHNAN: Look up whether we received that sequence number.
AUDIENCE: I think that's right.
HARI BALAKRISHNAN: Good. So one implementation is to keep track of all the sequence numbers you've ever received and delivered up to the application; when a new packet comes in, you look to see whether it's in your list, and deliver it if not. But you could do better -- otherwise you'd have to do all that work and keep track of the list of every sequence number you've ever received for this protocol to work.
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: Yeah -- is it enough to keep track of simply the very last sequence number you've delivered, and also guarantee that you'll only deliver stuff in order? If you've gotten up to packet number 17 and you now get 18, you deliver it up to the application and update your counter of the last sequence number delivered from 17 to 18. If your last in-order sequence number delivered is 17 and you get 16, you throw it out. If you get 17, you throw it out. And if you get 19 -- which probably shouldn't happen in this protocol unless there's a mistake in the sending implementation -- well, if the last sequence number the receiver got was 17, can the sender send 19?
AUDIENCE: No.
HARI BALAKRISHNAN: Why not?
AUDIENCE: Because we have 17 acknowledgements [INAUDIBLE].
HARI BALAKRISHNAN: That's right. Unless there's a bug on one side or the other of the implementation -- and trust me, when you implement it, you'll probably end up having some bugs, and you'd know something is amiss -- there are invariants that have to hold. The sender can send k plus 1 only if it gets an ACK for k. The sender gets an ACK for k only if the receiver got k. And therefore, if the receiver's last in-order sequence number received and delivered to the application was 17, it can't actually get a 19 in a correctly implemented protocol. But if it does -- well, in the real world, you don't know who the heck wrote the sending side.
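A minimal sketch of that receiver, again over an assumed `channel` interface (`recv` blocking for the next data packet, `send_ack` sending an acknowledgment), with `deliver` handing data up to the application:

    def stop_and_wait_receive(channel, deliver):
        """Deliver each packet to the application exactly once, in order."""
        last = 0                          # last in-order seqno delivered
        while True:
            seq, data = channel.recv()
            if seq == last + 1:           # the next expected packet
                deliver(data)             # hand it up -- exactly once
                last = seq
                channel.send_ack(seq)
            elif seq <= last:             # a duplicate: the earlier ACK was
                channel.send_ack(seq)     # probably lost, so re-ACK, but do
                                          # NOT deliver the data again
            # seq > last + 1 should never happen with a correct sender;
            # a rigid receiver just drops it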
You might have done your receiver, and the sender might have been done by -- oh, I don't know -- Microsoft. And it may have an issue in it. Or Apple, or whoever. You don't want to trust it, right? So you have to be careful: you don't want to assume the other guy implemented the protocol right, because he might not have, and who knows what might happen? So your rule as the receiver is to rigidly obey whatever the discipline is -- which is that you deliver up packets exactly in order. OK, so we wanted exactly-once semantics, and the way you get that is by keeping track of the very last sequence number you received and delivered in order. So the first idea in this protocol is sequence numbers; the second idea is retransmission after a timeout. Now, how big should this timeout be? This whole protocol rests on this magic timeout. What should it be -- 15? 17? What are the units of this timeout? It's time -- seconds or milliseconds or something. How big should it be?
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: What?
AUDIENCE: 5 milliseconds.
HARI BALAKRISHNAN: 5 milliseconds?
AUDIENCE: [INAUDIBLE] milliseconds.
AUDIENCE: Units [INAUDIBLE] milliseconds.
HARI BALAKRISHNAN: Yeah, the units are seconds or milliseconds. Good. But how would you pick it?
AUDIENCE: You know the round-trip time.
HARI BALAKRISHNAN: OK, good. That's a good idea. There's this thing I've written on the left called the round-trip time. You don't know the round-trip time, but you could measure it -- and I'll talk about how you measure it a little later. But it's important to realize what happens if you make the timeout smaller than the round-trip time, where the round-trip time is defined as the time from when you sent a packet to when you got an acknowledgment for it. Let me first ask: is the protocol still correct? By correct, I mean, does it provide reliable, in-order delivery? OK, it's correct -- correctness does not rest on how we pick the timeout. However, what is the problem with making the timeout smaller than the round-trip time?
AUDIENCE: [INAUDIBLE]
HARI BALAKRISHNAN: Yeah -- you're going to be retransmitting and retransmitting, using up a lot more of the network's resources than you need to in order to get your protocol to work correctly. And if the timeout is really, really small, you would probably congest the network. OK, so the timeout has to be bigger than the round-trip time. The trouble in a best-effort, packet-switched network is that delays are variable, and packets may be reordered -- there may be weird things going on in the network -- which means the round-trip times are not constant. They vary with time, with other traffic, with lots of other factors. So what you want is an adaptive method that measures the round-trip time, estimates it, and then uses some algorithm to compute or set the timeout as a function of those observations of the round-trip time. I'll get back to that later today, and we'll also talk about it in recitation tomorrow. It's actually a very nice application of a very simple low-pass filter, so we'll come back to this idea.
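To preview that idea -- and this is only a sketch of one common approach, with constants I've picked for illustration; the lecture defers the real details to recitation -- the low-pass filter keeps a smoothed RTT estimate and derives the timeout from it:

    class RttEstimator:
        """Smoothed round-trip-time estimate via a simple low-pass filter
        (an exponentially weighted moving average)."""

        def __init__(self, alpha=0.125, beta=2.0):
            self.alpha = alpha   # filter gain: weight given to each new sample
            self.beta = beta     # safety multiplier, so that RTO > RTT
            self.srtt = None     # smoothed RTT; unknown until the first sample

        def sample(self, rtt):
            """Feed in one measured round-trip time."""
            if self.srtt is None:
                self.srtt = rtt
            else:
                self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt

        def rto(self):
            """Retransmission timeout derived from the smoothed estimate."""
            return self.beta * self.srtt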
But what I want you to have in your head right now is the idea that there's a timeout -- which I'll call RTO, for retransmission timeout -- and that the retransmission timeout has to be bigger than the round-trip time, OK? What I still need to tell you is how to measure and estimate the round-trip time, and how to use those estimates to pick the timeout. But let's subcontract that problem: let's say there's a black box that tells you what the timeout should be, and now you have this protocol. So, assuming someone gives us the retransmission timeout, what I'd like to do now is spend some time on how well this protocol works. I'd like to understand the throughput -- the data rate you get if you run the stop-and-wait protocol. So that's what I want to do now: throughput of stop and wait. I'm going to assume a very, very simple model. I'm going to assume for a minute that the round-trip time doesn't change a whole lot -- a very simplifying assumption, but there's some average round-trip time, and the same result holds if it varies. So let's assume the round-trip time is fixed at RTT, and that somebody tells us the retransmission timeout, RTO. And I need one more parameter: I'm going to assume I know the network's packet loss rate. Because, intuitively, if the packet loss rate is zero -- no data packets and no acknowledgments are lost -- you'd expect this protocol to have higher throughput than if packets were lost, right? If the packet loss rate is 50 percent, you'd expect that half the packets or ACKs are getting lost, which means you have to retransmit; and every time you retransmit a packet, the protocol stops and waits until the timeout fires. So the bigger the packet loss rate, the slower you'd expect the protocol to be. So I'm going to assume we have RTT and RTO, and a packet loss rate of L. What does that mean? It means that if I send a large number of packets into the network, a fraction L of them get lost. And I'll just assume, in this simplifying model, that the packet losses are independent -- Bernoulli losses; every packet is lost independently with some probability. Now, does it matter to the performance of the protocol whether the data packet is lost or the ACK packet is lost? It doesn't -- and this is an important point to understand. As far as the sender is concerned, if a timeout happens, it has no way of knowing whether the timeout happened because the data was lost or because the ACK was lost. The receiver does know whether it got a data packet or not -- but the sender is acting only on the absence of an ACK, and the absence of an ACK indicates either that the data was lost or that the ACK was lost, and it has no idea which. Therefore, for this analysis, we can treat this packet loss rate L as a bidirectional packet loss rate: L is the probability that either a data packet is lost or its ACK is lost, OK?
Now, if I give you the one-way loss probabilities, you can do the calculation -- it's a straightforward probability calculation -- to find the probability that either the data packet or its ACK was lost. But let me just assume that that probability is L. So, given these numbers, I want to know the throughput: how many packets per second am I able to transmit, or, equivalently, to receive at the receiver? Look at what happens in this picture, with time drawn like that. You send a packet, and maybe you get an ACK: D1, A1. You send D2 immediately, and you get A2 after some time. You send D3, and maybe no ACK comes: a period of time -- the RTO -- passes, so you send D3 again. Maybe no ACK happens again for a while; after another RTO -- I'll assume the RTO is fixed here -- you send D3 again, and you get an ACK. Then you send D4, and so forth. That's an example of one particular time evolution of the protocol. What I mean by throughput is this: I'd like to run such an experiment for a very long time -- or run many, many such experiments, which is sort of equivalent -- and count how many packets I successfully got at the receiver; or, equivalently, how many ACKs I got at the sender. The number of ACKs I get at the sender, divided by the duration of the experiment, tells me the number of packets per second. Put another way: consider the time from when I first send a data packet to when I get its ACK -- send a data packet, get an ACK; send a data packet, get an ACK -- and take the expected value of that time. If I run the experiment long enough to receive n acknowledgments, the experiment takes about n times that expected time, so the throughput is n divided by n times the expected time -- which is 1 over the expected time, in packets per second. So, with a little bit of handwaving: I send data, I get an ACK; I send data, I get an ACK; there's a certain expected amount of time per exchange, so I send 1 over that many packets per second, OK? In other words, the throughput is the reciprocal of the expected time between when I send a packet and when I get its acknowledgment. So it's enough for us to compute the expected value -- the mean -- of that time. All right, we could do that calculation in a couple of ways: there's the sort of tedious way, and there's a very simple, nice way. So we want to calculate the expected time between data and ACK. One way is to say: I send a data packet, and one of two things can happen -- I either get an ACK for it, or I don't.
What's the probability that if I send a data packet, I get an ACK for it? Well, the probability that I send a packet and I don't get an ACK for it is L. Therefore, the probability that if I send a data packet, I get an ACK for it, is 1 minus L, right? So with probability 1 minus L, I send a data packet, and I immediately get-- and when I say "immediately," I get an ACK for that data packet, right? And how long does that take? If I get an ACK for it, the ACK comes back to me in a time which is equal to RTT, the Round-Trip Time, right? So therefore, I can write a formula that looks like this. I can write this expected time which I'm trying to calculate as follows. With probability 1 minus L, the expected time between when I send a data packet and when I get an ACK for it is equal to the RTT, right? Because 1 minus L is, by definition, the probability that I send a packet and I get an ACK for it. Send a data packet and get an ACK. Now, what happens with probability L? With probability L, I send a data packet, and I don't get an ACK for it. So now I want to compute the expected time given that I don't get an ACK for it. The first thing that has to happen is I need to take a timeout. So I have to wait for a period of time shown in this picture given by the RTO. And then once I wait for that RTO, and I now start by sending a data packet, the expected amount of time before I get an ACK for that data packet is exactly equal to the original expected time that I'm trying to calculate, right? Because it doesn't matter what happened in the past. Let's say I take a timeout. And now, I come back here and [INAUDIBLE]. I'm now going to send a data packet. What's the expected time before I get an ACK? Well, that's exactly equal to the same answer that we're trying to calculate, this expected time over here. Therefore, I could write this recursion type of relationship. The expected time is 1 minus L times the RTT, plus L times the quantity RTO plus the same expected time that I'm trying to calculate, right? What this says is with probability 1 minus L, the time it'll take for me to get an ACK is equal to the RTT. And with probability L, it's equal to-- first of all, this RTO-- I have to wait for that retransmission timeout. And then once I do that, well, I have to add some more time. And that time that I have to add is exactly equal to the same expected time from the left-hand side that I'm trying to calculate. Does this make sense? You could kind of do this in a more tedious way. You could say, well, with probability 1 minus L, my time is RTT. With probability L times 1 minus L, the time is equal to RTT plus RTO. With probability L squared times 1 minus L, this is, like, two losses and then a retransmission-- the time is 2 times the RTO plus RTT. With probability L cubed times 1 minus L, it's that. If you do all of that stuff, you'll get the same thing. But there's a simpler way to do it. So if you take the expected time over to one side and solve this equation, what you'll end up with is that the expected time is equal to RTT plus L over 1 minus L times the RTO. I mean, as the packet loss rate becomes larger and larger and larger, this term starts to dominate because L over 1 minus L starts to be bigger and bigger and bigger, which is what you would expect. If the bidirectional packet loss rate is large, you'd expect the RTO term to start to dominate, and the expected time is larger and larger and larger. If the packet loss rate is zero, then the expected time is exactly equal to the RTT.
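To put that algebra in one place, here is the recursion and its solution restated in LaTeX; this is exactly the derivation just described, not new material:

\[
E[T] = (1-L)\,\mathrm{RTT} + L\,\bigl(\mathrm{RTO} + E[T]\bigr)
\;\Rightarrow\;
(1-L)\,E[T] = (1-L)\,\mathrm{RTT} + L\,\mathrm{RTO}
\;\Rightarrow\;
E[T] = \mathrm{RTT} + \frac{L}{1-L}\,\mathrm{RTO},
\qquad
\text{throughput} = \frac{1}{E[T]}.
\]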
You send a packet. You get an ACK. And after an RTT, you send the next packet. You get an ACK. And of course, the throughput's equal to 1 over the expected time. That's the reciprocal of the expected time, OK? Now, what's the best case here? The best case here is that you get one packet per round-trip time. The worst case is arbitrarily bad depending on the packet loss rate. But the important point here is that even in the best case, you're able to send only, at most, one packet per round-trip time. The question is, how good or bad is one packet per round-trip time? Is this clear, this intuition behind why this is one packet per round-trip time in the best case? That should be pretty obvious, right? I send a packet. I get an ACK. Send a packet. I get an ACK. This calculation just shows a little bit more detail about what happens when the packet loss rate's non-zero. So if the packet loss rate is, say, 20%, you take 1/5 over 4/5. So it's RTT plus 1/4 of the retransmission timeout. That's what it says the expected time is. And 1 over that is the throughput. Now, how bad or good-- is it clear? Any questions? OK, so now, how good or bad is this 1 over the round-trip time? So let's say that you have a network between Boston and-- I don't know-- San Francisco. And if you do these pings or whatever, let's-- I mean, I don't know the real numbers, but let's say it's 80 milliseconds. Just for the calculation to be easier, let's assume it's 100 milliseconds. And let's say that a packet on the internet is about 10,000 bits. So let's make it bytes-- let's say that it's about 1,500 bytes. So what this says is that the throughput that I would get with the stop-and-wait protocol if I ran it on this internet path would be 1,500 bytes divided by 100 milliseconds. So that's 15,000 bytes per second, 15 kilobytes a second, which might have been really, really good in 1985, but no one's going to be happy with this today. I mean, you might have a link that's a megabyte a second or a gigabyte a second or 10-- you know, bigger than that. But no matter how fast the network links are, this protocol is completely dominated by the delay or the latency, the round-trip latency, between the sender and the receiver. And you end up with a throughput that's pegged to a small value. And so, people don't like that. So the question is, how can you do better? What can you do now to this protocol? Or come up with a new method, a new protocol that would improve the throughput of this system. Because if people pay money for network links, they'd like to actually get higher performance from it. So what could you do? AUDIENCE: Larger packets? HARI BALAKRISHNAN: What? AUDIENCE: Larger packets? HARI BALAKRISHNAN: Larger packets. Well, yeah, larger packets is-- yeah, why don't we make our packets as big as the file we want to send? Actually, I digress. Why don't we make packets really big? Like, I got a megabyte file or a gigabyte file to send. Why do I have to break it up into smaller packets? AUDIENCE: Larger packets use more bandwidth? HARI BALAKRISHNAN: Well, to send the data, no matter if we break it up small or big, you're going to use the same bandwidth. I mean, that's a good question. Yeah, you have an answer? AUDIENCE: [INAUDIBLE] how to [INAUDIBLE] over time. HARI BALAKRISHNAN: That's kind of true. You know, let's say it's a gigabyte file you want to transmit.
And you send that in one atomic unit, and it goes through four hops in the network, and then it gets dropped on the last hop-- you end up having to send the entire gigabyte again over all those other hops. That's actually not good. But in fact, really large packets are probably a bad idea even for networks which don't drop any packets. I mean, think of the case when I have a gigabyte file to send, and you have a gigabyte file to send. The problem if you make these packets really big is that on a shared link, only one of us can send that packet at a time, which means the other guy is going to be waiting a really, really long time for that packet to get through. So the reason why, in the end, packets are of modest size has to do with our wanting to share the network evenly over smaller time scales. It's because we want to give fairness across smaller time scales, allowing everybody who's competing access to the network. So even if we have big amounts of data to send, we prefer to break them up into smaller chunks, among other reasons, one reason being we don't want to starve other connections and prevent them from gaining access to the network because there's some huge transfer sitting in front. So that's part of the reason. So anyway, bigger packets don't quite cut it. So what else could you do? Yes? AUDIENCE: [INAUDIBLE] to send. So if you cannot [INAUDIBLE]. HARI BALAKRISHNAN: OK, you know, well, I'll come back to this on Monday. That's actually a really good idea. But when would you stop-- 8, 16, 32? I mean, at some point, this is like-- AUDIENCE: [INAUDIBLE] at some point, it's going to fail. HARI BALAKRISHNAN: Because packets are lost. AUDIENCE: Yeah, so you go back. HARI BALAKRISHNAN: OK, this is a really good idea. We're not actually going to teach that here in this course. This is actually what TCP does in the beginning of the connection. But before we-- what else could you do? That's a good idea. Yeah. AUDIENCE: You could send a fixed number. HARI BALAKRISHNAN: Yeah, you could do a fixed number. You know, somebody could pick-- I actually kind of-- it is a really good idea to do 1, 2, 4, 8. And then if it fails, you come back down to, say, 1 or 1/2 of whatever worked the last time and then continue from that. That particular thing has a name to it. That protocol is called slow start. It's ironic because it's really fast. It's exponential, right-- 1, 2, 4, 8. But yet, it's called slow start. I'll probably tell you more about it on Wednesday. But we'll ease into that solution. We'll do something simpler. We'll use something called a sliding window protocol with a fixed-size window. You just make that 1 be 7 or 4 or 6 or 8. I'll tell you next time how you pick that value, OK? And one way to pick that value is to do it dynamically like the gentleman in the front said. It's more complicated. But let's just pick a fixed-size value. So the idea is actually very, very simple. Rather than having just one packet outstanding-- we use this idea in computer science over and over again: pipelining-- you just send multiple of them and have multiple outstanding packets. By "outstanding," I mean a packet that hasn't yet been acknowledged. A data packet that hasn't yet been acknowledged is called an outstanding data packet. And you have multiple of these outstanding. And every time you get an acknowledgment, you send one more packet. So that's shown in this timeline here, right? So you start here. You send a packet. I don't know why this isn't working. Ah, there we go. You send a packet.
You get an acknowledgment. When you get an acknowledgment, you send another packet. Get an acknowledgment. You send another packet. But in the meantime, there are these other acknowledgments coming in. And the rule is very simple-- every time you get an acknowledgment that you have not seen before, send the next packet in sequence. So the sender just keeps sending packets in sequence order. Every time it gets an acknowledgment that it hasn't seen before for a packet that it had sent before, it sends the next incrementing sequence number. So this painstaking animation will attempt to show you that, assuming it's correct. So the window here is five packets, OK? I'll tell you later some guidelines on how to pick this window size. But this number of packets here is called the window, the number of outstanding packets, or the number of unacknowledged packets. It's going to be 5 in this example. It's always going to be a fixed value in our protocol, OK? So you send the first packet. When you get an acknowledgment for that first packet, you slide the window forward by 1, and you send packet 6. When you get an acknowledgment for packet 2, you slide the window forward, and you send packet 7. When you get an acknowledgment for 3, you slide the window forward, and you send packet 8. This is-- sorry? Yeah. AUDIENCE: So it appears [INAUDIBLE] out of order. HARI BALAKRISHNAN: That's a good question. I'll get to that in a moment. The answer is that the sender's rule is always the same. Yes, you get acknowledgments out of order. As long as it's an acknowledgment that you have not seen before for a packet that you have actually sent, you slide the window forward by 1 and send a new packet, OK? And you keep track of the fact that you received an acknowledgment, so you know that you should never retransmit that packet. I want to define this thing and pause here. I want you to understand the definition of a window and internalize it. If the window size is W, what it means is that the maximum number of unacknowledged packets that you can have in the connection is W. There are many different ways of defining a window. In fact, TCP has two windows inside it. This definition is one of those windows. I won't talk about the second definition here. I'll get to it next week. It's not important for us right now. So again, to repeat, if the window size is W, it means that the maximum number of unacknowledged packets in the protocol is W. So the sender is going to very religiously adhere to this rule. In other words, every time it gets an acknowledgment, it checks whether it's an acknowledgment for a packet it has sent before that it has not seen an acknowledgment for before. If you get an acknowledgment like that, it means that some packet has been received, which means you can get rid of that packet from the stack of unacknowledged packets that you have and send a new packet. You can send a new packet because you know that the number of unacknowledged packets was reduced by 1 when you got that ACK, OK? It's a very simple rule to implement if you just follow that idea. It also is surprisingly easy to get wrong. Yeah. AUDIENCE: So the window doesn't necessarily have to be consecutive? HARI BALAKRISHNAN: The window doesn't have to be consecutive. This is a really, really good point. And it's very tempting to implement a window that's consecutive.
And you'll find that after a while, if you follow that idea, and you do it wrongly, the protocol will just stall. And every time, there's about a quarter of the students who, the first time they implement this, find that it just stops working after a while as the packet loss rate grows. So it's important that in this protocol, the way it's defined here, the window of unacknowledged packets is not necessarily consecutive. So you could have packets 1, 2, 3, 4, 8, 9, 10, 11 outstanding if your window size is 8. The other guys may have gotten acknowledged. That's absolutely true, yes. OK? All right, now what happens under all these other weird cases that are going to happen here? So let me first show you a timeline of how a timeout is dealt with. So let's say in this case, the window size is 5 again like it was before. So everything is going wonderfully well here. And let's say now you move on. You send packets 6, 7, 8. And let's say packet 8 is lost. What the sender is going to do is it's going to send packet 9. It's going to send packet 10 based on acknowledgments for 4 and 3 that it received before. So, sorry, when it got acknowledgment 3, it sent packet 8. 8 was lost. The sender didn't know that at this point. When it got an acknowledgment for 4, it sends 9. When it got an acknowledgment for 5, it sends 10. When it gets an acknowledgment for 6, it goes ahead and sends 11. When it gets an acknowledgment for 7, it goes ahead and sends 12. So at this point, the sender actually has outstanding 12, 11, 10, 9, 8, OK? Now, at some point, it discovers that-- in fact, this picture continues. In this picture, what happened is that you sent out 9. You've got an acknowledgment for 9. And at that point, you send out 13 because whenever you get an acknowledgment, you send out the next consecutive packet you should be sending out. So at this point in time, the sender has a bunch of outstanding packets in it, and it's got acknowledgments. And this is an interesting case because packet 8 was lost. 9 was sent later, and 9 got acknowledged. But we still haven't timed out on 8. So at this point in time, the outstanding packets in the window are 13, 12, 11, 10, and 8, giving you that nonconsecutive behavior that you noticed. And then at some point in time, based on the round-trip time, based on the black box, the sender timed out, and 8 got retransmitted. You got an acknowledgment for 8, and the protocol sort of continues in that fashion. Does this make sense? So it's actually not that hard when you think about it. What the sender does is a very simple idea, which is every time it gets a new acknowledgment for a packet it had sent before but hadn't seen an acknowledgment for before, it just goes ahead and sends the next packet. And then it has a separate process by which it maintains this timeout. And whenever an acknowledgment does not arrive within a timeout, it goes ahead and retransmits that packet. Now, when it retransmits the packet, the assumption here is that the original packet was actually lost. So the timeout has to be bigger than-- for the system to work well, the timeout has to be bigger than the maximum time that a packet can sit around in the system. So if the timeout is too small, and you retransmit 8 too early, it could be that 8 is not lost, but 8 is just being reordered in the network and going on some very long circuitous path.
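Before moving on, here is a minimal sketch in Python that puts the sender's rules together, as described above. It's an illustration under stated assumptions, not the lab's actual interface: the class and method names are mine, the "black box" timeout is modeled as a fixed rto, and a caller is assumed to drive on_ack and tick with a clock.

class SlidingWindowSender:
    def __init__(self, window=5, rto=1.0):
        self.window = window    # W: max number of unacknowledged packets
        self.rto = rto          # retransmission timeout from the "black box"
        self.next_seq = 1       # next new sequence number to send
        self.unacked = {}       # seq -> time last sent (need NOT be consecutive)
        self.seen_acks = set()  # ACKs already seen, so duplicates are ignored

    def transmit(self, seq, now):
        # Stand-in for putting a packet on the wire; (re)arms its timer.
        self.unacked[seq] = now
        print(f"t={now:.1f}: send packet {seq}")

    def fill_window(self, now):
        # Send new packets in sequence order until W are outstanding.
        while len(self.unacked) < self.window:
            self.transmit(self.next_seq, now)
            self.next_seq += 1

    def on_ack(self, seq, now):
        # Only a never-seen-before ACK for a packet we sent slides the window.
        if seq in self.seen_acks or seq not in self.unacked:
            return
        self.seen_acks.add(seq)
        del self.unacked[seq]   # never retransmit an acknowledged packet
        self.fill_window(now)   # release the next packet in sequence

    def tick(self, now):
        # Separate process: retransmit any packet whose timeout has expired.
        for seq, sent_at in list(self.unacked.items()):
            if now - sent_at >= self.rto:
                self.transmit(seq, now)

# Tiny demo: packets 1..5 go out; ACKs for 1 and 2 release 6 and 7;
# the ACKs for 3, 4, 5 never arrive, so tick() retransmits them
# once the RTO expires.
s = SlidingWindowSender(window=5, rto=1.0)
s.fill_window(now=0.0)
s.on_ack(1, now=0.2)
s.on_ack(2, now=0.3)
s.tick(now=1.1)  # retransmits 3, 4, 5 (sent at t=0.0, now timed out)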
You know, I recently read that someone got a letter in New York 70 years late-- it was sent in 1943, and it showed up, like, two weeks ago. So, I mean, it could happen on the internet too. I mean, quite literally, if you're on an Amtrak train, and you're using their wireless network, some packets come to you in 300 or 400 milliseconds, which is arguably very long. But literally, there are packets that will come back to you a minute after you sent them-- reach the receiver a minute after you sent them. And they could be out of order. So in fact, this is what my students and I call the great Amtrak network. It's really good because it gives us really interesting research problems to work on. But for the people who are on that train, it's probably miserable. So anyway, this could happen. And so, the timeout is a heuristic. It could be that 8 was retransmitted wrongly in a spurious way. But the idea is that if the timeout is long enough, you retransmit 8 because the original 8 was lost. And now the outstanding packets in the window are 8, 13, 12, 11, and 10. But the 8 here is not that 8, but this 8. But as long as the contents of 8 are the same, it doesn't matter which 8 it is. But of course, if the timeout is too small, there are two 8's sitting in the network. And now you actually have more than W packets in the network. But as far as the sender is concerned, it has exactly those five-- 8, 10, 11, 12, and 13. It's true that there's one more packet if the timeout happened too early. That's the network's problem. As far as the sender is concerned, it has five outstanding packets. The receiver is a little trickier than in the other case because what it has to do-- I mean, it's trickier in that it has to maintain a buffer of packets. So the receiver has a little more of a job to do. In the previous stop-and-wait protocol, any time it got an out-of-order packet, it's probably because the sender is badly implemented, right? If the last sequence number I delivered to the application was 17, and I got a 19, it's a bug. Whereas here, if the last sequence number I delivered to the application is 17, and I get a 19, it means that, well, there's a window. And maybe 18 was lost, or who knows what happened, right? Maybe 18 will show up later. So the receiver now has an interesting job. And this is important because when you implement this stuff in this protocol [INAUDIBLE], you've got to make sure that whenever you deliver packets from the receiver protocol to the application, you deliver them in order and update the last sequence number you delivered, OK? So that's important to do. But if you do that, and then acknowledge a packet when it's received-- just send an acknowledgment for it-- the protocol will continue, and it will work well. So this is what you'll be looking at in the lab, the piece that's going to go out today. This is the last lab. And then tomorrow in recitation, we'll look at how the timeout is selected. And then on Monday, I'll talk more about an analysis of this protocol.
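For completeness, here is a matching sketch of the receiver's job just described-- acknowledge every packet you receive, buffer out-of-order arrivals, and deliver to the application strictly in order. Again, the names and callback structure are assumptions for illustration, not the lab's actual interface.

class SlidingWindowReceiver:
    def __init__(self):
        self.last_delivered = 0  # last sequence number handed to the app
        self.buffer = {}         # out-of-order packets held back: seq -> payload

    def on_packet(self, seq, payload, send_ack, deliver):
        send_ack(seq)  # ACK everything received, even duplicates
        if seq <= self.last_delivered or seq in self.buffer:
            return     # duplicate: already delivered or already buffered
        self.buffer[seq] = payload
        # Drain the buffer while the next in-order packet is present.
        while self.last_delivered + 1 in self.buffer:
            self.last_delivered += 1
            deliver(self.buffer.pop(self.last_delivered))

# Demo: packet 2 arrives after 3, but the app still sees 1, 2, 3 in order.
r = SlidingWindowReceiver()
acks, app = [], []
r.on_packet(1, "a", acks.append, app.append)
r.on_packet(3, "c", acks.append, app.append)
r.on_packet(2, "b", acks.append, app.append)
print(acks)  # [1, 3, 2] -- ACKs go out in arrival order
print(app)   # ['a', 'b', 'c'] -- delivery to the app stays in order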
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
23_A_brief_history_of_the_Internet.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So this is the last but one lecture of the term. So what I'm going to do today is in about 45 minutes, give you a quick history of the internet from-- we'll start in the late 1950s and then get to today. And then next time on Monday, I'll conclude this class by first doing a wrap-up of 6.02, and then also telling you a little bit about where I think the future of communications systems might be going. I'll probably be wrong about it, but I'll be confident about it. And today, the idea is to try to connect some of the topics we've studied in the class so far to this history. Of course, we're not going to be able to do all of it. So for the story so far in terms of the history-- we're going to start in 1959, or 1957. And by this time, the history of communication systems has had a lot of successes. And a common theme is that there's a technology that comes around, it succeeds, it tries to take over the world. And right at the time when it looks like there's nothing else that's going to happen, some other new technology comes around that over time kills it. So the first successful network technology was the electric telegraph. And the electric telegraph was done by a number of different people-- Wheatstone and Cooke built an electric telegraph in England, Morse and Vail built one in the US. The Morse code was developed as part of the electric telegraph. And that did a great job in the 1830s, 1840s, 1850s, and so forth. And then other technologies came around. Now, this sort of story keeps repeating. And so by the 1950s, the dominant player in the communication area is the telephone network. So we have the Bell Telephone Company, and in the US, it dominates. There are equivalent telephone companies in other countries. And you have this massive, amazing telephone network. And increasingly, many people have telephones. On the wireless side, there isn't a wireless telephone system in the 1950s. But what we have are radio-- broadcast radio and broadcast television, and some very powerful companies that own wireless spectrum to offer television and radio. So the story starts in the late 1950s. And in 1957, a big thing happened-- Sputnik launched. And that caused the US to assume that they were falling behind in science and technology. And that led to the creation of ARPA, the Advanced Research Projects Agency, that still exists. It's known as DARPA today. And it is probably the biggest federal funder of fundamental research and a lot of applied research and development. Paul Baran in many ways is one of the fathers, or the father, of packet-switched technologies. He was working at a think tank called the Rand Corporation, which really was a think tank. It allowed people to think about long-term fundamental directions in terms of technology and where technology was heading. And he was looking at the problem of trying to build what he called a survivable communication network. And the story is that he's trying to build a network that can continue to work in the face of a nuclear war. But that's really not what he was going after. What he was really after was just understanding how you design communication networks that can handle failures.
And a lot of the telephone network of the time was a very centralized kind of structure, where you would have a network that had very little redundancy built into it. And you'd have lots of these central star-like topologies. And the problem, of course, when you connect these stars together, and these star-like pieces connect to other star-like pieces, is that you had better make these points here, these nodes or switches here, extremely, extremely reliable, and those links that are connecting them to these other central structures. Because otherwise, the failure of those things would kill the system. And the Bell Telephone Company actually understood how to build these very, very reliable switches, but they were extremely expensive. The other problem, and we'll get back to this later today, is that a telephone network is a great network. But it supports exactly one application. The application is you pick up the phone and you talk. It's very hard, on the face of it, to imagine how a telephone network would do a great job at supporting the web, for example. No one even thought of the web. But even something basic like being able to watch a video stream whose quality might vary with time-- these are things that the internet did differently and did better than the telephone network. But the telephone network fundamentally had, in those days, a fault tolerance issue. And the way they dealt with it was to build extremely reliable components. Paul Baran's idea, which other people had been thinking about and toying with, started from an observation-- in those days, the telephone network was largely analog. Digital telephony wasn't really there. And digital computers were just starting to come in. And Paul Baran was probably one of the first few people who realized that computing and digital technologies could change how you build these systems. Because the digital abstraction allows you to know for sure whether a component is working or not. Because if it works, it gives you an answer that then you can verify. And if it doesn't work, it just stops working. It isn't like an analog system where it may or may not be working, and as the noise increases or some fault occurs, you're starting to see a lot of noise. And it's garbled, you're not quite sure. With a digital system, it either works or it doesn't work. You can build systems like that. And he noticed that you could now start to think of building reliable systems out of lots of unreliable individual components. And this is the fundamental guiding theme for large-scale computing systems from the late 1950s. I mean, this goes all the way to how Google, or Amazon, or Facebook, or any of these big data centers work. Any single one of those computers there is highly unreliable relative to what you could do if you put in a lot of money to build a very reliable computer. But the ensemble is highly reliable. And to do that requires a lot of cleverness and care. And the first real example of this is a digital communication network built out of this idea of packets. In a couple of papers that he wrote in the late 1950s and early 1960s, he said that with the digital computer, you could now start to build highly reliable communication networks out of unreliable components. And so he articulated this idea that you can connect these switches or nodes together in highly redundant structures. And if you have a stream of data to send, you don't have to really pick a particular path through the structure.
You could take that message and break it up into different pieces and ship them in different directions. And even if you chose to ship them all in one direction, if a failure occurred, these switches could themselves start moving the data in different directions, which meant that you no longer think of a communication as a big stream that you have to send in one way, but you can start thinking about splitting it up into these different pieces. Now, like with many of these ideas, it's rare to find-- sometimes it happens, but it's rare to find exactly one person in the world thinking about it. No matter how groundbreaking the idea, there were other people working on it. And Donald Davies in the UK in the early '60s was looking at similar ideas. And he actually coined the term packet. We use packets now to mean these little messages that you ship through the network that are atomic units of delivery. This term was coined by Donald Davies in the 1960s. Now, all of this was wonderful and sort of theoretical abstractions. But how do you start to come up with some design principles for building communication networks? In particular, how do you deal with the problem of having these links come together at a switch, and try to share the links going out of a switch in a way that allows traffic from different conversations to multiplex on the same link? The idea of having a queue in a switch in retrospect seems completely obvious. But if it's the first time you're seeing something like this-- and the telephone network had really no queues-- the idea that you would build a queue and then start to analyze it is pretty groundbreaking. Again, there were many people involved. But probably the leading contributions came from a person called Len Kleinrock, who was a PhD student at MIT and who, in his 1961 PhD thesis, "Information Flow in Large Communication Nets," wrote about how you could use queuing theory to analyze and to model communication networks. Now, at around the same time, again at MIT, Licklider and Clark wrote a really interesting paper. It's actually worth reading now. I mean, it's 50 years old, but it's interesting. You have to go back and think, this was at a time when people didn't really have this idea that people could sit in front of computers. Computers were used to maybe count votes. I suspect they get it wrong today more than they did in those days. But computers were used to count votes, they were used to help with the US Census. But nobody thought about people sitting in front of computers. And they wrote this wonderful paper called "On-line Man Computer Communication." I guess in those days, you know, man meant people. So anyway, they wrote this paper. And in fact, Licklider had this vision of what he called a galactic network that would span the globe and beyond, which was-- for the early '60s, it was a pretty remarkable vision. Now, using these ideas, and particularly Len Kleinrock's ideas, and this idea of man-computer interactions-- of course, the idea of everybody having their own computer wasn't this paper's vision. This paper's vision was that there are a lot of computers out there, and people just had remote terminals, and you would log in and have these big computers that you could use. But then you would have nice interactions on your own terminal. That was what that paper was about.
Larry Roberts was first at MIT, and then moved to ARPA to run this program. He created something called the ARPANET, and wrote a paper and a call for proposals for the ARPANET, which was a plan for timesharing remote computers. So the ARPANET was the precursor to the internet. And it started not because we wanted to build a communication network that would keep working when there was a nuclear war or any of these major disasters. It actually had a very concrete goal-- just allow people-- computers were really, really expensive-- just allow people, no matter where they were, to be able to harness the power of expensive computing far away, and make it look to the extent possible as if the computers were with you. That was the vision-- pretty compelling, but simple. And they decided, for very good reason, to pick packet switching. The reasons primarily had to do with economics. This was a network that was being proposed for an application whose utility was questionable. And the idea of investing huge amounts of money was not quite palatable. And Larry Roberts and others were very taken by this vision of packet switching. So they said, you know what, the ARPANET is going to be a packet-switched network. I'll come back to this later, but of course, the telephone companies like AT&T just thought this was a terrible idea, and were using every opportunity to ridicule the idea. The ARPANET was created, and a few teams bid on the contract for it. And BBN-- Bolt, Beranek, and Newman, that's near Alewife in Cambridge. They're still there, they're part of Raytheon now. And they still continue to do pretty interesting research. They won the contract to build this network, build the technology for this network. And because the processing involved in the network-- the protocols that were involved-- was considered complicated and computationally intensive, they had to build a separate piece of hardware that they named the IMP, or the Interface Message Processor. And BBN won the contract to do that. The idea of an interface message processor is that every computer, as well as every switch-- the switch would have some hardware to forward stuff, but you needed something to do the computation of the routing tables as well as to actually process every packet. Every packet would show up, and you would have to compute some sort of a checksum on it. And you'd have to do this computational task of figuring out how to forward that packet. And that was just considered too much work to have on an actual little computer. So they actually had to build a separate piece of hardware that you would attach to your computer, and it would probably be about as big-- you can see the picture here. These IMPs were attached to bigger computers or computers of the same size. And these did the networking. I mean, today, all of that stuff, a million times more, is going on in this device. But back in the day, that's how it was. So they won this contract for the interface message processor, the IMP. And when you win a big federal contract, oftentimes, your congressman or senator writes to you. In fact, it's funny because for a period of time before the Senate election, a bunch of us were getting letters from Scott Brown congratulating us on winning some dinky little NSF proposal. I don't know if other faculty here got it.
But it's sort of like in those days, if you won a big contract, you'd get a letter from your congressman. So in fact, Ted Kennedy, who was the senator at the time, and was for many years, congratulated the team for winning this. Except he got it wrong-- he congratulated them on winning the contract to build the interfaith message processor. I assume that if they actually had built that, it might have been a more useful contribution to world peace. But all they managed to get was the contract to build the interface message processor-- just details. Anyway, this team was a pretty remarkable team. They built the first-- they didn't build the first email program. That was done over at MIT in the '60s. But what they did do was the first email program that crossed different organizations. And in fact, the @ symbol in your email addresses-- which of course is sort of the right symbol to use-- there was a person at BBN, Ray Tomlinson, who said, I'm going to put the @ symbol in email addresses. And a lot of early stuff happened that continues to this day. So they built this network. And Kleinrock over at UCLA was the principal investigator on this-- now building systems out of this piece of hardware that was built. And this was the picture of the ARPANET in 1969. This became the internet. There's a continuous evolution from this four-node picture to the internet today. And in 1969, they finally connected initially two, and then four nodes. And they had to do the first demonstration. And to listen to Len Kleinrock tell the story-- this was his story. He says that his group at UCLA tried to log into a computer at SRI, which is in Palo Alto. And he said, we set up a telephone connection between us and the guys at SRI. We typed the L, and we asked on the phone-- because they had to check whether it was working, so they had the phone to check. We asked on the phone, do you see the L? Yes, we see the L, came the response. We typed the O, and we asked, do you see the O? Yes, we see the O. And we typed the G, and the system crashed. But you know, they got something working. And of course, there's a nice statement here. You know, a lot of people worry about performance optimizations. But the most important optimization in a system is going from not working to working. And the fact that something worked is extremely important. Very soon after, they connected the East Coast-- a bunch of computers and organizations on the East Coast-- to the West Coast, MIT among them. So there was a team over at BBN, and a lot of them were from MIT. So MIT, BBN, Harvard, and Lincoln Labs on this side, and MITRE got connected. Over at Carnegie-- today, it's Carnegie Mellon University; I think it was called Carnegie Tech at the time-- the University of Illinois, and then long lines across the country. There was a group at Utah and in California. Now, what were these links? Anyone want to guess-- these links across the country, or between Harvard and MIT, or across over there, do you think they actually went and put in new cables and laid these wires? What do you think they were? They were phone lines. And so this idea shows up over and over again. The ARPANET was essentially an overlay built on top of the telephone network. And in fact, it was a hostile overlay. Because the telephone network didn't really like it-- I mean, at the time, they thought this was just an academic joke. But over time, it became clear that this underlying network was being used on top to do something different.
And so there is an overlay network that's built-- and overlays show up again and again and again. It's just that they're not as hostile these days. Another example of an overlay is BitTorrent, or any peer-to-peer application-- Skype, all of these things are overlays that are built on top of the internet. And in fact, a lot of the reason for their existence is because the internet doesn't quite do the right thing in terms of the right behavior for certain applications. So people say, let me go build an overlay on top of it, wherein you take a path involving multiple links on the underlying network and make it look like one link in the higher-level network. And when you do that, you get an overlay network. So this single link on the ARPANET is actually many, many links with many switches, and who knows how expensive it is underlying in the telephone network? But all you have to do is to pay the telephone network some amount of money and make a call or whatever, and you get to view it as a single link. And you can do the same thing on the internet. Now, the routing protocol they used was a distance-vector routing protocol. It wasn't actually even as sophisticated as the one we studied. But it was a distance-vector protocol. And distance-vector was the first routing protocol ever used in a packet-switched network. And it continued on the ARPANET for many years. They continued running this protocol. OK, moving on, we move from basic packet networks to this problem of internetworking. And that went through a series of demos. So one of them was they had a big conference in 1972. And they were demonstrating the simple packet-switched ARPANET. And it worked really well except when they demonstrated it to a team from AT&T, and it didn't work at all. And in fact, there were news articles that were written. And some people wrote this was a nice network, some people wrote it never worked. And AT&T just thought, ah, bunch of academics, it's never really going to work. They wrote a modified email program. And the US was not the only place where work was going on. In France, there was a really good team building a network called CYCLADES. And Louis Pouzin was the principal investigator of that system. I think that CYCLADES doesn't get enough credit because often, as it is with these things, the winner kind of-- ARPANET became the internet, and so sort of everybody forgot everything else. But CYCLADES actually came up with some pretty interesting, groundbreaking ideas. The idea of articulating that this network is going to be a best-effort network with these packets that they called datagrams, which is a word that continues to be used to this day, was in this French network. They originated the sliding window protocol. It looks obvious, but it's not. You can see there are lots of subtleties in how you build such a protocol and how you argue that it's correct. The first sliding window protocol was in CYCLADES. And TCP, which today is the world standard, used a very, very similar idea. And they also used distance-vector routing. And they also implemented, for the first time, a way to synchronize time between computers. And they had a number of interesting ideas in this network. The work was not just being done in the wide area. In 1973, ethernet was invented at Xerox PARC by a team that included Bob Metcalfe, who was another alumnus from MIT. That was inspired by this Aloha protocol that we studied.
And ethernet was essentially Aloha with carrier-sense multiple access, very similar to what we studied. This idea of contention windows came later-- they actually used the probability method. The ethernet standard evolved in the late '70s and '80s to use the contention window that we now know. And that same contention window idea and carrier-sense is used in Wi-Fi. So you can draw this stream of ideas that continues to exist to this day. It's interesting that ethernet today doesn't use carrier-sense multiple access. Because ethernet today is no longer a slow-speed network, it's a very fast network. It's not a shared bus, it's point-to-point links. But it's called ethernet, and it doesn't use the same MAC protocol other than when you have low-speed ethernet. On the other hand, wireless uses the idea from ethernet. And in fact, a lot of people call 802.11-- they used to call it wireless ethernet. And the ideas just got moved to a different domain, but it's the same ideas. And in fact, a lot of the early chipsets that ran the MAC protocol on 802.11 networks essentially were the same as the ethernet protocol. They used the ethernet MAC. They had that piece of hardware, they would buy it and build the box around it. So this idea of taking older technology, and applying it to a new context, and then modifying it is something that works pretty well. Because it means that you can leverage something that already exists and start making changes to it. And over time, it looks completely different. Now, the US government and DARPA-- ARPA was funding the ARPANET. But there were companies and other research groups in the mix here. And in those days, it was not very clear what was going to win. And everybody was doing research on coming up with different ways of connecting networks together. And Xerox had a system called PUP. I don't know what it stands for. I think it stands for the PARC-- Xerox PARC, Palo Alto Research Center, PARC-something protocol. I don't know what the U is. And in a way, there were many technical ideas in the Xerox system that actually were arguably better in technical terms than the ARPANET and TCP/IP. But it was proprietary, whereas TCP/IP was completely open. And open meant that you didn't have to pay anyone, you didn't have to get someone's permission to do it. The process by which things were standardized was far more open and democratic. And it won not because it was better, but because it was out there and open and free. There's a lot to be said for that model. Because for a network to succeed, you need to lower the barrier of entry so everybody can participate and implement it. And if you make network protocols proprietary, it usually ends up not benefiting anybody. So now, I think companies have started to realize that. So everybody understands that you want to make standards open, and then keep secret any particular strategy for how you implement it. So you might gain commercial advantage from implementation, but you gain no commercial advantage from keeping a protocol closed. There are exceptions to this rule. Like, Skype is an exception to this rule. But who knows? In 5 or 10 years, I suspect that Skype is not likely to remain dominant. There are going to be other things that will come about. And some of them might be open. In the mid-1970s, this idea that you now really start to connect many different kinds of networks together, networks that are being run in different organizations, took root. And this was the internetworking problem.
And this is the problem-- people were working on these packet-switched technologies. And there were many different kinds of packet-switched networks that showed up. So there was the Aloha network over in Hawaii. There were people building packet-switched networks out of ethernet. At MIT and Cambridge University, there were people who were very enamored of something called Token Ring. I don't know if Victor was at MIT at the time, or any of my colleagues were, but people were building these Token Ring-based systems that were technically pretty superior in some respects and interesting. And so there were many different kinds of networks that people were building and connecting their own campuses with internally. And you had to communicate between them. The trouble was, there was no single protocol to do this. So back in the day, when you bought ethernet technology from, say, Digital Equipment or Xerox or one of these companies, you wouldn't just get ethernet. You'd get the ethernet MAC protocol. Then you'd get some sort of network communication, a network layer between the different ethernet devices. And you'd get something called EFTP, which was a file transfer protocol. So you'd get applications around it. So imagine now, you're buying a network thing. And you don't get to run your own applications, you get a stack of everything. And you get a box, and you only get to use whatever the vendor gave you. That was the state of networking at that time. And people recognized this probably wasn't a very good thing. Because what you would like is to have a network where people can come up and invent their own applications and run their own applications on it. But you now needed a way to communicate between these different networks. So how do you do this? So this was a huge project that a lot of different organizations were involved in. But a large part of the credit is given to two people-- Vint Cerf and Bob Kahn, who were in some sense the lead people in getting a community of other people together in building the system. And they articulated these visions and these ideas. So Kahn's rules of interconnection are as follows. He first said that each network is independent and must not change. So the idea that you can bring networks together and communicate, if it required every network to change, that wasn't a palatable idea. The second is that he agreed with CYCLADES and said, best-effort communication is what we need. Because we cannot assume that every network will guarantee delivery. There are some networks that may guarantee delivery in order, but you can't mandate that. And what they said was, we will design this network with these boxes that we'll call gateways. And these gateways will translate between different network protocols. And in a pretty radical departure from the Bell Telephone network, they said that there will be no central global management control. There is no central place where the operation of this worldwide network or countrywide network is going to be managed. So it's kind of a simple idea-- you have your own internal network. This might be an ethernet, this might be some sort of Aloha network or what have you. But you have these gateways here that sit and translate between these different protocols. And we now know that what they did was a pretty good decision, which is they made it so that these gateways would all agree on one protocol.
And the protocol they standardized-- they got it wrong initially, but by the late '70s, they figured out that that protocol would be called IP, or the internet protocol. So a node is on the internet if it implements the internet protocol, which means it has an agreed-upon plan for how the addressing of nodes works, and it has a plan for what happens when you forward a packet. You have a routing table; you look up the IP address and then decide on the link. And that's all you have to agree upon. So to be on the internet, all you have to do is-- a network has to support IP addressing, and it has to agree that it will send packets of at least 20 bytes in size, because that's the length of the IP header. There's very little else that it has to do, so much so that people have written standards on how you can send the internet protocol over, you know, carrier pigeon. And in fact, someone demonstrated something like this, where they had these things, and these pigeons were delivering these scraps of paper, and there was something looking it up and sending it on. So it doesn't take much to be on the internet. So Cerf and Kahn then started designing the network. And they wrote in their original paper that you needed to identify the network you're in, and within the network, the host that you were at. And they said the choice of network identification allows for up to 256 distinct networks. Like, how many networks do you possibly need? How many organizations can you possibly have? And they wrote, you know, famous last words-- this size seems sufficient for the foreseeable future. The problem is they were slightly wrong. The foreseeable future in their case was probably less than 10 years, and it may not have been more than five or six years. But you know, they made a mistake. But what was interesting was, the next time the community got to make a change in that decision, they still made a mistake. They decided that 32-bit IP addresses are enough. And we've run out. We literally ran out, right now, of IP addresses. So they had these gateways that would translate, and you would run the internetworking protocol, or IP. So in the 1970s, this idea of internetworking was all the rage. And in 1978, there was a really good decision made to split TCP from IP. And a lot of that motivation was from a group of people at MIT. There's a paper here that you'll study at length in 6.033. It's one of these papers that you'll study two or three times, because it'll keep coming back, because these concepts are pretty important. It's called "End-to-End Arguments in System Design" by Saltzer, Reed, and Clark. They have many examples, but the gist of the end-to-end arguments is that if you have a system, like let's say a network, and you want to design that network, you have to make a decision about what features to put into the network. The end-to-end arguments say that you only put in features in the network that are absolutely essential for the working of the system. Anything else that's not crucial to the working of the system, you leave to the endpoints. So if you think about reliability as a goal-- like, does the network need to put in a mechanism to guarantee the delivery of packets-- the answer is no. The reason is that that property is required, for example, if you're delivering a file, but not if you're delivering a video stream or talking. Because not every byte needs to get there, which means you don't put that functionality inside the network.
You leave the function of achieving reliability to the endpoints, because not everybody needs it. And the only exception to the rule-- that the only functions you put inside the network are functions that are absolutely essential for the system to work-- is if the mechanism leads to significant improvements in performance. So for example, if you run on a network with a 20% packet loss rate, it makes sense to have some degree of reliability and retransmission built in at that network hop. Like, that's what Wi-Fi would do. Because if you didn't do it, you'd have sometimes a 20% or 30% packet loss rate. And that would make everything not work. But we don't try to design our network so that between the Wi-Fi access point and your computer, we produce perfectly reliable transmission. If you did that, it would then mean that you would have really long delays. And you would be providing that function for applications that don't need it. For an application where I would like to just send the bytes through-- if it gets through, great, if it doesn't, then I'll do something else-- that's a bad network design. Unfortunately, there are real networks today that don't obey this principle. Cellular networks are sometimes problematic, like Verizon or AT&T or something. You find in real data that there are long delays in these networks. Because between the cellular base station and your phone, they have decided to provide something that looks like highly reliable TCP. It's kind of a bad network design, but that's how they do it, some of them. And so this is an old principle, but it's sometimes not followed, and that's not so good. Now, the reason why packet switching and this TCP/IP split won in the internet compared to various other proposals that were floating around at the time is that this architecture, the internet architecture, is good enough for everything, but optimal for nothing. There is really no application for which the design of the internet network infrastructure is optimal. If you wanted to build a network to support voice, you'd go build a telephone network. You wouldn't build a network that looks like this. If you wanted to build a network to distribute television streams to a bunch of people, you wouldn't build the internet. If you wanted to build a network to support Facebook and nothing else, you probably wouldn't build the internet. But if you want a network that's going to support all those applications reasonably well, including applications that you cannot imagine today, this design is a very good idea because it's very minimalist. There's almost nothing that the network does. It leaves everything to the endpoints. So I would say in fact that the most useful lesson, which you will apply over and over again-- I mean, let's say you go work at a company, or work on some sort of research project. And at various stages of the project, there are endless discussions on what you need to do-- whether it's worth doing something or not. And the most important lesson that you can take away in system design is that when faced with a choice, try to make the simplest possible choice that gets the job done. Because most likely, if you get the application wrong of whatever it is you're building, if it's simple and minimalist, you could probably pivot around and use the same thing for that other application. So there's a famous set of quotations here.
One should always architect systems for flexibility, because you'll almost never know how your design will be used-- everybody has these use cases in mind, but let's face it, when you're at the early stages of a project, nobody actually knows. And you'll almost always get it wrong. So it's important to architect them for flexibility, not for performance, not for lots of functionality, just for being flexible, and the bare minimum to get the job done based on what you think is necessary. And it usually means doing that even if it means sacrificing performance. Like I said, the most important improvement in a system's performance is getting it to work. Everything else is secondary. So there's a nice quote here, I don't know if-- any French speakers or readers? Yes? AUDIENCE: [SPEAKING FRENCH] PROFESSOR: Yeah, I know. Just tell me in English. [LAUGHTER] Good enough, all right. AUDIENCE: Seems that perfection isn't attained when there's nothing to add, but there's nothing to-- PROFESSOR: Yeah, that's great. Yeah, perfection is achieved not when there's nothing more to add, but when there's nothing left to take away. And this is a really, really good lesson. I mean, you guys, every one of you is going to go into the real world, either at a startup company or a big company where you're defining a new product, or a research project, or you go to graduate school-- at the beginning, you won't know the right answers. You have some vague ideas of what it's useful for when you design anything. And it's really important to understand that you should do the bare minimum to get it to work. And it's a really, really good idea-- less is more. I have a very simple way to think of it. I tell my students this repeatedly-- when in doubt, just leave it out. If you're not sure if you need it or not, don't do it. There's enough stuff to do. And that's probably the most important lesson from many of the classes, at least on the systems side, that you'll be learning. Of course, it takes a lot of good taste and insight and intuition to figure out what's really important. I can't help you there. OK, so by the 1980s, the internet started to grow up. And the way in which you wanted to handle growth was this simple but brilliant idea called topological addressing. So I'm going to explain what that means. In the very early days of the internet, and including the simple small networks that we studied, every network node had a network identifier-- an IP address or some sort of a name for that node. So in the way in which we looked at it, nodes would have names like A, B, C, D, and E. But in reality, A, B, C, of course, are some set of bits that get communicated. And in the old days of the internet, you would have a two-part identifier. You'd have a sort of a network identifier or an organization identifier. So MIT would have a set of 8 bits, and then you would have a set of other bits here that communicated within MIT what that number meant. So just abstractly, these numbers meant nothing in the global internet. This could just be 110111-something. And then you would have another sequence of something else. That was the basic idea. Now, in the networks we studied so far, we wouldn't even have this. Every network node would just have some name. So what that meant is you could have a network address that was some set of bits. I could have a network address that was some other set of bits.
And the switches in the network, in order to forward packets to you or to me, would have to have entries in the routing tables that were one-to-one with all of the different nodes that they wanted to communicate with. So you would have a routing table with essentially one entry for every host in the network, which doesn't scale. It's just too much information. So topological addressing starts from the observation that per-node routing entries don't scale very well. What you would like to do instead is organize the network hierarchically. And it's sort of similar to the way in which the postal system works. So in the 1980s, they came up with a way of doing it using three kinds of addresses. I'll call them class A, B, and C addresses. We don't use those anymore, but let me describe what this kind of area-based addressing means. So here's a very simple, abstract view of this. The internet used to adopt this in some approximate way, but this is the conceptual idea. You divide the network into areas. So MIT might be an area, Stanford might be an area, Berkeley might be an area -- you know, BBN, all these different organizations are their own areas. Areas have numbers that everybody knows. So that's the first part, an area identifier. And then within the area, you might have a host identifier, or more generally, an interface identifier. What I mean by interface is that really, on the internet, my computer doesn't have an IP address. If I'm connected to the internet by ethernet, the ethernet interface has an IP address. It gets an IP address by virtue of connecting to a switch upstream of it. My Wi-Fi interface has an IP address. If I use Bluetooth, my Bluetooth interface has an IP address. In fact, in general, my computer might have four IP addresses at once -- one if I'm connected on ethernet, one on Bluetooth, one on Wi-Fi. And if I have one of those cellular modems, or if I tether through my phone, every time I do one of those things, I get an IP address, OK? So IP addresses on the internet name the network interface. So the way this area routing idea works is that within these areas, there's routing as usual and forwarding as usual. You could recursively build sub-areas; but if you didn't, each of these nodes would have an entry for all of the other nodes in its area. And within these areas, you would have border routers. And these border routers would only have entries for the other areas. So if you wanted to send a packet from area 1 to area 4, what you would do is send the packet to one of your border routers. And that border router would have an entry in its routing table to get to area 4. It wouldn't know anything about the details inside area 4. And so you have a nice hierarchy where inside the network, you only know how to get around inside your network and to the border. The borders know how to get inside, and the borders know how to get to other borders. But the border of one area doesn't know how to get inside any other network. So you can see now, you can recursively apply this idea and start to scale the routing system.
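Here's that conceptual idea as a minimal Python sketch. The class names, link names, and the "area.host" address format are all made up for illustration: interior nodes keep per-host entries only for their own area plus a default to a border router, and border routers keep one entry per area.

```python
# A sketch of area-based routing state; all names and the "area.host"
# address format are illustrative, not the real internet encoding.

class InteriorNode:
    """Knows every host in its own area, plus a default to a border router."""
    def __init__(self, area, intra_table, border_link):
        self.area = area
        self.intra = intra_table        # host -> outgoing link, in this area
        self.border_link = border_link  # where everything non-local goes

    def next_link(self, dest):
        dest_area, dest_host = dest.split(".")
        if dest_area == self.area:
            return self.intra[dest_host]   # routing as usual inside the area
        return self.border_link            # punt to the border router

class BorderRouter:
    """Knows how to reach other areas, but nothing about their insides."""
    def __init__(self, area, intra_table, inter_table):
        self.area = area
        self.intra = intra_table        # host -> link, within its own area
        self.inter = inter_table        # area -> link, one entry per area

    def next_link(self, dest):
        dest_area, dest_host = dest.split(".")
        if dest_area == self.area:
            return self.intra[dest_host]
        return self.inter[dest_area]       # one entry covers a whole area

# Routing state now grows as (hosts in my area) + (number of areas),
# instead of one entry per host in the entire network.
node = InteriorNode("1", {"a": "L0", "b": "L1"}, border_link="L2")
print(node.next_link("1.b"))   # L1: local delivery
print(node.next_link("4.z"))   # L2: hand off toward area 4's border
```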
Now, on the internet, what ended up happening was, well, they had to apply this area hierarchy. And very soon, organizations started saying, well, I have a big area, and I have a small area. So how big do you make this thing? In the very old internet, the area part was 8 bits long, and the rest of the address was the host part. If you have 8 bits, you can only have 256 organizations. And although Kahn and Cerf thought that was plenty, that clearly wasn't the case. So by this time, people were starting to build equipment with these 32-bit addresses. And all this hardware was out there, so what do you do? So what they said was, all right, let's have three classes of areas. For the really big guys, we'll have class A addresses. Then for the medium guys, class B. And for the little guys, class C. What that meant is that class A allows an organization to have up to 2 to the 24 addresses, because a class A network is identified by 8 bits, so you get 24 bits for hosts -- 32 minus 8. So MIT was pretty smart. They decided that they would go and-- you know, they were up there, they were doing a lot of networking research. So they said, we're going to go get ourselves one of these class A addresses, because we're a big university, and we've got lots of computers. And it was probably the case that at the time, MIT had more computers than most other places. So even to this day, they maintain this address, which is 18-dot-star -- well, technically, 18 dot star dot star dot star. So all 2 to the 24 addresses that start with the number 18, or in binary terms, the 8-bit pattern for 18, which is 00010010. So anyway, they went and got that done. Now, nobody wanted the class Cs, because with a class C you could only get 2 to the 8 addresses: the class C network part was 24 bits. So you could have something like 2 to the 24 organizations -- not quite 2 to the 24, but some large number -- but then each one would only get 256 addresses. Now, the organization doling out these numbers-- there's a particular organization. Actually, it was not even an organization at the time. It was Jon Postel at UCLA -- one guy was doling this stuff out, and then it became an organization. Jon Postel was great. The social aspects of how he managed this were remarkable. So he didn't dole these out randomly. And nobody wanted class C; everybody wanted class A. But what most people got was class B, which was 16 and 16. So you'd get 2 to the 16 addresses, and you'd get 16-bit identifiers for these areas. And over time, as the internet grew, the obvious thing happened. Because this is like the Goldilocks story, right? Class A is too big, class C is too small, and class B is just right. And pretty soon, there were on the order of 2 to the 16 organizations on the internet, and we ran out. Literally, they ran out of class B addresses. And by this time, by the early '90s, they realized that this rigid decomposition of addresses into classes was just not quite right. Because this idea of a fixed-size organization ID didn't make sense. Like, what if I need 2 to the 12 addresses? Under this scheme, you would have to give me 2 to the 16, which is ridiculous. So if I needed 2 to the 12, how do you actually do that? Well, I could get sixteen class C blocks of 2 to the 8 each, but if those blocks were not contiguous, then the routers in the middle of the internet would not be able to treat them as one and use the same prefix to define the entire network. They would have to have sixteen different entries, defeating the purpose of using this kind of area-based routing. So the whole thing was kind of messed up. So they actually got wise to this problem.
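The class arithmetic is easy to reproduce. Here's a simplified sketch: real class A/B/C addresses were marked by their leading bits (0, 10, and 110), which is what the first-octet thresholds below encode.

```python
# A simplified sketch of the classful address arithmetic. The thresholds
# stand in for the leading-bit patterns 0, 10, and 110 that actually
# distinguished the classes (classes D and E are ignored here).

def address_class(first_octet):
    if first_octet < 128:     # leading bit 0
        return "A", 8         # 8-bit network part
    elif first_octet < 192:   # leading bits 10
        return "B", 16        # 16-bit network part
    else:                     # leading bits 110 (and beyond)
        return "C", 24        # 24-bit network part

for octet in (18, 130, 200):  # MIT's 18.*.*.* is a class A
    cls, net_bits = address_class(octet)
    host_bits = 32 - net_bits
    print(f"{octet}.x.x.x -> class {cls}: 2^{host_bits} = "
          f"{2 ** host_bits} addresses for the organization")

print(format(18, "08b"))      # 00010010: the 8-bit network number for 18
```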
And they came up with a more sensible, sane way to deal with this problem. I'll talk about that probably on Monday. Now, in the meantime, in the 1980s, this growth was happening pretty rapidly. And they started getting organized. Vint Cerf, who was by then at ARPA, appointed Dave Clark, a senior research scientist at MIT, to the position of the internet's chief architect. And he was instrumental in writing standards, bringing together a lot of people, and organizing how people do this kind of standardization. So there was an organization called the Internet Engineering Task Force, or IETF. That's the organization that determines and sets the standards for the protocols that run on the internet. In 1982, a really important thing happened. This internet architecture community got a real boost when the US Department of Defense looked at various competing ways of designing networks and, kind of remarkably for a Department of Defense, decided to pick the open standard rather than some proprietary, closed standard that is supposedly more secure, though it really isn't. Remarkably, they said, we're going to standardize our entire systems on TCP/IP. And the Defense Department -- I don't know if it's still the case, it probably is -- in those days was a huge consumer of information technology. It still is; it's just that other people consume it, too. But in those days, it was probably the dominant consumer of information technology. And they standardized on it, and they awarded a contract to the Berkeley computer systems group to build TCP/IP, which by then had become the standard. There were many implementations, but they said, take Unix and go build the TCP/IP stack. And Berkeley did a lot of interesting things with it, including creating the sockets layer. They came out with what today is known as the open source implementation of the TCP stack. In 1983, MIT created Project Athena, which was the world's first campus-area network system. And they did a lot of work on things like distributed file systems and the Kerberos authentication scheme, and a lot of important ideas came out of this network. And they also ran the TCP/IP stack. They didn't run anything proprietary. In 1984, the domain name system was introduced. For those who don't know it, when you type in www.mit.edu, something converts it to an IP address. And then your network stack communicates and sends packets over TCP or UDP or these protocols to that address. How do you convert this? Well, originally, this was maintained in a file. This file was called hosts.txt. And believe it or not, the way this file would work was that every night, I think in the middle of the night, every computer on the internet would go and download this file from one computer that was located somewhere on the west coast. Like, literally, you would get a hosts.txt file. And of course, you could hack it. You could do whatever you want. And the assumption, of course, was that everybody was trusted -- I was told that in the early days, every computer had a root password which was empty, or at least these computers at MIT had an empty root password. Everybody could log in to any computer. And everybody was completely trusted, which I think is not true anymore. But you would download this hosts.txt file every day.
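For contrast, here's what replaced the nightly hosts.txt download, from the programmer's point of view: a one-call sketch using Python's standard socket library, which asks the DNS resolver for the address. It needs network access to run, and the exact address returned depends on what DNS says that day.

```python
# What replaced the nightly hosts.txt download: ask the DNS resolver.
# A one-call sketch using Python's standard library (needs network access).
import socket

print(socket.gethostbyname("www.mit.edu"))   # some IP address; the exact
                                             # value depends on DNS today
```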
And as the internet grew really fast-- You know, the internet has been growing by 80% to 90% a year -- not just in the past few years, not just in the '90s. It's been growing at 80% to 90% a year since 1980. So it's just been on this amazing tear. And so, anyway, this idea of downloading a file every night is just not a good idea. So they created the domain name system. Around the same time, the NSF, which is the National Science Foundation, got into the act, and they built the first internet backbone. The idea is that this is a backbone that connects all of these different networks together -- in particular, all of the universities. And they also picked TCP/IP as a standard. And again, the important lesson here is that they picked it as a standard because it was open, because it was very clear that these implementations were available, they were free, everybody could contribute to them, everybody could beat on them and improve them, and there was no proprietary technology holding things back. So what I will do here is stop. I'll pick it up at this point on Monday, talk about congestion control, how to hijack routes, and how to send spam without being detected, and then talk about the future of networking and communications.
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
13_Frequency_response_of_LTI_systems.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So today I'm going to continue with frequency response and filtering, but also begin the story of spectral content of signals. So our starting point is still something you've seen before, namely the statement that for an LTI system, a sinusoid into the system gives you a sinusoid out at the same frequency, but maybe shifted in phase and scaled in amplitude. So a bit of terminology here just for general interest: we refer to the exponential as an eigenfunction of the LTI system, because the only effect the LTI system has on it is a scaling. So an input to some kind of a mapping which comes out the same except for a scaling is referred to as an eigenfunction, or an eigenvector if you're talking about matrices. So we say that the complex exponential is an eigenfunction of the LTI system, because when it comes through, it's just the same exponential, but scaled by some number. And that number is what we refer to as the frequency response, right? And we've seen that there's a simple expression for it. And let me put that expression up, because we're going to use it repeatedly. The m here is irrelevant. It can be any dummy index, because we're summing over the m. You can call it anything you want. And I just should mention that there's other notation for this object. It's often written as H of e to the j omega, because actually, the way omega enters is always in the term e to the j omega. The e to the minus j omega m in the sum is just e to the j omega raised to the power minus m -- that is, 1 over e to the j omega, raised to the m. OK, so the whole thing is some function of e to the j omega, and people will often write it this way. And one of the advantages is that the notation right away tells you that this object is periodic with period 2 pi. Because if you were to increase omega by an integer multiple of 2 pi, you'd get the same e to the j omega again, and therefore H must be the same again. So this notation has the value that it keeps the periodicity front and center. It also makes sense when you're developing various other transforms. There's something called a z transform, which we won't deal with in this class, but it's used a lot when dealing with discrete-time systems. And the way that you get from the z transform to this object is by making the substitution z equals e to the j omega. So people will use this notation. The z transform uses z exactly where we use e to the j omega. But for our purposes, this is a much simpler notation. It's just that we need you to remember when you see this that we're talking about something that's got period 2 pi. And if you look at the definition, that becomes clear. If you increase big omega here by any integer multiple of 2 pi, you're going to get the same thing back again. There's another bit of notational confusion that can arise, which is that people will sometimes write little omega instead of big omega. So that's also used. So this is other notation, and it's notation that we will try not to use, but you might see vestiges of it when you look through old problems. Because in some terms, we may have used this notation, and in some terms, we may have used a little omega instead of a big omega. But for our purposes, we'll stick to big omega.
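Since the formula is easy to evaluate numerically, here's a minimal sketch (the unit sample response h and the frequency grid are made up) that computes H(Omega) as the sum over m of h[m] e^(-j Omega m) and checks the period-2-pi property just discussed.

```python
# A minimal numerical sketch with a made-up h: evaluate
# H(Omega) = sum_m h[m] e^(-j Omega m) on a frequency grid and
# check that H is periodic with period 2*pi.
import numpy as np

def dtft(h, omegas, start=0):
    """Evaluate sum_m h[m] e^(-j*omega*m); `start` is the time index of h[0]."""
    m = np.arange(start, start + len(h))
    return np.array([np.sum(h * np.exp(-1j * w * m)) for w in omegas])

h = np.array([0.5, 1.0, 0.5])                # made-up unit sample response
omegas = np.linspace(-np.pi, np.pi, 101)

H = dtft(h, omegas)
H_shifted = dtft(h, omegas + 2 * np.pi)      # same grid, shifted by 2*pi
print(np.allclose(H, H_shifted))             # True: H has period 2*pi
```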
OK, so when we say big omega, we're thinking of it as an angle around the unit circle. So if you've got the complex number here at an angle big omega, this complex number is e to the j omega, right? So we're thinking of big omega as an angle, something measured in radians, and it's different from little omega. You can write the expression for the frequency response in various ways. So here, I've just used Euler's identity to split it into a cosine and a sine, and that's straightforward enough. The sums are over infinite intervals. And we talked last time about how stability of the system -- bounded input, bounded output stability -- will guarantee that those summations are well defined. OK, now there's another name for this formula. We've called it the frequency response. But when you compute an H of omega using this formula, another way to say what you're doing is to say that you're taking the discrete-time Fourier transform of the sequence h, OK? So it's the discrete-time Fourier transform. Again, that's just terminology for now. We'll come to expand our view of it later. But we've called it the frequency response so far, because it describes how sinusoids or exponentials get to the output; but it's also referred to as the discrete-time Fourier transform of the unit sample response. So you've got some time signal -- it happens to be a unit sample response. You compute an object through this formula to get an H of omega. That's the DTFT, OK? Another thing we've already seen is that knowing that you have an LTI system, and that a cosine is a superposition of complex exponentials, you can use the result that we had so far to just describe what happens to a cosine when it goes through the system. So it's no longer a complex exponential. It's a real signal of the kind that we're more likely to work with. And we've seen that the only thing that happens is the cosine that went in gets scaled in amplitude by an extra factor, which is the magnitude of the frequency response. And whatever phase it had, you get an extra phase, which is the angle of the frequency response. So actually, if you had an LTI system, this is a good way to measure the frequency response in the lab. What you do is you take your system there, excite it with a sinusoid. In continuous-time, we know we can do that with an oscillator. In discrete-time, you generate a sequence like this. And then look to see what comes out of the system and express it in this form, and you'll label the scale factor there as the magnitude of the frequency response and the extra phase angle as the phase angle of the frequency response. So it makes for a very systematic way to probe a system and get at the frequency response. Again, a point that I've made before: when you do this probing, you only need to vary big omega over the range minus pi to pi. So when we write a frequency response, because H of omega is periodic with period 2 pi, we only need to plot H of omega -- the magnitude, that would be one plot, and the angle would be another plot -- from minus pi to pi. Because outside of that range -- well, you can see it already with the cosine. If I added an integer multiple of 2 pi to omega 0, I'm going to get an integer multiple of 2 pi times n added into the argument of the cosine.
And I'm getting the same cosine back again. And the reason that's the case is that the n multiplying it here is an integer. In continuous-time, it doesn't work quite the same way: if I had a little omega 0 t and I added a multiple of 2 pi to little omega 0, I wouldn't get the same argument back again. What's different here is that n is an integer, so if I increase omega 0 by an integer multiple of 2 pi, I end up adding an integer multiple of 2 pi to the argument, and I'm back at the same cosine. So the frequency response for a discrete-time system always lives on the interval minus pi to pi. It repeats periodically outside of that, if you chose to look at some other omega. And I've said that already. You've actually heard the term frequency response in all sorts of settings, I'm sure. One setting in which it's used a lot is in describing, for instance, the performance characteristics of a loudspeaker. So people will tell you how good their loudspeaker is by showing you the frequency response of the speaker. And what they're doing is applying a sinusoidal voltage to the input and looking at the sound pressure that comes out. SPL here is sound pressure level. This is measured in dB, so it's actually a measurement of the ratio of the pressure that you hear under certain standardized conditions to a pressure which is taken as the lowest audible pressure to the ear. So there's a particular ratio there. So what they'll do is they'll feed the loudspeaker with 1 watt at 1,000 Hertz, so just a steady tone. And then, a meter away from the speaker in an anechoic chamber, they'll look to see what sound pressure they pick up on a specialized sensor -- a detector, a microphone basically -- and that number in dB is what they'll report. And typical speakers have values in that kind of range. Now, if you probe it at different frequencies, applying the same input voltage and looking at pressure, you'll get varying pressure depending on the frequency that you probe at. So this is the frequency response of the speaker: if you go too low in frequency, you don't get much of a response, and if you go too high in frequency, you don't get much of a response. Now, of course, when you use the speaker, you're not going to probe it with sines and cosines. You're actually going to put more complicated sounds in there. So what you're really interested in is how the speaker behaves for signals that are combinations of cosines. And again, we're using our model of the speaker as an LTI system. All bets are off if you drive your speaker so hard that you get distortion and exercise all the nonlinearities there, or burn it out. But if you're in a normal range, the speaker is acting linearly, and you can talk about its frequency response. And what you're really interested in is how the speaker responds to linear combinations of cosines. And all of these various signals can be thought of -- at least over reasonable time intervals -- as combinations of cosines appropriately chosen. So if you hit a particular key on the piano, you get a dominant note, but you'll also get harmonics of that. And that's what's going into your speaker. So knowing how an LTI system responds to cosines then puts you in a position to say how it responds to combinations of cosines, or signals that are combinations of cosines. So the other part of the story that we're going to get to -- maybe even by the end of this lecture -- is that we need a way to take a general signal and represent it as a combination of cosines.
And that's what we refer to as the spectral content of the signal. So when we talk of exposing the spectral content of a signal, as over here, what we're saying is we're going to show you what combination of cosines it takes to make up that signal. And once you figure that out, and you have the frequency response of your LTI system, you can say how your system responds to that signal. OK, so this theme runs through every stage of what happens, actually, in communication. Now, the example I've given you here is one that you would typically probe with a continuous-time oscillator in the lab. And so there are some connections that you might want to make between probing with a continuous-time signal and probing with a discrete-time sequence that comes from sampling that signal. But I'm going to leave you to look at that later, or leave your recitation instructors to pick that up, or leave you to work it out if you have a homework problem that needs you to think about how continuous-time maps to discrete-time. But the basic point is, the actual, physical speaker you might probe with a cosine in continuous-time; if you're generating that signal from a computer, what you'd actually be sending to your amplifier is a sequence of numbers. And the frequency of the numbers that you send is related to the frequency of the continuous-time cosine that you want in a very particular way. So I'll leave you to chew on that. But I don't want to spend time on that now. OK, so let's spend a little time talking about the properties of the frequency response, now that we know why we would use it. And this I've already said. Some of this, by the way, you may have seen in recitation, but it doesn't hurt to repeat. The frequency response at frequency 0 -- well, we've said if you've got e to the j omega 0 n at some frequency omega sub 0 going into an LTI system with frequency response H of omega -- all right, I'm leaving out lots of words, but frequency response doesn't make sense unless you have an LTI system. OK, so for what kind of input signal would you be looking at omega equals 0? DC, right -- a constant signal. It's what the electrical engineers call DC, which used to stand for direct current but has now come to mean constant. When we say a DC input, we just mean a constant input. So if I pick omega sub 0 to be 0, then e to the j 0 n -- well, that's just 1 for all time. And so I'm feeding the system with a constant. That's the slowest possible input that you can find. It's a zero-frequency input. And the amount that it's scaled by is the number that you're going to plot here. So whatever value you get is going to end up being plotted there at omega equals 0. And let's see, do we believe this other statement -- H of 0? It's just a substitution in here. If I put omega equals 0, it's a summation of all the h of m's. But there's another way to think of it also. If you want to think of it in the time domain -- let's see. I have an LTI system. It's got some unit sample response. And I'm feeding it with an input that's constant for all time. It's actually constant at the value 1 for all time. If you're thinking in terms of convolution -- the flip, slide, and dot product picture -- what is the output at any time here? You're going to draw out your unit sample response.
You're going to draw out your input, which is 1 for all time, take one of them and flip it over, slide it the appropriate amount over the other, and then take the dot product. Well, for every shift of this flipped and slid input, you're going to pick up all of the unit sample response. So at every time, you're going to get the summation of the h of m's at the output. So if you fed in an input that was DC at the value 1, this is what the output will be at all times. You can see that from the convolution picture. So what's the frequency response at frequency 0? What's the ratio of the output amplitude to the input amplitude? This is for all time. The input amplitude was 1 at each time. The output amplitude was that summation. And so that's the DC gain of the system, or the frequency response at 0. So H of 0 is what's referred to as the DC gain. What about high frequency? So what's the highest frequency variation that you can have with a discrete-time sequence? I've got a sequence here at the input. We've seen what the slowest variation possible is: it's something that's constant. If you're talking about a discrete-time signal that can only take values at integer times, what's the highest frequency variation that you can get? Just something that alternates in sign, right? So you're going to have -- OK, so is this of the form e to the j omega 0 n for some omega 0? Is that a signal of exponential form? Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, if you take omega 0 equal to pi, this is just e to the j pi n. In fact, you can take plus or minus pi. So when you probe the system with an input of this type, which is the highest frequency input that you can probe with, what you're really probing is the frequency response at this point. You get the same value at minus pi or pi. So these are the two extremes. And then the rest of the frequency response lies in between, for other sorts of inputs. Now, do you believe this other identity that I have up there? Well, you can go back to the definition, set big omega equal to pi or minus pi, and you'll get an alternating sequence of 1's and minus 1's here. And so that verifies the identity. Or you can think in terms of convolution. If I convolve a sequence like this with a system with this unit sample response, what comes out at every time is an alternating sum of the h of m's, except the sign flips from one time to the next. And so, again, you can verify in the time domain that that's actually the high frequency gain of the system, OK?
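Here's a quick numerical check of both endpoint claims, done entirely in the time domain by convolution -- a sketch with a made-up unit sample response.

```python
# A time-domain check of the two endpoint values, with a made-up h:
# convolve with a constant input (DC) and with the fastest alternating
# input, and compare against H(0) = sum h[m] and H(pi) = sum (-1)^m h[m].
import numpy as np

h = np.array([0.25, 0.5, 0.25])              # made-up unit sample response
n = np.arange(50)

dc_in = np.ones(len(n))                      # x[n] = 1 for all n
hf_in = (-1.0) ** n                          # x[n] = (-1)^n = e^(j*pi*n)

dc_out = np.convolve(dc_in, h)[len(h):-len(h)]   # keep steady-state samples
hf_out = np.convolve(hf_in, h)[len(h):-len(h)]

m = np.arange(len(h))
print(dc_out[0], h.sum())                        # 1.0 and 1.0: the DC gain
print(np.abs(hf_out).max(), abs(((-1.0) ** m * h).sum()))
# Both essentially 0 for this h: the high-frequency gain H(pi).
```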
Now, there are a bunch of other symmetry properties of the frequency response that you've seen, at least in some of the recitations. And the easiest way to see these symmetry properties is to go back to the rewriting I did of the frequency response in terms of sines and cosines. The first term here I'm calling C of omega. The second term, the summation, I'm calling S of omega. So where would a statement like this come from? Let's see. For real h of n -- and that's the only kind of h of n we're going to worry about in general; we're going to talk about systems with real unit sample responses -- if h is real, why would it be true that the real part of the frequency response is an even function of frequency? Well, the real part of the frequency response is this term, because the other term is the imaginary part. And if I change big omega to minus omega, the cosine doesn't change. It's the same. And therefore, the real part is even, OK? So the real part of the frequency response is an even function of omega. The imaginary part, which is minus S of omega -- well, if I change omega to minus omega, I flip the sign. So that's an odd function of omega, and so on. So you can go through these properties. Whenever you're stuck trying to figure out a property, this is the expression to go back to. So rewrite the basic definition in this form, and you'll understand a lot of this. And again, you'll get practice in recitation if you haven't done that already. Another important property that you encounter when you go from the time domain to the frequency domain -- so remember, in the time domain, we said that if you have an input here, you convolve that input with h1 to get the output at the intermediate point? OK, so here's h1. If I call the output at the intermediate point w, this is x. And then I go into a second system, h2, and here's y. OK, well, w is equal to h1 convolved with x. And y equals h2 convolved with w. So that's this. But I can put the parentheses any way I like for convolution, right? We've already established that property. So the net effect of the cascade of systems is the effect you'd get by having a single LTI system with this unit sample response. Now, if I think in the frequency domain -- if I put e to the j omega n here, then what comes out at the intermediate point? At the intermediate point, I get H1 of omega times e to the j omega n. All right, so that's w of n. But this is, again, an input of exponential form. So what comes out of the second system when I put this input into it? So what's y of n going to be? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, so it's basically the second system's frequency response scaling the exponential that went into the second system, which is this. So the net effect when I put e to the j omega n in at the first spot is that at the output, I get the same e to the j omega n, but scaled by the product of the two frequency responses. So the nice thing here is that when I'm describing a cascade of two systems, if I describe the net effect in the time domain, I've got to do a convolution of the two unit sample responses. If I think of it in the frequency domain, I just have to take the product of the individual frequency responses. So the key observation here is that convolution in the time domain maps to multiplication in the frequency domain. So if I wanted the DTFT of the result of a convolution, I can find it by just multiplying the individual DTFTs, all right? So convolution in time maps to multiplication in frequency. And this actually makes design much easier, because we're often cascading systems in this form. And if you think in terms of frequency, you can track a frequency component through a cascade of such systems, just focusing on the frequency response of each system as you go.
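Here's a small numerical sketch of that key fact -- the DTFT of a convolution equals the product of the DTFTs -- using two made-up unit sample responses (one of them happens to be the echo channel coming up next).

```python
# A sketch verifying that convolution in time maps to multiplication in
# frequency, for two made-up unit sample responses.
import numpy as np

def dtft(h, omegas):
    m = np.arange(len(h))
    return np.array([np.sum(h * np.exp(-1j * w * m)) for w in omegas])

h1 = np.array([1.0, 0.8])                    # the echo channel coming up
h2 = np.array([0.5, 0.25, 0.125])            # some other made-up system

omegas = np.linspace(-np.pi, np.pi, 201)
lhs = dtft(np.convolve(h1, h2), omegas)      # DTFT of the cascade
rhs = dtft(h1, omegas) * dtft(h2, omegas)    # product of the two DTFTs
print(np.allclose(lhs, rhs))                 # True
```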
So here's an example. Suppose we have a channel -- let's say that it's a channel with an echo. Let me actually draw it out here. So I've got a channel here which I'm modeling as LTI. And if I put in a unit sample function here -- so this has the value 1 at time 0 -- suppose the channel is one that has some echoing in it. So what I actually get out for this input is the same delta of n, plus 0.8 delta of n minus 1. So there is a later arrival, scaled by something, which corresponds to the echo. So this must be the unit sample response of the channel. What's the frequency response of the channel? So if I call this h1 of n, what is H1 of big omega? I don't have it up there, do I? No. Anyone? Just from the definition. Is the problem here that you don't quite see what h1 of 0 is, h1 of 1, h1 of 2, and so on? If I asked you to plot this out, how would you plot it? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Is it 1.8? Where? Where would you put the 1 -- just over there? Oh, you're talking about the frequency response. Let's get the unit sample response first. Let's sketch this. What's your sketch of that? AUDIENCE: At 0, it's 1. PROFESSOR: At 0, it's 1. AUDIENCE: [INAUDIBLE] PROFESSOR: OK, 0.8 at 1, and 0 everywhere else -- that's the unit sample response. OK, so what's the frequency response? Well, we just plug it into the definition. All the h's except the ones at arguments 0 and 1 are equal to 0. So this is going to be 1 plus 0.8 e to the minus j omega. Is that what you said? It was not quite what you said, right? What you said was the number I'd get at omega equals 0 -- the DC gain of the system. But the frequency response is that. Let's just work backwards here. So the frequency response is that. Or if I wanted to write it -- we're going from that board to here -- H1 of omega, I can write it as a real plus an imaginary part. So it would be 1 plus 0.8 cosine omega -- this would be the real part -- and then I have minus j 0.8 sine omega. OK, so that's the frequency response: some complex number with a real part and an imaginary part. OK, and if I asked you to give it to me in magnitude and angle form, you could do that. It's just rearranging things. The magnitude would be the square root of the sum of squares of these two pieces. And the angle would be the arctan of the ratio. So I assume that you know how to do all of that. And what you find -- actually, you can see it in these expressions already. Well, I didn't quite claim this earlier, but the magnitude of the frequency response will always be an even function of frequency. And the phase will always be an odd function of frequency. So if you're drawing the results of a computation like this and you find that you don't have an even function for the magnitude, then you know you've done something wrong. So I'm going to sketch something here which I'm not pretending is the magnitude of that. I just want you to get the idea of what I mean by even. It's going to be something that's symmetric in omega. This is the magnitude. And then, if I did the phase, the phase is always going to be something that's an odd function of frequency. So if it's an odd function of frequency, what's the value of the phase at 0? It's got to go through 0, right? And so I might get -- well, what would it actually be? It would be some shape. I'm not pretending I have the right shape here, but it's going to have an odd symmetry. I'll leave you to figure out what it actually looks like. So that's the frequency response of this echo channel.
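Here's the echo channel's frequency response evaluated numerically -- a sketch that also checks the even-magnitude, odd-phase claims.

```python
# The echo channel's frequency response H1(Omega) = 1 + 0.8 e^(-j Omega),
# split into magnitude and angle, with checks that the magnitude is an
# even function of Omega and the phase an odd one.
import numpy as np

omegas = np.linspace(-np.pi, np.pi, 9)       # a symmetric frequency grid
H1 = 1 + 0.8 * np.exp(-1j * omegas)

mag = np.abs(H1)      # sqrt((1 + 0.8 cos w)^2 + (0.8 sin w)^2)
ang = np.angle(H1)    # arctan of the imaginary over the real part

print(np.allclose(mag, mag[::-1]))           # even: |H1(-w)| == |H1(w)|
print(np.allclose(ang, -ang[::-1]))          # odd: ang(-w) == -ang(w)
print(mag.max(), mag.min())                  # 1.8 at DC, 0.2 at +/- pi
```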
So here's what I want you to do now. At your receiver, build for me a filter that's going to undo the distortion that the echo has produced. What I'd like is an output, after you've done your filtering, that is exactly equal to the input. And my claim is you can do that with an LTI filter. How would you describe that LTI filter? What should that LTI filter be? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Right, OK, so if you wanted the output to be exactly equal to the input, no matter what the input was, you want an overall frequency response of 1. And the overall frequency response we know is the product of the two individual ones. And so we want H2 of omega times H1 of omega to be equal to 1. And therefore, H2 should be 1 over H1. So you can see here how things get a lot easier when you think in the frequency domain. If I had to do this in the time domain, I would have had to say h2 convolved with h1 has got to give me the unit sample function. And I'll give you h1; now you've got to figure out h2. Well, you've got to go and work the convolution picture backwards, which is doable for simple cases. But this is much simpler. So this shows that H2 should be 1 over H1. Seems like a reasonable way to go. And you can actually work the whole thing through. But there's a problem with this. And we've seen this in other settings as well, which is that something that works fine in the noise-free case doesn't work so well when you've got noise in your system. So look at what this receiver filter is doing. The receiver filter -- let's see, what is its magnitude? How does the magnitude of the receiver filter relate to the magnitude of the channel frequency response? This magnitude is the magnitude of 1 over H1. Is that the same as 1 over the magnitude of H1? Is that how complex numbers work? OK, right? So look what happens. Where the channel has a very low frequency response -- in other words, where the channel output is very low for a sinusoidal input at that frequency -- the receiver filter is going to have a very high magnitude. So the receiver filter is trying to boost up whatever signal it sees in a frequency range where the channel actually has very little output. So what happens if I have a bit of noise here where I'm receiving the signal? Well, it's going to be very badly exaggerated by the inverse filter. A little bit of noise will get accentuated at frequencies where the frequency response of the receiver filter is large. But that's precisely where the channel had a very low frequency response. And it's precisely where the channel output has nothing interesting for me. So my receiver filter ends up accentuating the noise. OK, so yet again, we see that these sorts of inversion operations may look nice on paper. But if you don't take account of what noise does, then you can run into trouble. And the picture is very transparent when you think in the frequency domain.
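Here's the same point numerically: a sketch of the inverse filter H2 = 1/H1 for the echo channel, showing the magnitude blow-up at exactly the frequencies where the channel is weakest.

```python
# A sketch of the inverse filter H2 = 1/H1 for the echo channel: its
# magnitude is 1/|H1|, so it is largest exactly where the channel's
# response is smallest -- and that's where it amplifies noise.
import numpy as np

omegas = np.linspace(-np.pi, np.pi, 201)
H1 = 1 + 0.8 * np.exp(-1j * omegas)
H2 = 1 / H1                                  # the echo-canceling filter

print(np.allclose(np.abs(H2), 1 / np.abs(H1)))  # yes, |1/H1| = 1/|H1|
print(np.abs(H2).max())   # 5.0, at Omega = +/- pi where |H1| = 0.2
# Noise near Omega = pi gets boosted 5x, precisely where the channel
# delivers almost no signal.
```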
OK, some more practice with filters and cascades -- I think I'm going to leave you to work through this in recitation, perhaps. So I'll leave it on the slides. But let's go to the design of filters. So now, we've seen one example of trying to design a filter -- the receiver filter -- to undo the distortion of the channel. Here's another design problem that you run into all the time, which is that you see a signal that's got a whole bunch of frequencies mixed up in it, and you want to exclude some of them. So maybe you're looking for an audio signal. You know that the combinations of sinusoids that make up an audio signal are unlikely to go above -- whatever you want. Pick your number: 10 kilohertz, 20 kilohertz. And so you want to exclude frequencies outside of that range. So you're very often in the position of trying to build what's called an ideal low pass filter. So here's an ideal low pass filter. I'd like you to build for me a filter that passes all frequencies in some range without distortion, and that completely kills everything outside. So let me call this the cutoff frequency. So that's the H of omega I want. And now my question is, how are you going to build this filter? I want you to give me the unit sample response that goes with it. And you see a hint over here. But can you tell me how you might go about that? Not so obvious, right? Because we've specified the filter characteristic in the frequency domain, and now we want to find the h's that go with it. So what we're really looking for is a formula that will give us the time domain signal in terms of the frequency domain. We want to invert this somehow. So what we're looking for is what's called the inverse DTFT. And actually, if you've done Fourier series, you've seen this trick before. Because really, we're not far from Fourier series here. It's just that the domains are a little different, so maybe you don't recognize it. Here, we've got a periodic something expressed as a combination of sines and cosines, or as a combination of exponentials. And now, we want to invert that, OK? If you thought of these as Fourier coefficients for some periodic signal, and then went and looked up whatever book you use for Fourier series, you'd get the formula. Because we're just trying to extract the Fourier coefficients for this periodic signal. But you can actually do it from scratch. So think of multiplying both sides of this by, let's say, e to the j omega n -- OK, so I'm going to multiply both sides. So I've got e to the minus j omega times m minus n now, right? And I'm going to then integrate both sides over an interval of length 2 pi -- any contiguous interval of length 2 pi. It actually doesn't matter which one, because of the periodicity. So I'll take any interval of length 2 pi, and I integrate both sides. And I'll assume that I can push this integral inside the sum. I'll assume my signal is well-behaved enough for that. So here's what I end up getting. On this right hand side, I get a summation over m of h of m times this integral. Oh, I should put a d omega there. Sorry. I've gotten casual with my integration. So on this side, I have this integral. On this side, I have that integral. And if you work through this, out of all this infinity of terms, there's only one term that survives. Because any term in which m is different from n will still have this exponential sitting here. This exponential is like a cosine plus a j sine, or a cosine minus a j sine. You're integrating it over an interval of 2 pi. So any term here that has the exponential, or has the sine or cosine in it, will disappear under the integration. The only term that survives is the one where m equals n. And so what you discover is that this is 2 pi times h of n when you're all done. I'm not going through the details here. So here is the formula we wanted for the inverse DTFT. Here's the inverse DTFT, OK? I've forgotten my colored chalk today, but that'll do. So if I gave you a filter characteristic like this and asked you to find the unit sample response of the filter that went with it, you would just have to plug in the frequency response characteristic that I gave you and solve for the h's. I think I have a bunch of this on the slides. This is what we just went through. So let's do this now for the ideal low pass filter. What is it that we do? I've got the formula that I just derived for you there. H is equal to 1 in the pass band of the filter, and it's 0 outside of that.
So I set H equal to 1 in the pass band of the filter, which is from minus omega c to plus omega c, and the rest of it doesn't contribute anything. And then, I just work out this integral. And I've actually got to do it in two pieces. For n not equal to 0, this is what I get. For n equals 0, this is what I get. If n were continuous, actually, you'd say that this is the same expression as here, because you'd just use L'Hopital's rule and get from here to here. But since n is an integer, we've got to be a little careful how we write it, OK? You can't really say you're going to use L'Hopital's rule to see what this is in the limit of n going to 0, because n takes integer values. But if you work it out from scratch for n equals 0, you'll see that you get a formula that's consistent with using L'Hopital's rule. OK, so this is a function that we'll see again and again when we do filtering of this type, and it's referred to as a sinc function. So it's not S-I-N, but S-I-N-C. And if you plot it out, this is what it is. So it's got the oscillation that comes from the sine, but it's got a reduction in amplitude that comes from the 1 over n. So it's a signal that falls off as 1 over n with this kind of a characteristic. Do you think it's a bounded input, bounded output stable system? What's your hunch? Remember what it takes for a system to be stable? The unit sample response has to be absolutely summable. So if you take the absolute values here and sum from minus infinity to infinity, you want to get something finite to call this stable. Well, since this only falls off as 1 over n, it turns out to not be stable. So the ideal low pass filter is actually an extreme idealization that is not bounded-input, bounded-output stable -- but it's close. Just to go back: when I showed you this filter characteristic here, to give the cheap version of a low pass filter, what we actually did was take the sinc function and truncate it to a finite interval. And what happens when you truncate it to a finite interval is that instead of the sharp, box-like shape for the frequency response, you get an approximation to it -- not exactly the ideal low pass filter, but maybe good enough. The other thing that you might notice if you're looking carefully is that I had a sinc that was centered around 0 and even. And now, I seem to have a causal version of the filter. And I think I'll leave you in recitation to figure out how you can go from the centered, non-causal filter to a causal filter, and what that does to the phase and to the frequency response magnitude. So basically, I'll leave you to go through the details here. But the key idea here is the inverse DTFT.
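Here's that result worked numerically as a sketch, for a made-up cutoff. One caution: numpy's sinc is the normalized sin(pi x)/(pi x), so it has to be rescaled to give the sin(omega_c n)/(pi n) form derived above.

```python
# The inverse DTFT of the ideal low pass filter, worked numerically for a
# made-up cutoff. numpy's sinc is the normalized sin(pi x)/(pi x) and
# handles x = 0, so it's rescaled to give sin(omega_c n)/(pi n).
import numpy as np

omega_c = np.pi / 4                          # made-up cutoff frequency

def ideal_lpf(n):
    n = np.asarray(n, dtype=float)
    return (omega_c / np.pi) * np.sinc(omega_c * n / np.pi)

n = np.arange(-10, 11)
h = ideal_lpf(n)                             # falls off like 1/n, so it is
                                             # not absolutely summable: the
                                             # ideal filter isn't BIBO stable
print(h[n == 0])                             # [0.25] = omega_c / pi

# Truncating h to a finite window like this gives the practical filter:
# its DTFT approximates the sharp box, with ripples near the cutoff.
```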
So now, I want to take a slightly different perspective on this formula that we derived. We said we've got a frequency response, which we're calling the DTFT of the signal h of n -- the unit sample response. We've got an inverse formula that allows us to get the time signal from the frequency response. But here's yet another way of looking at what this formula is telling us. This formula is saying: I can think of h of n as being made up of a whole bunch of complex exponentials. So you see, this is what we were looking for. We were looking for a way to take a signal and figure out its spectral content. We want to know what complex exponentials, or what sinusoids, it takes to make that signal. Well, we have a hint of that in this expression, because this is saying: take the time domain signal; I can think of it as being a combination. Now, this is not a finite combination, it's a continuum. But it is a combination of exponentials of the type that we know how to work with. So this is actually giving us a spectral decomposition of the unit sample response, where the amount of e to the j omega n that it takes to make up the signal is told to me by H of omega. So the H of omegas are sort of the weights that we use to combine these exponentials to get the signal. So the idea for a spectral decomposition, or for describing the spectral nature of a signal, is actually sitting there. All we have to do is say: we'll use the same formulas, but let's no longer restrict them to the unit sample response of a system and the frequency response of that system. Let's use them for any signal -- the same formulas, but now for any signal x of n. Give me any signal x of n, and I'll compute for you this object, which is the DTFT of that signal, just the same way I did for a frequency response. So I'll compute the X of omega for you. What's the significance of X of omega? Well, it tells me in what combination I have to weight the e to the j omega n's to construct the signal. So X of big omega, the DTFT, tells me what the spectral content of the signal is. If I plot that as a function of frequency, it tells me how to assemble the signal out of sums of sines and cosines. So let's see here. More specifically, what I would say is that the DTFT at omega sub 0, times d omega, is the spectral content of the signal in that particular sliver of frequencies. And if I add up all those components over all frequencies in this 2 pi range, I'll get the original signal that I'm interested in. So what we'll do next time is work with this idea to see how it lets us think about signals through systems, and how it enables us to do filtering in a systematic way. All right, let's leave it at that for now.
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
20_Network_routing_with_failures.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we're going to talk about routing protocols. And today we're going to talk about how routing protocols handle failures. So I want to bring everybody back onto the same page with respect to where we are in the story. In terms of routing protocols, if you imagine a network topology like this, and let's say that links have costs associated with them -- I'm just going to make up some numbers here -- we studied two classes of routing protocols. And we looked at these in the absence of any failures. The first is the distance vector protocol, or more generally, vector protocols, where, in an advertisement sent from one node to another, each node sends a subset of its routing table. In other words, it sends two columns from its routing table. Recall that the routing table at every node contains a destination, a route -- which is a link, the name of the link -- and a cost. And in a distance vector protocol, you send these two columns. You send these (destination, cost) tuples for all of the destinations that you know. And that's what spreads out to all of the other nodes. In contrast -- so this is distance vector -- in a link state protocol, what you're sending to all of the other nodes is information about all of the links that you have. In particular, you send information about each neighbor in your topology together with the link cost to that neighbor, whereas here it's the cost of the route to the destination that you send. In the distance vector protocol, the computation of the routes is distributed. Every node computes and updates its routing table using the Bellman-Ford update step, which essentially updates the route to a destination if you find a better route to the destination, or if you find that the route you currently have has a changed cost, in which case you update your routing table. In a link state protocol, the computation is not distributed. What is distributed is the process of flooding this information about the local links. That's why it's called a link state protocol. The computation of the routes themselves is centralized: each node just runs a shortest path computation -- for example, Dijkstra's shortest path computation. And in a link state protocol, as long as all of the nodes run exactly the same computation -- in other words, they all try to minimize or optimize the same metric -- then you're guaranteed that all of the nodes end up computing the correct routes, and that when you send a packet from one node, it will reach the destination, assuming there's a path in the network. And in the problem set -- maybe not in the problems; certainly at the back of the chapter -- there are problems on what happens if some nodes run one protocol and other nodes run another. For example, imagine a routing protocol where one of the nodes is computing minimum cost routes and another node is computing shortest paths in terms of the number of hops, not costs. If you have that, you might end up in a situation where the routing doesn't work correctly. But assuming that all of the nodes agree on the same optimization in a link state protocol, they'll all get the correct answer. And similarly, in a distance vector protocol, the update rule can do almost anything: as long as the nodes perform some sort of update to the routing table entry consistent with the information that they hear, the routing will work.
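Before turning to failures, here's a minimal sketch of that distance vector integration (Bellman-Ford) step, with illustrative names: the rule adopts a cheaper route, and also updates the cost when the advertisement arrives over the link already in use.

```python
# A sketch of the distance vector integration (Bellman-Ford) step, with
# illustrative names. `table` maps destination -> (link, cost); an
# advertisement heard over `link` maps destination -> the neighbor's cost.
INFINITY = float("inf")

def integrate(table, link, link_cost, advertisement):
    for dest, neighbor_cost in advertisement.items():
        new_cost = link_cost + neighbor_cost
        current_link, current_cost = table.get(dest, (None, INFINITY))
        if new_cost < current_cost or current_link == link:
            # Found a better route, or the route we already use changed cost.
            table[dest] = (link, new_cost)

table = {"D": ("L0", 3)}
integrate(table, "L1", 2, {"D": 6})   # 2 + 6 = 8, worse than 3: keep L0
integrate(table, "L2", 1, {"D": 1})   # 1 + 1 = 2, better than 3: switch
print(table)                          # {'D': ('L2', 2)}
```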
Now, what happens when there are failures? And how does it work? So let's say that you run this protocol on this topology, and you give it enough time. Once you hear all of the advertisements and compute the routes at the different nodes, if you look at this node over here, C, for destination D, would have a route -- let's call this link L0. It will have the route L0, and the cost of that route would be 3. And similarly at B here, once it converges, for destination D -- let's imagine this link is called L0 as well, at node B -- you might have L0, and the cost would be 4, and similarly at the other nodes. Now, let's say what happens is that some sort of a failure occurs. And a variety of failures could occur that cause the routing to get screwed up. One of the failures -- the easiest form of failure -- is that a packet could be lost. In particular, an advertisement could be lost. The second thing that could happen is that a link could fail. I mean, if it's a wire, maybe a backhoe runs over it, which is actually more common than you think. In fact, you could have undersea cables where sharks bite them and they get destroyed. I mean, lots of things could happen. So links could fail. And the third thing that could happen, which is more common than you might think, is that entire switches or nodes could fail. Since we actually don't know how to write bug-free software, software bugs may cause things to fail. They may cause things to crash. In fact, they may cause things to fail in mysterious ways where it looks like nothing has failed, but in fact, there's a fault in the software causing you to send bad advertisements. And for this course, we won't worry about that. We'll actually worry about only a class of failures that we think of as fail-stop. What I mean by that is: if it fails, it just stops, as opposed to failing and then giving you wrong information, which is actually a lot harder to deal with. It's much better for a node to fail and just stop than to fail and pretend that it's correct and send you bad information. That's a lot harder to deal with. We'll worry about that later, not in 6.02. Anyway, these things could happen. So let's concretely assume that you have this topology here, and what happens is this link fails. Now, if you did nothing and you had that link fail, what would happen? Let's look at it node by node. Assuming that C has some way of discovering that this fault has happened, C would end up with no route to the destination. But if it doesn't discover that the fault has happened, it still has this route to the destination -- and if it sends packets on that link, they won't reach the other side. Packets would be lost. But assuming that it has a way of discovering that this link has failed, it has no route to the destination. Now, B has a route to the destination. It has this link. But in fact, the next advertisement from C, if C were to make another advertisement, would tell it that C had no route to the destination. But if you did nothing in the protocol, B thinks it has a route to the destination.
But in fact, that route leads to C, and it's actually a dead end, because the packet reaches C and C doesn't know what to do with it. It just drops the packet. The technical term for this is a dead end. You send packets, you think you're reaching the destination, but they're actually getting dropped. What about S? Does S have a route to the destination? Well, S doesn't have a working route to the destination, either, because the right route to the destination would probably have been this link over here -- the cost was 2 plus 1 plus 3. S would have a route, but the route would lead to a dead end at C. And in fact, in this particular example, no node would have a route that actually worked, in that the route wouldn't correspond to an actual working path. But if you look at the picture, clearly there are other ways to get to the destination. I mean, we designed this topology presumably because we wanted some sort of redundancy. So if this link failed, what you would like is for C to use one of the other paths. C might use this path or C might use this path -- those are the two possible paths it could use. And similarly, S should use that path, and so forth. So what you want is a routing protocol that converges to the new correct answer, assuming there is a new correct answer. And if there's no new correct answer, it converges to whatever the best possible answer is. So to some destinations, you could have a route, and to some destinations, you don't have a route. What you want is: as long as there is a connected path between a source and a destination, you would like that source to end up with a route that corresponds to some good working path -- in particular, one that converges to the new minimum cost working path between the source and the destination. So that's the statement of the problem. And we're going to solve that problem today for both distance vector and link state. And interestingly, we're going to use the same idea in both cases. And the idea is: just like we built a redundant topology by having alternate paths between places, we're just going to repeat advertisements, and we're going to repeat the process of processing these advertisements. It's a very simple idea. And the general plan has three steps. The first step in the plan is that every node is responsible for periodically checking the health of its neighbors. That's the step which we're going to call neighbor liveness. And the protocol we're going to use for that is called the hello protocol. It's a very, very simple protocol. I'll describe it in a moment. And we're going to use this idea that every node is responsible for checking whether each of its neighbors is alive. And if it determines that a neighbor is not alive, it assumes the neighbor is dead and removes it from various tables and data structures and so on. And in fact, this fail-stop assumption is pretty crucial for us, because the assumption is that when a failure of a node occurs, the node doesn't respond. If a failure of a link occurs, the assumption, as well, is that the link stops responding: you don't get to send packets or receive packets over that link. And for now, we're going to assume that every link is bidirectional. So you send packets in both directions.
In reality, there are unidirectional network links, and you have to deal with the problem differently. We're not going to worry about that. So there is a protocol called the hello protocol that runs to detect whether your neighbor is alive or not. The second step in our answer is to make the advertisements periodic. And the third step is what you do when you receive an advertisement. You collect a bunch of these advertisements that you receive from various neighbors -- in the link state protocol, it's these link state advertisements; in the distance vector protocol, it's these distance vector tuple advertisements -- and then you run a periodic integration process. So if you look at it with a timeline, every node does these two steps periodically and asynchronously -- in other words, independent of the other nodes. You don't have to synchronize the clocks; every node has its own clock. From time to time, it sends an advertisement. In distance vector, it just sends these two columns to its neighbors. In link state, it just sends out its link state information, and the flooding process does the rest. And then from time to time, there's this integration of the advertisements that happens, et cetera. Now, I've shown this picture here with the integrations happening interspersed with the advertisements. That doesn't actually have to be the case -- you could do them pretty much arbitrarily, as long as you do them periodically. The beautiful part of these protocols is that with every node asynchronously running these advertisement steps and these integration steps, as long as they do this periodically, in the end you get a property called eventual convergence. What that means is: assume you have all sorts of failures -- any pattern of packet losses, link failures, and switch failures -- and then you freeze the system and assume that no more failures happen. Eventual convergence means that in some finite time, all of the nodes in the network will converge to correct routing state. That is, all of the nodes will end up with an answer that's consistent with what you're trying to optimize -- for example, minimum-cost paths to all the destinations. Now, proving that under an arbitrary model, where these advertisement and integration steps are all asynchronous and done at random times, is a little involved, and we're not going to attempt it in this course. The notes talk a little bit about how you get eventual convergence when you assume that all the nodes are running periodic advertisement steps interspersed with integration steps. The proof is really not that important. What's more important is for you to understand the intuition behind why it works. So I'll do that with some examples here. I also have to tell you what the hello protocol is; I'll get to that. But for now, just assume it's a module that tells you whether a neighbor is alive or not. So is this plan clear to everybody? It's just the same idea, except every node's doing this periodically. In practice, you might do this every 30 seconds or every three minutes or something like that. Of course, the longer the time between advertisements, the longer it's going to take for the protocol to converge after a failure or a set of failures. And the shorter the time, the quicker it converges, but you end up doing a lot more work.
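To make the plan concrete, here is a rough sketch of the periodic loop each node runs; the interval, the jitter, and the method names are assumptions for illustration, not part of any protocol specification.

```python
# A sketch of the asynchronous periodic loop at each node; the node object
# and its methods are hypothetical names, and the timing is illustrative.
import random
import time

ADVERT_INTERVAL = 30.0  # e.g. one advertisement roughly every 30 seconds

def run_node(node):
    while True:
        node.send_advertisement_to_neighbors()   # step 2: periodic adverts
        node.integrate_pending_advertisements()  # step 3: periodic integration
        # No clock synchronization is needed: each node sleeps on its own
        # schedule, and eventual convergence holds for any interleaving.
        time.sleep(ADVERT_INTERVAL * random.uniform(0.9, 1.1))
```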
And moreover, in practice, many failures are transient. A link may fail for a few seconds and then come back up. So in practice it's not that useful to react very, very quickly. It's important to converge quickly once you start the convergence process, but deciding that a neighbor is alive or dead on the timescale of a few packets is sometimes too fast, because sometimes failures last for very little time and then go away. In the meantime, you've done all this work to converge to a new routing state, and then when the link comes back up, you're going to do more work to come back to the old answer. You may as well have just been a little lazy. So deciding these times is tricky, and there's no real systematic way of doing it in practice. The trade-off is usually between how quickly you wish to converge and how much work you're willing to expend in making that convergence happen. So is the plan clear? It's just the same protocol, except we're going to do this periodically. OK. So the first step is this neighbor liveness, or the hello protocol. That protocol is actually very easy. Every node has a set of links coming out of it, with neighbors at the other end. The problem is that the node needs to decide which of these links is working or not working, and which of the neighbors are still there versus not there. The way the protocol works is that every node in the system -- let's call these nodes A, B, C -- periodically sends out on each of its links, only to its neighbors, a packet called a hello packet. The hello packet usually carries an incrementing sequence number. The idea is now very, very simple. The hello may be sent periodically, say, every 10 seconds. If a node finds that it hasn't heard from one of its neighbors in some time -- that is, perhaps three or four hello packets are missing in a row -- it just decides that that neighbor is dead. It's a very simple idea: you send out hello packets periodically, and k successive missing hello packets imply that the neighbor from which those packets are missing is dead. Now, in response, all of the routes that this node had that went via that neighbor, via that link, are eliminated from the routing table. You could do that either by simply removing the entry from the routing table, or by keeping the entry in the routing table but replacing the cost, for those destinations, from whatever the value was to infinity. And it's probably a better idea to replace it with infinity. I'm not exactly sure why; that's kind of what most people do. I think the reason is that you'd like to know that that destination exists in the network, and then when a later route arrives, you can fix it. But you could just remove it, as well. The other thing that happens, if you were in a link state protocol, is that in the next advertisement of the link state, you would simply eliminate that link and that neighbor altogether. You would not advertise that link and that neighbor as existing anymore. And when that link state advertisement floods through the network, all of the other nodes, through the flooding process, would determine that that node has gone away and that link has gone away.
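Here is a minimal sketch of the hello-protocol bookkeeping just described, assuming each node records the time of the last hello heard on each link; the constants follow the numbers used in lecture, and the table layouts are illustrative assumptions.

```python
# A sketch of hello-based failure detection; HELLO_INTERVAL and K follow the
# lecture's examples, and the data structures are illustrative assumptions.
HELLO_INTERVAL = 10.0  # seconds between hello packets
K = 3                  # k successive missing hellos => declare neighbor dead
INFINITY = 16

def check_neighbors(now, last_hello_time, routing_table):
    for link, last_heard in last_hello_time.items():
        if now - last_heard > K * HELLO_INTERVAL:
            # Neighbor presumed dead: poison every route using this link,
            # rather than deleting it, so later advertisements can repair it.
            for dest, (route_link, cost) in list(routing_table.items()):
                if route_link == link:
                    routing_table[dest] = (route_link, INFINITY)
```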
And then when they run Dijkstra's algorithm again and recompute the routes, they will no longer assume that that link exists, and they may find new routes to the destination. So that's what the hello protocol does. And how you pick k is, again, the same trade-off: it depends on how quickly you want to react to a real failure, and picking it is difficult. For example, if you were on a wireless network where the normal packet loss probability might be 10% -- something high -- then waiting for a larger number of successive missing packets is a good idea, because just because a packet or two is lost doesn't mean the link has failed. On the other hand, if you were running on a highly reliable link in terms of packet loss -- say, some dedicated optical link where the packet loss rate is one part in a million -- then a single packet missing, or two packets missing, would be a good indication that the link has actually failed, or that the node at the other end of the link has failed, and therefore k could be small. So again, it totally depends on the actual system context and the normal packet loss rates, because what you're trying to do is make sure you react to real failure, not simply to packet loss. But there's really no way to tell the difference between a link that really has failed and a link with a high packet loss rate. It's a heuristic. And in fact, there's really no way to tell between a node that has actually failed and gone away, and a node that's just heavily overloaded and is extremely slow in responding. There's no way to tell. So these are all heuristics that you have to work with to try to solve the problem. Sometimes you may get it wrong: you may declare a link to have failed when, in fact, it's still fine. But that's life, and you just have to deal with it. So is the story clear so far as to how we deal with routing and a failure? We're going to apply that to this picture, and you'll find that the answer will work. Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Right. Let me repeat this. First of all, the node can definitively assume only that the link has failed; it doesn't really know whether the node at the other end has failed. That node may still be alive, because it may well be that there is a path like that, and this node wants to find that route via A to that destination. So what it does is really two things. The first thing is that it may have routes in its routing table going through that link. This link is now considered dead, and therefore it should remove those routes. And then in subsequent advertisements, it should make sure that the cost to that destination is infinity -- which is why you would remove the route and replace the cost with infinity, so that you tell the other guys: previously I told you I could get to B with a cost of five, but really, now it's infinity. The second thing that's done, in the link state protocol, is that you no longer advertise that link. So really, the answer to your question is: it assumes that the link has failed; it makes no determination about the node. Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. A good protocol -- and this will be tested in your lab eight, in p-set eight -- is that if a link fails and eventually comes back up, we would like for you to actually find that answer. And this is an important requirement.
That's why all the stuff that's done in the background is done periodically. So if a link comes back up, you want to find the correct answer. Any other questions? OK. So let me apply this idea to this picture here. What happened here was this link failed. So C, at this point in time -- let's assume we're doing distance vector, this protocol here -- C is going to assume that this link has failed. And therefore, in its next advertisement, it tells all of the other guys that it no longer has a route to destination D. Previously, it would have sent-- you know what? Did I tell everybody that you send out the destination and the route in the advertisement? I may have done that. I meant the destination and the cost. So maybe I should call this the cost, and change that to route. So the destination and the cost are what's sent in the advertisements, and these two columns are in the routing table. But anyway, where previously we would have advertised D at a cost of three, we replace that now with D at a cost of infinity in our advertisements to our neighbors. So that's what C would advertise. And B, when it receives that, would find that the route it gets along here, which previously had a cost of four, is now replaced with a cost of infinity. So this routing table entry would go away, and it would be replaced with no route and a cost of infinity. And that's what would propagate. Now, these advertisements are done periodically. So what D is doing, of course, is sending out two advertisements, one this way and one that way. This one is not going to get through, because this link is no longer alive. But this advertisement works. So when A receives the next distance vector advertisement from D, it knows that that link is actually alive and it has a route going there. Now, this particular example is a little tricky, because A previously had two ways of getting to D: 4 plus 3 this way, or 7 that way. If it was previously using the first, then A would also have no route to the destination, and it would have to wait for this guy to send that route. Once that route was sent, A would have a valid route to the destination, and in A's next advertisement, it would send that route over to these two guys. So it would send out, saying that D is at cost 7, and it would do the same thing here: D is at cost 7, to C. Now C, when it receives the next advertisement saying that D is at cost 7, compares that route against its current route, which is now infinity, and replaces the routing table entry D, infinity with D via this link here, at a cost of 4 plus 7, which is 11. Let me call this link L1 -- so D, L1, and a cost of 11. And then, on its next advertisement, C would send that out to B, according to that advertisement schedule. And similarly here: S, when it receives this from A, would, after integrating the route to destination D -- which would have a cost of 1 plus 7, so 8 -- send out an advertisement this way with a cost of 8. And B, when it receives both of these things, would compare a cost of 8 via this link against a cost of 1 plus 4 plus 7, which is 12. It would find that 8 is smaller than 12, and therefore B would use this way of getting to the destination. Does that make sense? Yes?
AUDIENCE: [INAUDIBLE]. PROFESSOR: Well, it's receiving hello packets from all its neighbors. If a link is alive and a hello shows up, it processes it. And the moment the first hello shows up, it declares the link to be alive again. Finding that someone is alive is a lot easier than finding that they're dead, at least in networks. It's probably true in life, too. But it's certainly true of networks, because -- assuming there are no malicious nodes -- detecting that a node is alive takes one packet. Detecting that a node is dead, you're not sure: maybe the link was down, maybe it was just a transient failure, maybe a packet was lost. So it's a lot harder to find that something has crashed than to find that something's working. But yes, you keep listening for hello packets. OK, so this is how it converges. Eventually, of course, because there's some correct working path, it will all converge to the correct answer. If at some later point this link comes back up, the same thing occurs, because all of this stuff is being done periodically. So periodically these advertisements are going to be sent, C is going to find that there's a better route to D via this link L0, it advertises D now at a cost of 4, and eventually all of the nodes figure it out and converge back to the right answer. OK. And you can see that in the link state protocol, the convergence is actually a little bit easier, because again, the nodes are periodically advertising these links. So what's going to happen in a link state protocol, if you take the same picture -- previously the nodes all had routes, and many of those routes went through that link -- is that you have to wait for the hello protocol, by which C discovers that this link has failed, and then for the next link state advertisement, by which all of the nodes, through the flooding process, discover that this link has failed. Then they all run Dijkstra's algorithm again, and they will find the correct new answer, which will take them through paths that bypass this failed link. Now, the same logic applies in both protocols when a node fails. If this node were to fail, you can sort of think it through: a node failing is actually equivalent to all of the links coming out of the node failing. So it's a somewhat harder problem in terms of just making sure that you're able to find the routes correctly, but this node failing is really the same as all of the links attached to that node failing. In a link state protocol, you'll eventually discover that, and all of the nodes will compute routes this way; and similarly, that's what happens in a distance vector protocol. Now, so far in this picture, I've assumed that once you have these failures, nothing else happens -- there are no more failures, no packets that are lost, and so on. But life's actually not so kind. Before I get to why this stuff is a lot more complicated, does everyone understand how these things work, and how they converge correctly to the right answer after failure and after recovery from failure? Any questions? OK. So now let me tell you all the ways in which this story goes wrong. The first way the story goes wrong -- let me do it in the context of a link state protocol with a very, very simple picture.
Let's say you have-- I think I have a slide. All right. Let's say you have the picture that I've shown up there -- a very, very simple picture. There's A, B, and D, where D is the destination, and this is some path. Let's say that normally, when there are no failures, the way to go from B to D is via A -- so B, A, D -- and A goes to D directly. Now, let's assume that this link fails. If that link fails and things work great, what's going to happen is that in the next link state advertisement, A tells B that AD no longer exists. A knows the correct link state, and so it computes its route via B. And similarly, B realizes that AD doesn't exist anymore, and it computes an alternate route that way. But let's say what happens is that AD fails, and then the next link state advertisement that A sends out is lost. Let's say that A's link state advertisement to B is just lost -- packets could get lost. Now we have a problem, because A knows that this link has failed. Therefore, when it runs its Dijkstra, or shortest path, algorithm, it knows that what it wants is a route going like that. But B, on the other hand, doesn't know that link AD has failed, because it didn't see that link state advertisement, which was lost. So B computes its routing table entry the same as it was before, going through that link over here. Now you have a problem. When A gets a data packet that it wants to send to destination D -- previously it sent it this way, but now it knows that link has failed -- it sends the packet to B, because its route for D is via B. Well, B gets that packet and looks it up in its routing table, and B believes that the way to get to D is via A. So it sends it back to A. Well, A gets that packet and says, oh, that's great: this is a packet for destination D; I look it up in my routing table; it goes via B. And this thing bounces back and forth for pretty much as long as you want. This is the simplest example of a general phenomenon called a routing loop. So the first thing that can happen during the process of route convergence is that various kinds of pathologies and problematic conditions can arise, and one of them is a routing loop. The second thing that can happen -- and I showed you that here -- is that during the process of convergence, C does not have a route to the destination D, but B thought it had a route going via C. In fact, C just dropped that packet. That's the second condition: a dead end. So both of these things can happen during the process of convergence. Now, these routing loops are particularly problematic. This is an example of a two-hop routing loop: A goes to B, B goes back to A, and so on. But you can have more complicated routing loops. You could have a routing loop with four nodes involved, as opposed to two, where this is destination D, and A thinks that you have to go that way, B thinks that you have to go this way, C thinks you have to go this way, and this guy -- let's call him E -- thinks you have to go that way. And this could happen. So you end up with packets cycling around.
Now, once the routing table entries have somehow converged to routes that form a cycle -- where B has to use this link, and C has to use this link, and so forth -- then until it gets fixed, these packets cycle forever. There's really no way to avoid the packets cycling. Of course, eventually this will be fixed: if the routing protocol eventually converges, it will eventually discover that this is wrong and find the correct answer. But during the process of convergence, bad things like this could happen. And that's why, in packet-switched networks, packets carry a field called the hop limit field. That's on a data packet. The source of the packet sets a hop limit -- let's say 32. It just says: I need to get to the destination, and I know it shouldn't take more than 32 hops, no matter what happens. Then every switch that gets this packet reduces the hop limit by 1, and when the hop limit gets to zero, the packet's discarded. So this is a way to flush packets out of the network. You usually use this mechanism to handle the case when you get stuck in a routing loop: you don't want these packets to cycle around forever and ever, because packets move around the network in milliseconds, while the routing protocols take many seconds, or even minutes, to converge -- that's many, many milliseconds. These packets could remain in the network forever, using up bandwidth with no one getting any use out of it. So you have this hop limit field to flush packets out of the system. But of course, what we'd like is to design protocols with a guarantee of no routing loops at all. Unfortunately, that's impossible. What we can try to do is reduce and mitigate the effects of routing loops. Now, that was an example of a routing loop in the link state protocol. I want to now talk about what happens with a distance vector protocol, and show you why this basic, simple distance vector protocol -- which was the first routing protocol invented -- has some problems, and how we go about fixing them. And eventually I'll talk about how this is all used on the internet today. So here's how a distance vector protocol might get stuck in a weird kind of routing loop. Let's take this example here, where you have five nodes and we're all interested in finding routes to destination E. The general lesson I want to get at is that a distance vector protocol is extremely simple, but it only works on small networks; for bigger networks, we want something better. That's where the story's going. So let me refresh where we are with the discussion so far. Let's assume that link AC fails in this picture, and assume that all link costs are 1, so we don't worry about costs at this point. What you want to have happen is for A to discover that this link has failed, and when the routing converges, you would like A to use this link as its route to destination E. The cost would be 1 plus 1 plus 1, which is 3. All right. So when A discovers the failure, it sends a cost of infinity for E to its neighbor -- in particular, to B. And then B, of course, has a route to destination E at cost 2, and B advertises that to A.
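Here is a tiny sketch of the hop-limit check a switch might perform when forwarding a data packet; the packet structure and field names are illustrative assumptions.

```python
# A sketch of hop-limit handling at a switch; the packet object and its
# fields are hypothetical names for illustration.
def forward(packet, routing_table):
    packet.hop_limit -= 1
    if packet.hop_limit <= 0:
        return None  # discard: even inside a routing loop, the packet dies
    link, cost = routing_table[packet.destination]
    return link      # ship the packet out on this link
```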
And then A says: now I have a route to destination E. And this is an example of a good, converging routing protocol. Everything's good. Now, let me complicate the picture. Let's assume that link BD also fails. So now what's the correct answer? Well, these two guys have a route to E, but the network has become disconnected. So the correct convergent answer here is that A and B both discover, and instantiate in their routing tables, entries that say that E is at a cost of infinity, because there's no path -- which means it's an infinite cost. So when a packet arrives at B for destination E, you just drop the packet. And this could happen. Here's an example of how it happens. Let's say that this link fails. B discovers that through the hello protocol, and at this point, B changes its routing table entry so that E is at infinity. B had previously sent information to A saying that E was at distance 2, or cost 2. And now it says: well, I told you that E was at cost 2 before, but I'm changing my mind -- it's at cost infinity. And A says, OK, my entry for E now has cost infinity, and both of them have converged correctly to the right answer. Now, unfortunately, that's not the only thing that could happen. That was the lucky situation, where B discovered the link had failed and immediately sent out its cost to A. But what could happen is a little different. B could discover that link BD has failed and change its routing table entry to infinity, but before it gets a chance to send out its advertisement to A -- or perhaps it sends out its advertisement with a cost of infinity to A, but it got lost -- in either of those cases, A could send out its routing table cost for destination E to B, because that's what's happening periodically, right? Every node is periodically running this, and the times are all asynchronous; every node has its own notion of when it should send out its advertisement. So what happens now is A sends out an advertisement to B saying it has a route to destination E at a cost of 3, which is perfectly valid, right? After all, A does have a route in its routing table to E whose cost is 3. It so happens that the route goes through B, but A doesn't yet know that link BD has failed. Now we're a little bit in trouble, because B believes that its routing table entry for E is at a cost of infinity, since link BD has failed. And now it sees an advertisement from A with a better cost. B says: wow, this is cool. I now have a path of cost 3 via A to E, which is better than my cost of infinity. So I'm going to assume that I have an entry to E at a cost of 4. And you can see that this is actually not a valid route at all. Now, B, on the face of it, has no way of knowing whether A is telling it about a different route. It could conceivably be the case that A has a different route going that way whose cost is 3 -- A could legitimately have a cost of 3 to E that it's telling B about. But in this protocol, there's no way for B to distinguish that case from the case where A is just repeating back to B a route that it received via B. Now B, therefore, says that it has a cost of 4 to E, and it sends that to A. And A says: whoa. Previously, the cost from B was 2, and now B is telling me that the cost is 4, which means I need to make my cost equal to 4 plus 1, which is 5. And I'm going to send that back down. B says: all right, I've got a cost of 5.
Previously, that same thing had a cost of 3, so now I'm going to make my cost 6. And this goes on forever. In the meantime, if there are packets showing up at either A or B for destination E, they're just going to bounce between these two guys. This is a routing loop. Now, when does this stop? When do these guys stop sending these incrementing costs? Sorry? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. You need a value of infinity. You need to say that, at some point, they're going to reach infinity, and we're going to stop. In other words, for this protocol as presented to converge in a reasonable amount of time, your value of infinity should be small. This thing has a colorful name: it's called counting to infinity. Now, in reality, in any network, the value of infinity cannot be smaller than the largest minimum-cost path. If you have a minimum-cost path that has a cost of 75, for whatever reason, infinity had better be bigger than 75. Right? So what this means is that you have a problem with this protocol. It works great on small networks -- but it only works on small networks, because it needs a value of infinity that's not very big. This is why distance vector protocols are only used for really simple, small networks. The moment the network becomes a certain size, or when you want costs that are large values, you really can't use this protocol. So how do you fix this problem? Any ideas on how to solve it? Clearly the internet is pretty big, and we're not counting to infinity throughout the whole internet -- or at least, we don't think we are. So how would you fix this problem? Any ideas? Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: So we do have a hop limit on packets. All these packets might have a hop limit, so the packets don't remain in the network for a long time. But that doesn't solve the problem that the routing protocol takes a long time to converge, which is this counting-to-infinity problem. So you want a better solution in some way. Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. So that's one good idea, and in fact that's how they started trying to solve this problem: if you have a route to a destination coming from a neighbor, don't send the same route back to them. In other words, in this case, A's route to E was via B, so A should not advertise a route for destination E back to B. If you do that, there's a name for it: it's called split horizon, and the notes describe how this protocol works. Or you could do even better. A could advertise to B that its route to destination E has a cost of infinity, forcing B to definitely not use that route, no matter what happens, because A received that route via B. So A tells B that the cost of that route is infinity, because under no circumstances does A want B to use the same route that A received from B. You could do that -- this variant is commonly called poisoned reverse. The trouble is that it solves these two-hop loop problems, but it doesn't solve four-hop loop problems. You could have a situation where this link fails and C discovers that, but before C sends out its update, B sends out its route to C. So C thinks it can use B, while in the meantime, B thinks it has a route via A. So you might end up with packets cycling around in longer loops. That idea doesn't actually solve the more general problem. So, any other modification or idea that can solve the problem?
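As a concrete sketch of the idea just discussed, here is how a node might build the advertisement it sends to one particular neighbor, applying split horizon with the poisoned-reverse variant; the table layout and names are illustrative assumptions.

```python
# A sketch of split horizon with poisoned reverse: when advertising to a
# neighbor, any route learned over that neighbor's link is sent as infinity.
INFINITY = 16

def advertisement_for(neighbor_link, routing_table):
    adv = {}
    for dest, (link, cost) in routing_table.items():
        if link == neighbor_link:
            adv[dest] = INFINITY  # never echo a route back to its source
        else:
            adv[dest] = cost
    return adv
```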
So one thing you could do is something called path vector, in which every node, rather than just sending the cost, sends the entire path -- the list of nodes that corresponds to that particular route -- in its routing advertisement. I'll show that with a picture here. So E could send out not just its destination and a cost -- previously, it would say that to come to E, the cost is zero -- but now it says the cost is zero and the path is E. And then each of these other guys, C and D, could send out their own advertisements saying the cost is 1, but also that the path is D, E. So D says, in its advertisement, that my path to get to E is D, E. And B here, when it receives that, could send out its own path vector, which is the list of nodes -- the list of switches -- that corresponds to the actual path that's being advertised. And now the rule for how you integrate a route into your routing table is very simple: if you see an advertisement with your own identity in it, then you know that it's just a rumor that you started or were involved with, so you shouldn't integrate it. In particular, in this example here, if B were to see an advertisement from A with a path that was A, B, D, E, then B wouldn't integrate that. So in the picture I showed you before, if that link failed, B would have sent B, D, E over here, and when A advertises that back to B, A, B, D, E would show up. B now sees A, B, D, E, finds its own name in that advertisement, and says: I should pay no attention to that. As long as you find your own name somewhere in the list of nodes that a routing advertisement went through, you know that you shouldn't pay any attention to it, because you were involved in creating that advertisement. This protocol is called path vector. It's used on the internet in something called the Border Gateway Protocol, which runs between autonomous systems. And that's actually what makes the internet essentially converge and not have these routing loops between different internet service providers. Any questions or comments so far about any of this stuff? So let me summarize everything about routing protocols, and we'll pick this up in recitation with some problems tomorrow. Over the last two lectures and in recitation, we've spoken about the network layer. The main problem solved by the network layer is how to get packet routing to work: how do you find good paths between different switches in the network? Now, we've separated the task of routing from forwarding. Forwarding is what happens when a packet arrives at a switch: there's a lookup that happens in a routing table. You take the destination, look it up in the table, find the link, and ship the packet. Usually, you want that to be done very, very fast. Routing is the process by which the nodes create the routing table entries, and that's a very distributed process: it runs amongst all of the switches in the network. We looked at two routing protocols -- link state and distance vector. In distance vector, the computation is distributed, with these Bellman-Ford update steps.
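Here is a minimal sketch of the path-vector integration rule just described -- discard any advertisement whose path already contains your own name. As before, the data structures are illustrative assumptions, not BGP's actual message format.

```python
# A sketch of path-vector integration: each advertised route carries the
# full list of nodes it traversed, so loops can be rejected by inspection.
def integrate_path_vector(my_name, routing_table, link, link_cost, adverts):
    """adverts maps destination -> (cost, path list from the neighbor)."""
    for dest, (adv_cost, path) in adverts.items():
        if my_name in path:
            continue  # a rumor we helped start; using it would create a loop
        new_cost = link_cost + adv_cost
        current = routing_table.get(dest)
        if current is None or new_cost < current[1]:
            # Store our route, plus the path we will re-advertise onward.
            routing_table[dest] = (link, new_cost, [my_name] + path)
```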
And the distance vector protocol is very beautiful in that it's very, very simple. It works for small networks. But to make the idea work for bigger networks, you have to enhance the distance with the actual path. If you enhance it with the path, you avoid a lot of these routing loops that show up -- you can't eliminate them, but you can mitigate their effect. In the link state protocol, there's actually more work that's done. There's a lot more information that's flooded between nodes, and you consume more bandwidth in flooding it. But the protocol usually converges quicker than these distance vector and path vector protocols. In the link state protocol, you flood this neighbor information, and the computation is centralized: you run Dijkstra's shortest path algorithm. So, what the internet does in general -- I'll pick up on this three lectures from now, when I talk about how the internet really works and applies the concepts we've studied -- is that networks like MIT's network will run a link-state-like protocol to achieve connectivity between nodes inside MIT, and then routers at the edge of MIT, connecting to other internet service providers, run a path vector protocol, like BGP. All of these things work together, and they work because, ultimately, all of the switches create these routing table entries that map destinations to the routes, or links, that have to be used. So that's the routing story. We'll pick it up in recitation tomorrow, and see you back on Wednesday.
MIT_602_Introduction_to_EECS_II_Digital_Communication_Systems_Fall_2012
15_Modulationdemodulation.txt
PROFESSOR: There's quite a bit I want to go through here. So we're going to talk today about modulation, which you've already gotten some notion of -- that's basically the task of matching a transmitted signal to the physical medium -- and then we'll talk about demodulation as well. So just to remind you of how we got into this story: we started off talking about bits that we had to get across to a receiver from a source. And we've been spending quite some time now focusing on this piece of the system, which is taking the bits, converting them to actual physical samples of a voltage, for instance, and then trying to get them over some physical medium, and then, at the other end, converting back to bits, all right? So this is really a key piece of the system. If you can't get it over the physical medium, then you don't have anything. So we've been spending quite some time on that. We've talked about models for signals and models for systems -- LTI models for systems -- all in the time domain. And then we came to the frequency domain, which we said would make things a lot simpler, and which is actually the way that people typically think about transmission on physical media. OK, so the actual math -- this is just review. We've seen, most recently, that you can represent any signal as a weighted combination of exponentials; this was the transform-domain representation. And the weights here are given by the discrete-time Fourier transform. You give me a signal, I can find for you the discrete-time Fourier transform, which then tells me how to assemble complex exponentials to get the signal of interest, all right? So this Fourier representation of the signal is really the frequency-domain thinking. And then we saw how to apply that to a system. We have an LTI system, therefore characterized by a frequency response. We first introduced the frequency response as a way of thinking about what happens to cosines: put a cosine in, and you get a cosine out that's scaled by the magnitude of the frequency response, with the phase shifted by the angle of the frequency response. And then we went from cosines -- or, in parallel, we talked about exponential inputs, inputs of this type. And now we have, more generally, a signal that's represented as a weighted combination of exponentials of that type. Out comes the same weighted combination of exponentials, except each one is scaled by the frequency response as appropriate. Then, comparing that with what we expect as a spectral representation for the output, we get this key relationship relating the input and the output of an LTI system that's governed by a frequency response. So now we're starting to think in terms of the spectral content of the input -- the frequency-domain description of the signal that goes into the system -- with the frequency response of the system shaping that spectral content to give you the spectral content of what comes out, all right?
So this is the language and the picture that we have, and it's all as simple as multiplication, once you've figured out the spectral content of the signal of interest and the frequency response of the system. So we've got to know how to do those pieces. Then we talked most specifically about a physical medium that's close to what you're doing in the lab, which is an acoustic channel driven by a loudspeaker, with a microphone at the other end to pick up the signal. And I showed you these typical characteristics of loudspeakers, the kinds that you'll find listed everywhere -- three different speakers. I mentioned last time that, when you look at frequency specs for speakers, people will typically only show you the magnitude specification, because for audio applications, the phase distortions are a little less important; they tend not to be picked up by the ear. All of these have passbands from around 100 Hertz to, let's say, 10 kilohertz -- this pointer's a little weak here. So in that region, they pass signals more or less uniformly, in at least the magnitude characteristic, and then near the edges they taper off. Some speakers will have bigger passbands and will taper off closer to DC. Other speakers actually will not pass frequencies until you get to about -- oh, what is that, 120 Hertz or so on this characteristic -- you've got to get way up before you get anything through that speaker. But nominally, we can think of speakers, since they're aimed at audio applications, as covering what the ear hears, which is something on the order of, let's say, 100 Hertz to 10 kilohertz. OK, but the phase characteristic is important, too. It's maybe not important when you're sending audio on a speaker, but in Audiocom, in the lab, you're actually sending pulses across it. You're communicating something other than audio: you're trying to get across a signal whose particular shape matters. It's not how you hear it, but what it looks like before you sample it, OK? So in settings like that, the phase characteristic is important, as well. Now, you haven't explicitly probed the frequency characteristic of the speaker you're using. You could do that; instead, you've been looking at things like step responses in the time domain and constructing eye diagrams. But you could look in the frequency domain and characterize your particular channel, for your laptop sitting in a particular place -- you could look to see what the magnitude and phase are like. OK, so I want to go through this exercise of looking at the spectral content of a signal you want to get across this audio channel, then looking at how the audio channel shapes it, and then what you pick up at the other end -- just to give you a feel for how one thinks through this. So the input in the typical application you have for Audiocom -- let's see, if you wanted to signal just a 1 and then all 0's -- would be 256 samples at height 1, and then everything from then on 0, OK? What I'd like to do is think through how this pulse gets across the medium, but thinking it through in the frequency domain, all right? So the first thing we have to figure out is: what's the spectral content of this pulse? By the way, if we understand it for one pulse, then we know it for a sequence of pulses, because if we're modeling the system as time-invariant, once we figure out what one pulse does, we can figure out what a later pulse will do.
It's just the same response, delayed in time, OK? So the key to it is understanding what happens with one pulse. So the spectral content of the signal is what we're interested in. And my question is: do you have any guesses as to what the spectral content might be, just roughly, qualitatively? Where do you think the energy of the signal is concentrated? What frequency ranges? Any thoughts? I'll need a hand up and a loud voice so I can figure out what's-- at least one? Yeah? AUDIENCE: Low frequency. PROFESSOR: Low frequency is a good idea, because for most of this signal, you've got essentially nothing happening, right? It's just flat. So you expect high spectral content at DC. But there is this sharp transition, so you might expect high frequencies associated with that. So do you think it might be low frequencies and then high frequencies, not much in between -- or any thoughts? OK, well, let's work out what it actually is. So we're talking about a signal that's at height 1 -- let's do the general case. Let's say it's height 1 for N samples, from 0 to N minus 1, and it's 0 outside of that, OK? So suppose this is x of n. How do we determine the spectral content? Well, we've got to compute the DTFT, right? So what's the DTFT? It's a summation of x of m times e to the minus j omega m, over all m. But in this case, it simplifies, right? Because there are only a few nonzero values of the signal. So it's going to be x of 0 times e to the minus j omega times 0, plus x of 1 times e to the minus j omega times 1, and so on. That's going to be 1, plus e to the minus j omega, plus e to the minus j 2 omega, all the way up to e to the minus j (N minus 1) omega. So that's the DTFT. But until you work with that and get it into a form that you can make sense of, you still don't have a feel for where the frequency content is, right? The best way to get at that is to think of what the magnitude of this will be. And even then, it's not obvious how to think about the magnitude of a sum of complex numbers like this, so you've got to play with it a little more. OK, well, this is a geometric series, right? Each term is obtained from the previous one by multiplying by e to the minus j omega. And if you've got the sum of a finite number of terms of a geometric series of this type, what do we have? We have that as the sum, right? You agree? So this was the factor by which we multiply each term, and we've got N such terms, so you're summing N terms of a geometric series. Well, we might be getting closer here to extracting a magnitude, but you really want to do a little bit more massaging. Let's see. If I pull out e to the minus j omega N over 2 from the numerator, what's left is e to the j omega N over 2 minus e to the minus j omega N over 2 -- and I do the same kind of thing in the denominator. This is a trick we've played a few times before, right? I've just rearranged things. Let's see -- how have I helped myself here? Have I helped myself at all? What does that simplify to? Well, the factor in front I can write as some phase term, e to the minus j omega (N minus 1) over 2. And what's this? Anybody? The numerator -- does the numerator remind you of anything? Sine? Sine of omega N over 2? And the denominator: sine of omega over 2, right? So now it starts to look a little bit more manageable. If I wanted to get the magnitude of this, well, it's going to be the magnitude of this piece times the magnitude of that piece. What's the magnitude of the first term here? Just 1, right?
It's e to the j something, so its magnitude is 1. So here's the magnitude of the DTFT -- that's the spectral characteristic, and that's something we can plot. AUDIENCE: Question. PROFESSOR: OK. Sorry. Question, hi. AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry, say that again? AUDIENCE: [INAUDIBLE] PROFESSOR: Did I make a mistake somewhere here? Oh, this thing? This term here? I was trying to combine numerator and denominator here. AUDIENCE: [INAUDIBLE] PROFESSOR: Which part? Sorry, I'd have to stand where you are to see if I made a mistake, because it's hard to see close up. AUDIENCE: Like, when we go from the 1 minus e to the minus j omega N form to the other one, why are we dividing by two? PROFESSOR: Oh, why are we dividing by 2? Because when you multiply it out, that's what it takes. I'm trying to group things to get something interpretable. I don't know what this is, but I know what the numerator over sine looks like, so I'm trying to make this a little bit more equally distributed, right? If I pull out that factor, what's left looks like part of a sine, OK? So it's just rearranging terms: this times this gives me the numerator here, and this times this gives me the denominator here. We've used this trick before. Any trick that works twice is a method, OK? So we really have a method here; it's not just a trick. If it works three times, you can make a religion of it. OK, so that's the derivation we have here. What's the height of this at the origin? Let's just focus on that term. OK, so this is the magnitude we're talking about. What's the height at the origin, at omega equals 0? Well, you can use L'Hopital's rule, right? Because omega is something that varies continuously. So for small values of the argument, you're really looking at something of height N. And when is the first time that this goes to 0? Well, for small values of frequency, the numerator is not changing sign, and this first goes to 0 when you get to omega equals 2 pi over capital N. So actually, instead of saying all that, I should just draw you a picture. There's a picture of one particular case. This is a case where the pulse didn't start at 0: it was a pulse of length 11, symmetrically located around 0, OK? And because it was symmetrically located, this phase factor went away, and all you're left with is the sine of omega N over 2 divided by the sine of omega over 2. So you're looking at the actual DTFT of a pulse of that type, OK? This started at minus 5 and went to plus 5, was 11 samples long, and was 0 everywhere outside of that. So that's what this function looks like: sine omega N over 2 divided by sine omega over 2. Does it remind you of a function you've seen before? Sinc? A sinc function? It's very close to a sinc. The sinc, though, had just frequency in the denominator; it didn't have sine of something in the denominator. And the reason this appears is -- remember that transforms and frequency responses have to be periodic with period 2 pi. So it certainly wouldn't be possible for the transform of a signal to be a sinc, because there's no periodicity in the sinc. But when you work it out carefully, you find that it's something close to a sinc, but one that has exactly the right periodicity, so this thing will repeat periodically with period 2 pi, exactly the way it's supposed to. So it's sort of sinc-like.
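If you want to check this algebra numerically, here is a short sketch -- in Python with NumPy, which is an assumption about tooling, not something specified in the lecture -- that evaluates the DTFT sum directly and compares it against the sin(omega N/2)/sin(omega/2) form:

```python
# A quick numerical check: the DTFT of a length-N rectangular pulse has
# magnitude |sin(omega*N/2) / sin(omega/2)|, the "periodic sinc".
import numpy as np

N = 11
# Avoid omega = 0 exactly, where the closed form is the limit N (L'Hopital).
omega = np.linspace(-np.pi, np.pi, 2000)
X = sum(np.exp(-1j * omega * m) for m in range(N))   # direct DTFT sum
periodic_sinc = np.sin(omega * N / 2) / np.sin(omega / 2)
assert np.allclose(np.abs(X), np.abs(periodic_sinc))  # magnitudes agree
```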
For small values of omega, the sine in the denominator is essentially just omega over 2, and this is essentially a sinc. But when you get to larger values of omega, this thing starts to play a role. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah? Sorry? AUDIENCE: Does the magnitude mean anything? PROFESSOR: I'm not plotting the magnitude now; I was plotting the actual DTFT for this case. In this symmetric case, the actual DTFT is the sine of N omega over 2 over the sine of omega over 2. So I was plotting the actual DTFT, and the magnitude of the DTFT I get just by taking the absolute value, right? AUDIENCE: [INAUDIBLE] PROFESSOR: Well, I did this for the case of a pulse starting at time 0, OK? So this factor came purely from where I located the pulse on the time axis. Different positions of this pulse on the time axis will modify this factor, but won't touch that factor, OK? Shifts in time correspond to multiplication by e to the minus j omega something, right? You've seen that in recitation. So whenever you shift the pulse in time, what it does to the transform is leave this part intact and affect this factor -- but that doesn't change the magnitude of the transform, OK? So in the case of a pulse symmetrical about the origin, that phase factor actually goes away, and the actual transform is just that piece, without the e to the anything. Sorry -- I jumped over a few steps in describing that. Just to go back a second: this is not a sinc function, but it's actually referred to as a periodic sinc. It's got a fancier name, also -- the Dirichlet kernel -- and it crops up all over the place. Height N at the origin, and the first zero crossing at 2 pi over capital N. So as you make the pulse wider in time, you make it narrower in frequency, right? As N becomes larger, you make this wider in time, the main lobe of this frequency distribution gets more concentrated, and the signal gets closer to being a DC signal. Makes sense, right? The longer this stays constant, the more the signal looks like just DC, and the more the frequency content is concentrated at the origin. But what you can see here is that there's actually a full spread of frequencies. It's not that there's just low frequency for the flat parts and high frequency for the vertical edge and nothing in between -- there's a full spread of frequency components that it takes to make up that step. OK. If you had a pulse that wasn't centered -- this is just to show you. Here is a pulse that's not centered. It's only 10 long, and in the magnitude here you're only seeing half the frequency scale, so 0 to pi essentially -- except this is in terms of f, which is omega over 2 pi. You get the same kind of magnitude characteristic, but because you've shifted it off-center, you've got a linear-phase characteristic. That's what you're seeing here, except every time you have a flip in sign, you jump the phase by 180 degrees, right? When you change the sign of something from a plus to a minus, that's like adding or subtracting 180 degrees of phase. So you can spend time on all of this and make sense of it, but the basic idea is that you get the sinc-like distribution in frequency. OK, so let's get back to the particular pulse that's of interest to us, which is that pulse. It's the same kind of thing, except N is 256. And what I've plotted for you here is the magnitude of the DTFT. It has the sinc-like shape. I haven't actually plotted it as a continuous function of omega; instead, I've used the FFT. You remember?
We talked about the Fast Fourier Transform. What the fast Fourier transform is going to do, if the actual magnitude of the DTFT is some continuous thing like this, is give me samples of it -- as many as I want. But the more samples I ask for, the more work I have to do, of course, OK? I asked for 48,000 samples of the DTFT so that I could get a nice big spread here. If your samples came from sampling at 48 kilohertz, for instance, then the rightmost end, which corresponds to pi, would in terms of actual frequency correspond to the sampling frequency divided by 2 -- so that's 24 kilohertz sitting there. So I actually have 24,000 points for 24,000 Hertz -- one point at every Hertz position -- but I could pick anything else. The other thing I wanted to mention is that the reason I could do this is that I'm using the FFT. I told you that if I did a simple-minded implementation of the formula, it would take order p squared computations, where p is the length of the signal that I'm looking at. If I use the FFT, I go from p squared down to p times log to the base 2 of p, all right? Well, the number of points I have here is 48,000, so going from p to log base 2 of p is going from 48,000 to about 16, which is a factor of 3,000. So the difference is: I sit at the terminal and hit the Return key to get the FFT, and maybe 0.1 seconds later I get the answer -- versus, if I didn't use the FFT, I'd wait five minutes, and then I wouldn't be trying to put together these figures for you. So the FFT makes a real practical difference, and it really revolutionized how numerical computations were done. OK, so here you now see the full spectral distribution, if you're willing to let your eye interpolate between these samples that I've got -- the full spectral distribution of that rectangular pulse, OK? So, a short pulse in time: it's certainly got high DC content, but it's got a tremendous frequency distribution, all the way out to high frequencies. In fact, all the way to the end you're still seeing frequency content; it's visible to the eye. So all the way out to 24,000 Hertz, and you could keep going, all right? This is not a sinc; it's a sinc-like function, because if you extended it, it would go back up again. It's got that period, 2 pi, exactly as it should. But for all intents and purposes, for small values of frequency, the sine of omega over 2 here is essentially omega over 2, and this is a sinc-like function, OK? Now, this is different from what you've seen before -- at least, I think it's different, unless you've done examples in recitation. Before, what we had was, for instance, trying to get a frequency response that was like this, right? We ended up with a unit sample response that was a sinc function in time. This is going the other way: a rectangular function in time giving rise to a sinc-like distribution in frequency. All right, let's zoom in a little bit just to see what we have here. And this is exactly what we expect to be seeing: the sinc-like distribution. The height there is 256, as it should be -- that's the N. And then this first null should be at 2 pi over capital N. If you translate that into actual frequency, the first null is at 187.5 Hertz -- that's 48 kilohertz divided by N. So that gives you some idea of how this thing is spread in frequency.
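Here is a sketch of how one might reproduce that plot, again assuming Python with NumPy; the zero-padded FFT gives densely spaced samples of the DTFT -- one per Hertz at these numbers.

```python
# A sketch of sampling the DTFT of the 256-sample pulse with a 48,000-point
# FFT (O(p log p) rather than the O(p^2) of the naive DTFT formula).
import numpy as np

fs = 48000                    # with a 48 kHz sample rate, bin k sits at k Hz
x = np.zeros(fs)
x[:256] = 1.0                 # the rectangular pulse from the lecture
X = np.fft.fft(x)             # 48,000 samples of the DTFT around the circle
print(np.abs(X[0]))           # height at DC is N = 256
# The first null of the sinc-like magnitude is near fs / 256 = 187.5 Hz.
```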
OK, so if this pulse is applied directly to the loudspeaker-- well, here, the loudspeaker passband goes from about 100 Hertz to, let's say, 10,000 Hertz. You see there's huge amounts of the energy that's not in the passband of the loudspeaker. All right? So this is not going to do very well if you just directly apply that pulse to the loudspeaker. You have to match the frequency content of the input to the frequency response of the system you're trying to get this over. OK. Now, just as an experiment before we get back to sending this across a loudspeaker, let's take a look at what happens if we send that rectangular pulse over a lowpass channel. So let's save the bandpass channel, which is a little more involved, for later, and just look at what happens if we send this pulse over a lowpass channel. So I'm thinking about a channel whose frequency characteristic-- this is h of omega-- it passes low frequencies, and then it truncates higher frequencies. OK? And what I'm going to do is send in an x of N, which is this pulse in time. 256 at height 1, and then everything 0. All right, so if I'm thinking of it in the frequency domain, then I take the spectral content of the signal, which we've just worked out, and multiplied by the frequency response characteristic, which just basically selects out the frequency content that's in the passband and rejects everything else. And then that gives me the spectral content of the output, and I can translate that back to what happens in the time domain. So here is-- well, actually, let's take a zoomed-in version. I've taken a lowpass filter, where this cutoff corresponds to actually a cutoff at 400 Hertz if you're thinking in terms of the underlying waveform. So what we've done is take the rectangular pulse, put it through a lowpass filter-- and I'm assuming an ideal lowpass filter-- passes everything in this frequency band and nothing outside of it, OK? So here again, we see we're selecting out part of the spectral structure of the input, but there's huge amounts of the energy of the input that are being left out of the output of the filter. Now, in the time domain, look at what this corresponds to. It's an approximation to this pulse, but it's one in which all the high-frequency content has disappeared because we've only let the low-frequency pieces through. So what you have is a very rounded kind of pulse. It spreads out well over the 256 mark, so here's the 0 to 256, but this thing actually spills over into adjacent bit slots. We've taken out the high-frequency components, so it can't make any sharp turns anymore. It's got this lower-frequency wiggling. As you can imagine, if I made this even smaller, my wiggling would become even more leisurely here, and I would spill further into the adjacent bit slots, OK? So this is what lowpass filtering will do to that rectangular pulse. Any questions on this piece? OK. Now, we've actually seen examples of this type. I flashed these up last time. It's the same idea, except it was not just a single pulse. It was a succession of pulses like this. So you could do the same thing. Have a succession of pulses like this. Take its DTFT to assess the spectral content. I'm showing you not actually the DTFT but something proportional to it here. This is-- so ignore the labels there. Think of this as essentially the DTFT. To within the scale factor it is. So you can see that I have spectral content all the way out to the edges. 
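In code, the ideal lowpass channel is just a mask on the FFT bins; this is a sketch of that experiment, using the 400 Hertz cutoff quoted above:

```python
import numpy as np

fs, N, M = 48000, 256, 48000
x = np.zeros(M); x[:N] = 1.0          # rectangular pulse in a long frame

X = np.fft.fft(x)
freqs = np.fft.fftfreq(M, d=1/fs)     # signed frequencies in Hz

cutoff = 400.0                        # ideal lowpass: keep |f| <= 400 Hz only
Y = np.where(np.abs(freqs) <= cutoff, X, 0)
y = np.fft.ifft(Y).real               # output of the ideal lowpass channel

# The output is a rounded pulse that spills past sample 256:
print(y[:N].mean())                   # roughly the pulse level
print(np.abs(y[N:2*N]).max())         # noticeable energy in the next "bit slot"
```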
But now send it through a lowpass channel which zeros out all the spectral content outside of some central region, and what you have coming out the other end is something that can't take the sharp turns anymore, so it's much more rounded. And as you narrow it down still further, you get even more rounding, and you get a spilling over. You see this sharply confined rectangular pulse now spills over into the adjacent slots. And you can go still further on that. The top plot here is the same as the bottom plot in the previous one. But this is just showing the sequence. So as you narrow it down further and further and further, what comes out the other end gets much less distinctive in its features, OK? It can only-- this only has very low-frequency content, so it can't do any sharp turns. And this is an eye diagram corresponding to-- in each of these received signals-- just to show you how detection gets difficult if you've got a lowpass channel and you've got this pulse that's not well-defined. Now, how might you actually-- how might you get a better-defined pulse for a given channel? So if I gave you this channel-- we sent in this pulse of length 256 samples, and we got something that we didn't like because it spilled over into other slots. The reason it spilled into other slots is we were cutting out too much of its spectral content. What could you do to this pulse to get more of its energy across that bandpass channel? Yeah? Sorry? AUDIENCE: Set a longer pulse. PROFESSOR: Set a longer pulse. How does that work? Now, you see, if you make N longer, you make the pulse longer, you shrink this correspondingly in the frequency domain, right? We said that on these sinc-type characteristics, the height was N. This first null was 2 pi over N. Make the pulse longer, you pull the main lobe in tighter, and more of this is going to go through, OK? And you can see that clearly. That's something you can explore in your experiment. So if you wanted to get more clearly defined output for a given lowpass channel, you might want to increase the length of your pulse. Of course, that's going to slow down your signaling rate, so there's a trade-off involved, right? Now, we've actually-- this is just to sort of step back and point out that what we're seeing here are some properties that are inherent to Fourier transforms, OK? So it's typically the case that if you've got a signal that's wide in time, it's narrow in frequency. And if you make it wider in time, it gets narrower in frequency. In fact, as I mentioned up there, the uncertainty principle in physics really comes from this result. It says that the spread in time times the spread in frequency has some lower bound, OK? So this is some number that's strictly positive. So you can't make a signal arbitrarily concentrated in time and concentrated in frequency. If you make one small, the other one will have to grow correspondingly. So actually, the uncertainty principle in physics is precisely a theorem in Fourier transforms if you study it. Here is another such complementarity or duality. The smoother you make a signal in time, the more sharp it is in frequency. Let's see. We saw that here, for instance, right? We had a signal in time, namely the unit sample response of the ideal filter. This was-- this didn't have any sharp edges to it. It was a sinc function. You got something that's smooth in time, it ends up having sharp edges in frequency, OK? And the more smooth you make it in time, the sharper it gets in frequency. And this we've already seen. 
This is the kind of trade-off we're talking about. So these are characteristics to be on the lookout for. In fact, I just did a little experiment here. What if we decided not to try and send a rectangular pulse over, but we smoothed out the pulse a little bit to get rid of that sharp edge? So that's another way that you can try and get a pulse over a lowpass channel like this. So you see, what I'm trying to do is not have the sharp discontinuities, the 0 to 1 and the 1 to 0. I want a more rounded behavior in the time domain so I get a sharper concentration in the frequency domain, and you can actually see how that works. So what I've actually done here is, instead of a signal that-- well, if you'll permit me, I don't want to draw these stem plots, the lollipop figures, because they get painful to draw, but just assume that this is 256 such things. That's the rectangular pulse we were trying to send before. What I've done now is instead say, well, let me round the edges, so I'm going to have a half cycle of a cosine for that edge, OK? So that's what-- these are the samples I'm going to use at this edge, and I'm going to have a half-cycle of a cosine at this edge. This is actually something that's used quite a bit in applications. All right, so what have I done? I've removed the sharp edge and gotten a much more rounded transition. In fact, if you're thinking of continuous functions, the original had a discontinuity, whereas here I've got to take two derivatives before I encounter a discontinuity because the function itself and its slope at these ends are well-matched, so I've got to differentiate twice before I get a discontinuity. So actually, it has quite some smoothness to it. Smooth in time means more tightly concentrated in frequency. So all I'm doing on the next slide is showing you, so your eye can compare, what the spectral content is of the original pulse and of this rounded pulse. And for the rounded pulse, I actually just flipped it over so that you can compare more easily. So this is the negative of the DTFT magnitude of the shaped pulse. And you can sort of see right away here that the frequency content has essentially settled out. It's almost all contained in this smaller region, whereas the rectangular pulse had a frequency content that went way off to high frequencies, right? Went off to 24 kilohertz, for instance, in our example. So the frequency content of the shaped pulse is much more tightly contained, and you have a much better chance of getting that across a lowpass channel. So this is just a little bit of shaping that can make a big difference in terms of adapting the signal you're trying to send to the channel. So if I look in the time domain, sending these two pulses over the same lowpass channel, you can do a visual comparison of what comes out. So here's the original 256 rectangular pulse coming out the other end. Here is my shaped pulse coming out the other end. And you can see the shaped pulse is much more tightly confined in the bit slot that I assigned to it, so it's much more tightly confined around the 256-sample width, OK? So this is another thing that is done, and it's done by thinking in the frequency domain. People designed these pulses thinking in the frequency domain. They're not doing convolution. OK, that was all lowpass, but we want to look at bandpass, so just a couple of quick examples there. So we're back to the speakers. And we're taking our rectangular pulse and applying it to the speaker.
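A sketch of that edge shaping; the taper length of 32 samples is my assumption -- the lecture doesn't give one:

```python
import numpy as np

fs, N, M = 48000, 256, 48000
ramp = 32                                   # assumed taper length (samples)

rect = np.ones(N)
shaped = np.ones(N)
t = np.arange(ramp)
# Half-cycle cosine ramps: the value and slope match the flat top at the join,
# so the first discontinuity only appears in the second derivative.
shaped[:ramp] = 0.5 * (1 - np.cos(np.pi * t / ramp))
shaped[-ramp:] = shaped[:ramp][::-1]

def band_energy_fraction(x, f_lo=0.0, f_hi=400.0):
    X = np.fft.fft(x, n=M)
    f = np.abs(np.fft.fftfreq(M, d=1/fs))
    e = np.abs(X) ** 2
    return e[(f >= f_lo) & (f <= f_hi)].sum() / e.sum()

print(band_energy_fraction(rect))    # rectangular pulse: less concentrated
print(band_energy_fraction(shaped))  # shaped pulse: more energy below 400 Hz
```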
So here's what the spectrum looks like after ideal bandpass filtering. So I'm not actually filtering with the speaker characteristic. I'm assuming an ideal bandpass that extends from 100 Hertz to 10,000 Hertz and zeros out everything outside that range. So I send in the spectral characteristic of my rectangular pulse. I shape it with the bandpass. So what comes out is something that has this spectral characteristic. And you can see that the frequency content is sharply limited at 10 kilohertz, and then there is actually a central region that's entirely missing. So remember, this had to go originally up to 256, but because we've chopped out the center portion, we're only going out to 150-something out there. So actually, if I zoom in, you can see that a lot more closely. So this is a zoomed-in version of what comes out from a loudspeaker, from a bandpass filter, if I send in a rectangular pulse. The very low frequencies are entirely missing, and we saw in the previous characteristic that the very high frequencies are missing, as well. So what's the shape of the pulse that you get out? Not very good. Because the low frequencies are missing, this thing tends to sag in the middle. It can't hold up DC. And it can't make the very sharp transitions, either, so there is a more leisurely transition. But again, this can't stay at that level. It actually asymptotes. So the actual pulse occupies-- before it settles, occupies way over the 256 bits that I've allotted to it, OK? So taking that rectangular pulse and directly putting it on the speaker is going to give you something not pretty at the other end and something that you cannot signal with. So the question is what to do about that, OK? And the answer to that is this thing that we call modulation. We've already seen it in different forms. Let's think about it now in the frequency domain. OK, so here's what we're going to do. Want the big stick of chalk. OK. We're going to shape the spectral characteristics of the signal. We started off with some time-domain signal, x of n, with corresponding DTFT X of omega. It wasn't well-matched to our channel characteristics, and so what we're going to do is multiply by some carrier frequency. And this is simple amplitude modulation. We're just referring to it as modulation here. We get some signal out here. And the question is, what is the frequency characteristic of that signal? OK, we've already seen what the frequency characteristic of the input is. What's the frequency characteristic of that signal? So just to give you an example, we had our X of omega looking something like the sinc shape. Remember height N there? And the question is, what's the spectrum of this? And there's a computation up there that I don't want to go through, but it shows you that the answer is actually quite simple. So if I call this-- let me call this t of n because it's the signal we're going to transmit. Here's what the spectrum of the transmitted signal looks like. There is omega. There's pi. Here's omega c. OK, so the prescription is simple. Take the spectral characteristic and replicate it at omega c and replicate it at minus omega c, and scale by 1/2. So what you're going to get is this characteristic here, this characteristic here, and the height will be N over 2. It's that simple. So you can go through the math.
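The replicate-and-halve prescription is easy to confirm numerically; this sketch uses the 1 kilohertz carrier that appears below, and the indices work out because 1,000 Hertz lands exactly on an FFT bin at these sizes:

```python
import numpy as np

fs, N, M = 48000, 256, 48000
fc = 1000.0                                  # carrier (Hz)
n = np.arange(M)
x = np.zeros(M); x[:N] = 1.0                 # baseband pulse
t_sig = x * np.cos(2 * np.pi * fc * n / fs)  # modulated signal

X = np.abs(np.fft.fft(x))
T = np.abs(np.fft.fft(t_sig))

# Spectrum of the product: copies of X at +/- fc, scaled by 1/2.
print(X[0])                                  # height N = 256 at DC
print(T[int(fc)])                            # roughly N/2 = 128 at the carrier bin
```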
When you're done with the math, what it says is that the spectral characteristic of this modulated signal is the spectral characteristic of the baseband signal-- the envelope-- translated to the position of the carrier. So you can begin to see how this is going to help us shift a signal to get it across a bandpass channel. We started off with something that was not well-suited to the loudspeaker. We now have a way to shift its energy to get it right in the passband of the speaker by picking the carrier frequency appropriately. All right? I think I'm going to skip some of this. But let's look at what this does. It's really this picture, but I just want to show you how it works with actual waveforms. So here is the rectangular pulse times the cosine. I've picked-- what did I pick? 1 kilohertz as my carrier? Yeah. 1,000 Hertz was the carrier. 1,000 Hertz sits comfortably in the passband of a speaker, so it's a reasonable choice. By the way, in typical AM, the carrier frequencies are much higher than-- or the ratio of the carrier frequency to the rate of variation of the envelope is much higher than what we have in these examples. So the audio channel is actually very challenging. OK, so this is my modulated signal. The question is, what does its spectrum look like? So I run it through the FFT, and indeed I get the replication here. Let's zoom in a little bit so we can see it a little more closely. So what I have is those two sinc-like spectral characteristics, but translated to sit centered at 1,000 Hertz and minus 1,000 Hertz. Remember that this is 0 out here, OK? And the height, well, it's now 128, which is half the 256 that I had before. So in terms of positioning this within where the loudspeaker will transmit it, we were taking the lower cutoff of the loudspeaker as being around 100 Hertz. This is where 100 Hertz sits. So you can see that a huge amount of the energy of the pulse is getting through. It's at the wrong frequencies. We'll have to deal with getting it back. But at least getting the energy across is working here, OK? And the upper cutoff of the speaker is way off over here. So this is 10,000, but I'm showing you a zoomed-in version, so the upper frequency is way off. In fact, that also brings up the idea that you could actually do this trick multiple times. You could actually pick another carrier frequency somewhat higher than this with some other modulating signal on it and tuck that in there, as well. So you could simultaneously transmit messages on multiple carriers through that same speaker, and you'll be exploring that, as well. OK. So what does the-- what does the time-domain signal that corresponds to this look like? So when you get it, you impress this modulated signal on the loudspeaker, on this bandpass filter. What's the output of the bandpass filter? You can see it's almost exactly what you put in, OK? There's a little bit of distortion at the different places, but it's basically exactly the pulse that you put in. So almost all the energy has gone through, and you don't have significant distortion because you've squarely placed the energy in the passband of the filter. OK, now how do we recover? How do we get back the original baseband signal? Well, it turns out it's very easy. Let me actually do it in pictures, and then we'll look at the math. This is what's coming in to our receiver, and now we've got to process this to get back a signal that has this spectrum. We've learned a trick, which is modulation.
Multiply it by a cosine, and you'll take the spectrum and replicate it at omega c and at minus omega c. I'm going to use the same trick again. I'm going to take the received signal multiplied by a cosine of the same carrier frequency, and what's that going to do? Well, my scale is getting bigger here each time. So here's my omega c. Here's my minus omega c. What's coming in is this signal. I'm going to multiply it by cosine omega c, so what does that do in the spectrum? It takes this spectrum, replicates it at omega c. So what does that do? Well, it puts a piece here, and it puts a piece here, at 2 omega c, right? Because I've taken this spectrum-- you've got to imagine the change of scale so I can draw all of this. I've taken this spectrum and I've placed it centered on omega c. And this is now-- this is the N over 2 here. Oh, but now it's going to be N over 4, right? Because I divide by 2. And then I take the same spectrum and I center it on minus omega c. OK, so I've got the N over 4 piece here, but I'm going to have the other-- oh, sorry. I drew it in the wrong place. I'm going to center it at omega c. So this is going to end up at minus 2 omega c. And the replication here. So there's going to be a second one sitting here at the origin. So the net effect at the origin is that I get the original spectrum, but scaled by a half, and then I get vestiges of this, if you like, centered at twice the carrier frequency, OK? And that's actually what the algebra shows. The algebra is very simple. You're receiving this. Multiply it again by a cosine. So take the received signal and multiply it again by a cosine. Well, if you substitute for what the received signal is, you get x n times cosine squared. Cosine squared, if you use a standard identity, splits into these two terms. Let's see. Do I have that right? Yeah. So here's the 0.5 x n sitting here at the origin, and here's another term, which is x n times the cosine. So this is like a modulated signal, but modulated by twice the carrier frequency. So what does this translate to? Well, it's actually 0.5 times x n times cosine 2 omega c. What does that do in the spectral domain? It's going to take half of that and place it at minus 2 omega c and plus 2 omega c, so it's completely consistent with this picture. So what is it that we have to do now to extract the signal of interest? Just a lowpass filtering here, OK? So if you can select out this piece with a lowpass filter, you've recovered the signal of interest. You can adjust the scale factor, too, so you can have a lowpass filter with a gain of 2 and you've recovered your original signal. All right, we'll build on Monday. And that'll be the last lecture on this material. Relative to the calendar, we're just sliding forward, so we'll wrap up this stuff on Monday and then continue with packets.
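Here is the whole demodulation chain as a sketch -- multiply by the same carrier, then apply an ideal gain-2 lowpass; the cutoff choice is mine:

```python
import numpy as np

fs, N, M = 48000, 256, 48000
fc = 1000.0
n = np.arange(M)
x = np.zeros(M); x[:N] = 1.0                     # original baseband pulse

carrier = np.cos(2 * np.pi * fc * n / fs)
r = x * carrier                                  # received (channel assumed ideal here)

# Demodulate: multiply by the same carrier again. The cos^2 identity gives
# r * carrier = 0.5*x + 0.5*x*cos(2*wc*n), so a lowpass keeps the 0.5*x term
# and rejects the copy sitting at twice the carrier.
v = r * carrier
V = np.fft.fft(v)
f = np.abs(np.fft.fftfreq(M, d=1/fs))
y = np.fft.ifft(np.where(f <= fc, 2.0 * V, 0)).real   # ideal lowpass, gain 2

print(np.abs(y[N // 2] - 1))   # small mid-pulse; the edges show some ripple
```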
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
21_Stochastic_Differential_Equations.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So far we had a function, and then we differentiated to get an equation of this type. But now, we're given this equation. And we have to go backwards; we want to find the stochastic process that satisfies this equation. So the goal is to find a stochastic process X(t) satisfying this equation. In other words, we want X of t to be the integral of mu ds plus the integral of sigma dB of s. The goal is clear. We want that. And so these types of equations are called differential equations, I hope you already know that. Also for PDEs, partial differential equations, so even when it's not stochastic, these are not easy problems. At least not easy in the sense that, if you're given an equation, typically you don't expect to have a closed form solution. So even if you find this X, most of the time, it's not in a very good form. Still, a very important result that you should first note before trying to solve any of these differential equations is that, as long as mu and sigma are reasonable functions, there does exist a solution. And it's unique. So we have the same correspondence with this PDE. You're given a PDE, or given a differential equation, not a stochastic differential equation, you know that, if you're given a reasonable differential equation, then a solution exists. And it's unique. So the same principle holds in the stochastic world. Now, let me state it formally. This stochastic equation star has a solution, of course with a boundary condition. And given the initial point-- so if you're given the initial point of the stochastic process, then the solution is unique-- as long as mu and sigma are reasonable. One way it can be reasonable is if it satisfies these conditions. These are very technical conditions. But at least let me parse to you what they are. They say, if you fix a time coordinate and you change x, you look at the difference between the values of mu when you change the second variable, and the same for sigma. Then the change in your function is bounded by the distance between the two points times some constant K. So mu and sigma cannot change too much when you change your space variable a little bit. It can only change up to the distance of how much you change the coordinate. So that's the first condition. Second condition says, when you grow your x, very similar condition. Essentially it says it cannot blow up too fast, the whole thing. This one is something about the difference between two values. This one is about how it expands as your space variable grows. These are technical conditions. And in many cases, they will hold. So don't worry too much about the technical conditions. Important thing here is that, given a differential equation, you don't expect to have a good closed form. But you do expect to have a solution of some form. OK, let's work out some examples. Here is one of the few stochastic differential equations that can be solved. This one can be solved. And you already know what X is. But let's pretend you don't know what X is. And let's try to solve it. I will show you an approach, which can solve some differential equations, some SDEs. But this really won't happen that much. Still, it's like a starting point. There's that. And assume X(0) [INAUDIBLE].
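For reference, the two conditions being parsed here have a standard textbook form for dX(t) = mu(t, X(t)) dt + sigma(t, X(t)) dB(t); this is the usual statement, not a transcription of the board:

```latex
% Lipschitz condition in the space variable (the first condition):
\[
|\mu(t,x) - \mu(t,y)| + |\sigma(t,x) - \sigma(t,y)| \;\le\; K\,|x - y|
\]
% Linear growth condition (the second condition, "cannot blow up too fast"):
\[
|\mu(t,x)| + |\sigma(t,x)| \;\le\; K\,(1 + |x|)
\]
```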
And mu, sigma are constants. Just like when solving differential equations, the first thing you'll do is just guess. Suppose X(t) equals f of t and B(t) for some function f. If you want this to happen, then d of X of t is, by Ito's formula, del f over del t plus 1 over 2 del squared f over del x squared, times dt, plus del f over del x, times dB-- OK, x is just the second variable. And then these two have to match. That has to be equal to that. That has to be equal to that. So we know that del f over del t plus 1 over 2 del squared f over del x squared is equal to mu of f, and del f over del x is equal to sigma of f. So we assumed that this is a solution and then differentiated that. And then, if that's a solution, these all have to match. You get this equation. If you look at that, that tells you that f is an exponential function in the x variable. So it's e to the sigma times x, plus some function of t in the exponent, call it g of t. The only way it can happen is if it's in this form. So it's an exponential function in x. And in the time variable, it's just some constant. When you fix a t, it's a constant. And when you fix a t and change x, it has to look like an exponential function. It has to be in this form, just by the second equation. Now, go back to this equation. What you get is partial f over partial t is now g prime of t times f. AUDIENCE: Excuse me. PROFESSOR: Mm-hm. AUDIENCE: That one in second to last line, yeah. So why is it minus mu there at the end? PROFESSOR: It's equal. AUDIENCE: Oh, all right. PROFESSOR: Yeah. OK, and then let's plug it in. So we have g prime of t times f, plus 1/2 of sigma square f, equals mu of f. In other words, g prime of t is mu minus 1 over 2 sigma square. g(t) is mu minus 1 over 2 sigma square, times t, plus some constant c. OK, and then what we got is the original function f of t, x is e to the sigma*x plus mu minus 1 over 2 sigma square times t, plus some constant. And that constant can be chosen because we have the initial condition, and B(0) equals 0. That means if t is equal to 0, f(0, 0) is equal to e to the c. That has to be x_0. In different words, this is just x_0 times e to the sigma B(t) plus mu minus 1 over 2 sigma square times t. Just as we expected, we got this. So some stochastic differential equations can be solved by analyzing it. But I'm not necessarily saying that this is a better way to do it than just guessing. Just looking at it, and, you know, OK, it has to be an exponential function. You know what? I'll just figure out what these are. And I'll come up with this formula without going through all that analysis. I'm not saying that's a worse way than actually going through the analysis. Because, in fact, what we did is we kind of already knew the answer and are fitting into that answer. Still, it can be one approach where you don't have a reasonable guess of what the X_t has to be, maybe try to break it down into pieces like this and backtrack to figure out the function. Let me give you one more example where we do have an explicit solution. And then I'll move on and show you what to do when there is no explicit solution or when you don't know how to find an explicit solution. Maybe let's keep that there. Second equation is called this. What's the difference? The only difference is that you don't have an X here. So previously, our main drift term also was proportional to the current value. And the error was also dependent on the current value or is proportional to the current value. But here now the drift term is something like an exponential-- minus exponential. But still, it's proportional to the current value. But the error term is just some noise. Irrelevant of what the value is, this has the same variance as the error. So it's a slightly different-- oh, what is it? It's a slightly different process.
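A numerical sanity check of this solution: run an Euler scheme for dX = mu X dt + sigma X dB along one sampled Brownian path and compare with the closed form along the same path (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0, T, n = 0.05, 0.2, 1.0, 1.0, 100_000
h = T / n

dB = rng.normal(0.0, np.sqrt(h), n)   # Brownian increments
B = np.cumsum(dB)

# Euler scheme for dX = mu*X dt + sigma*X dB along this path
X = x0
for db in dB:
    X += mu * X * h + sigma * X * db

# Closed form along the same path: x0 * exp(sigma*B(T) + (mu - sigma^2/2)*T)
X_exact = x0 * np.exp(sigma * B[-1] + (mu - 0.5 * sigma**2) * T)
print(X, X_exact)                     # closely agree for small h
```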
And it's known as the Ornstein-Uhlenbeck process. And this is used to model a mean-reverting stochastic process. For alpha greater than 0, notice that if X deviates from 0, this gives a force that drives the stochastic process back to 0, that's negatively proportional to your current value. So yeah, this is used to model some mean-reverting stochastic processes. And they first used it to study the behavior of gases. I don't exactly see why. But that's what they say. Anyway, so this is another thing that can be solved by doing similar analysis. But if you try the same method, it will fail. So as a test function, your initial guess will be X(t) equals a(t) times x_0 plus the integral from 0 to t of b(s) dB(s), where a(0) is equal to 1. Now, honestly, I don't know how to come up with this guess. Probably, if you're really experienced with stochastic differential equations, you'll see some form, like you'll have some feeling for how this process will look. And then try this, try that, and eventually something might succeed. That's the best explanation I can give. I can't really give you intuition why that's the right guess. Given some stochastic differential equation, I don't know how to say that you should start with this kind of function, this kind of function. And it was the same when, if you remember how we solved ordinary differential equations or partial differential equations, most of the time there is no good guess. It's only when your given formula has some specific form that such a thing happens. So let's see what happens here. That was given. Now, let's do exactly the same as before. Differentiate it, and let me go slow. So we have, by the product rule, a prime of t times that value, and that value can be rewritten as X(t) over a(t). So I differentiate that to get a prime of t. That stays just as it was. But that can be rewritten as X(t) divided by a(t). And then plus a(t) times the differential of that one. And that is just b(t)*dB(t). You don't have to differentiate that once more, even though it's stochastic calculus, because that's a very subtle point. And there's also one exercise about it in your homework. But when you have a given stochastic process written already in this integral form, if we remember the definition of an integral, at least how I defined it, is that it was an inverse operation of a differential. So when you differentiate this, you just get that term. What I'm trying to say is, there is no term, no term where you have to differentiate this one more. Prime dt, something like that, we don't have this term. This can be confusing. But think about it. Now, we laid it out and just compare. So minus alpha times X of t is equal to a prime of t over a(t) times X(t). And your second term, sigma*dB(t) is equal to a(t) times b(t) dB(t). But the X(t)'s cancel. And we see that a prime of t over a(t) is minus alpha, so a(t) has to be e to the minus alpha t. This says that is an exponential. Now, plug it in here. You get b(t) equals sigma e to the alpha t. And that's it. So plug it back in. X of t is e to the minus alpha*t times x of 0 plus the integral from 0 to t of sigma e to the alpha*s dB(s). So this term has expectation 0, because that's a Brownian motion integral. This term, as we expected, as time passes, goes to 0, exponential decay. And that is kind of hinted by this fact, the mean reversion. So if you start from some value, at least the drift term will go to 0 quite quickly. And then the important term will be the noise term or the variance term. Any questions? And I'm really emphasizing a lot of times today, but really you can forget about what I did in the past two boards, this board and the previous board.
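A simulation sketch of the Ornstein-Uhlenbeck process, using the exact one-step transition implied by the formula just derived (the parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, x0 = 2.0, 0.5, 3.0      # start away from 0 to see mean reversion
T, n = 5.0, 5000
h = T / n

# Exact discretization of dX = -alpha*X dt + sigma dB:
# X(t+h) = e^{-alpha h} X(t) + Gaussian noise with
# variance sigma^2 (1 - e^{-2 alpha h}) / (2 alpha)
decay = np.exp(-alpha * h)
noise_sd = sigma * np.sqrt((1 - np.exp(-2 * alpha * h)) / (2 * alpha))

X = np.empty(n + 1); X[0] = x0
for i in range(n):
    X[i + 1] = decay * X[i] + noise_sd * rng.normal()

print(X[0], X[-1])                             # drifts from 3.0 toward 0
print(X[n//2:].var(), sigma**2 / (2 * alpha))  # long-run variance, roughly
```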
Because most of the times, it will be useless. And so now I will describe what you'll do if you're given a stochastic differential equation, and you have a computer in front of you. What if such a method fails? And it will fail most of the time. That's when we use these techniques called finite difference method, Monte Carlo simulation, or tree method. The finite difference method, you probably already saw it, if you took a differential equation course. But let me review it. This is for PDEs or ODEs, for ODE, PDE, not stochastic differential equations. But it can be adapted to work for stochastic differential equations. So I'll work with an example. Let u prime of t be 5 times u(t) plus 2, and u(0) is equal to 0. Now, this has an exact solution. But let's pretend that there is no exact solution. And if you want to do it numerically, you want to find the value of u(1) numerically. And here's what you're going to do. You're going to chop up the interval from 0 to 1 into very fine pieces. So from 0 to 1, chop it down into tiny pieces. And since I'm in front of a blackboard, my pieces will be at 1 over 2 and 1. I'll just take two steps. But you should think of it as really repeating this a lot of times. I'll call my step to be h. So in my case, I'm increasing my steps by 1 over 2 each time. So what is u of 1 over 2? Approximately, by Taylor's formula, it's u(0) plus 1/2 times u prime of 0. That's Taylor approximation. OK, u(0) we already know. It's given to be equal to 0. u prime of 0, on the other hand, is given by this differential equation. So it's 1 over 2 times, 5 times u(0) plus 2. u(0) is 0. So we get a value equal to 1. So we have this value equal to 1, approximately. I don't know what happens. But it will be close to 1. And then for the next thing, u(1). This one, again by Taylor approximation, is u of 1 over 2 plus 1 over 2 times u prime of 1 over 2. And now you know the value of u of 1 over 2, the approximate value, by this. So plug it in. You have 1 plus 1 over 2 times, again, 5 times u(1/2) plus 2. If you want to do the computation, it should give 9 over 2. It's really simple. The key idea here is just u prime is given by an equation, this equation. So you can compute it once you know the value of u at that point. And basically, the method is saying take h to be very small, like 1 over 100. Then you just repeat it 1 over 100 times. So the equation is the i plus 1 step value can be approximated from the i-th value plus h times u prime at the i-th point. Now repeat it and repeat it. And you reach u of 1. And there is a theorem saying, again, if the differential equation is reasonable, then that will approach the true value as you take h to be smaller and smaller. That's called the finite difference method for differential equations. And you can do the exact same thing for two variables, let's say. And what we showed was for one variable, finite difference method, we want to find the value of u, function u of t. We took values at 0, h, 2h, 3h. Using that, we did some approximation, like that, and found the value. Now, suppose we want to find, similarly, a two-variable function, let's say v of t and x. And we want to find the value of v of 1, 1. Now the boundary conditions are these. We already know these boundaries. I won't really show you by example. But what we're going to do now is compute this value based on these two values. So it's just the same. Taylor expansion for two variables will allow you to compute this value from these two values.
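The blackboard computation, and its refinement for small h, in a few lines (the exact solution u(t) = (2/5)(e^{5t} - 1) is used only for comparison):

```python
import numpy as np

def euler(h):
    # Forward Euler for u'(t) = 5*u(t) + 2, u(0) = 0, integrated to t = 1
    u, steps = 0.0, int(round(1 / h))
    for _ in range(steps):
        u += h * (5 * u + 2)
    return u

exact = 0.4 * (np.exp(5) - 1)        # u(1) is about 58.96
print(euler(0.5))                    # 4.5, the two-step blackboard answer
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, euler(h), exact)        # approaches the true value as h shrinks
```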
Then compute this from these two, this from these two, and just fill out the whole grid like that, just fill out layer by layer. At some point, you're going to reach this. And then you'll have an approximate value of that. So you chop up your domain into fine pieces and then take the limit. And in most cases, it will work. Why does it not work for stochastic differential equations? It kind of works, but the only problem is we don't know which value we're looking at, which one we're interested in. So let me phrase it a little bit differently. You're given a differential equation of the form dX equals mu dt plus sigma dB of t, where mu and sigma depend on the time variable and the space variable. Now, if you want to compute your value at time 2h based on the value at h, in this picture, I told you that this point came from these two points. But when it's stochastic, it could depend on everything. You don't know where it came from. This point could have come from here. It could have come from here. It could have come from here, come from here. You don't really know. But what you know is you have a probability distribution. So what I'm trying to say is now, if you want to adapt this method, what you're going to do is take a sample Brownian motion path. That means just, according to the distribution of the Brownian motion, take one path and use that path. Once we fix a path, once a path is fixed, we can exactly know where each value comes from. We know how to backtrack. That means, instead of all these possibilities, we have one fixed possibility, like that. So just use that finite difference method with that fixed path. That will be the idea. Let me do it a little bit more formally. And here is how it works. If we have a fixed sample path for Brownian motion of B(t), then X at time i plus 1 of h is approximately equal to X at time i of h plus h times d of X at that time i of h, just by the exact same Taylor expansion. And then d of X we know to be-- that is equal to mu times dt plus-- oh, dt is h. So let me write it like that-- plus sigma times dB. And these mu and sigma depend on the time i of h and the value X at i of h. With that, here to here is Taylor expansion. Here to here I'm going to use the differential equation d of X is equal to mu dt plus sigma dB(t). Yes? AUDIENCE: Do we need that h for [INAUDIBLE]? PROFESSOR: No, we don't actually. Oh, yeah, I was-- thank you very much. That was what confused me. Yes, thank you very much. And now we can compute everything. This one, we're assuming that we know the value. That one can be computed from these two coordinates. Because we now have a fixed path, we know what X of i*h is. dt, we took it to be h, approximated as h, or the time difference. Again, sigma can be computed. dB now can be computed from B(t). Because we have a fixed path, again, we know that it's equal to B of i plus 1 of h minus B of i of h, with this fixed path. They're basically exactly the same, if you have a fixed path B. The problem is we don't have a fixed path B. That's where Monte Carlo simulation comes in. So Monte Carlo simulation is just a way to draw, from some probability distribution, a lot of samples. So now, if you know how to draw samples from the Brownian motions, then what you're going to do is draw a lot of samples. For each sample, do this: from the value of X(0), compute X of 1. So, according to a different B, you will get a different value. And in the end, you'll obtain a probability distribution. So by repeating the experiment, that means just redraw the path again and again, you'll get different values of X of 1.
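A sketch of the whole procedure -- the Euler scheme driven by freshly drawn Brownian increments, repeated over many paths; the coefficient functions here are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def mu(t, x):    return -x          # illustrative drift, mean-reverting
def sigma(t, x): return 0.3         # illustrative diffusion coefficient

h, n, n_paths = 0.01, 100, 10_000   # n*h = 1, so we simulate up to time 1
X = np.full(n_paths, 1.0)           # X(0) = 1 for every path

for i in range(n):
    dB = rng.normal(0.0, np.sqrt(h), n_paths)   # B((i+1)h) - B(ih), each path
    X += mu(i * h, X) * h + sigma(i * h, X) * dB

# X now holds samples of X(1); its histogram approximates the distribution
print(X.mean(), X.std())
```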
That means you get a distribution of X of 1, you obtain the distribution of X of 1. And that's it. And that will approach the real distribution of X of 1. So that's how you numerically solve a stochastic differential equation. Again, there's this finite difference method that can be used to solve differential equations. But the reason it doesn't apply to stochastic differential equations is because there's underlying uncertainty coming from Brownian motion. However, once you fix a Brownian motion, then you can use that finite difference method to compute X of 1. So based on that idea, you just draw a lot of samples of the Brownian path, compute a lot of values of X of 1, and obtain a probability distribution of X of 1. That's the underlying principle. And, of course, you can't do it by hand. You need a computer. Then, what is the tree method? That's cool. The tree method is based on this idea. Remember, Brownian motion is a limit of simple random walk. This gives you a kind of approximate way to draw a sample from Brownian motions. How would you do that? At time 0, you have 0. At a really tiny time h, you'll have plus 1 or minus 1 with the same probability. And it goes up or down again, up or down again, and so on. And you know exactly the probability distribution. So it ends up here with probability 1/2, ends up here with probability 1/2; then 1/4, 1/2, 1/4; and so on. So instead of drawing from this sample path, what you're going to do is just compute the value of our function at these points. And because we know the probability distribution that the path will end up at these points, suppose that you computed all these values here. I drew too many-- 1, 2, 3, 4, 5. There's 1 over 32 probability here. 5 choose 1 gives 5 over 32, 5 choose 2 gives 10 over 32. Suppose that some stochastic process, after following this, has value 1 here, 2 here, 3 here, 4, 5, and 6 here. Then, approximately, if you take a Brownian motion, it will have 1 with probability 1 over 32, 2 with probability 5 over 32, and so on. Maybe I didn't explain it that well. But basically, the tree method just says, you can discretize the outcome of the Brownian motion, based on the fact that it's a limit of simple random walk. So just do the exact same method for simple random walk instead of Brownian motion. And then take it to the limit. That's the principle. Yeah. Yeah, I don't know what's being used in practice. But it seems like these two are the more important ones. This is more like if you want to do it by hand. Because you can't really do every single possibility. That leaves you with only finitely many possibilities. Any questions? Yeah. AUDIENCE: So here you said, by repeating the experiment we get [INAUDIBLE] distribution for X(1). I was wondering if we could also get the distribution for not just X(1) but also for X(i*h). PROFESSOR: All the intermediate values? AUDIENCE: Yeah. PROFESSOR: Yeah, but the problem is we're taking different values of h. So h will be smaller and smaller. But for those values that we took, yeah, we will get some distribution. AUDIENCE: Right, so we might have distributions of X at many different points, right? PROFESSOR: Yeah. AUDIENCE: Yeah. So maybe we could uh-- right, OK. PROFESSOR: But one thing you have to be careful about is, let's suppose you take h equal 1 over 100. Then, this will give you a fairly good approximation for X of 1. But it won't give you a good approximation for X of 1 over 50. So probably you can also get distributions for X of 1 over 3, 1 over 4.
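Those endpoint probabilities are just binomial coefficients over 2^n; for the 5-step picture:

```python
from math import comb

n = 5
# After n steps of a +/-1 random walk, the endpoint n - 2k has probability C(n, k)/2^n
for k in range(n + 1):
    print(n - 2 * k, comb(n, k) / 2**n)   # 1/32, 5/32, 10/32, 10/32, 5/32, 1/32
```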
But at some point, the approximation will be very bad. So the key is to choose the right h. Because if you pick h to be too small, you will have a very good approximation to your distribution. But at the same time, it will take too much time to compute it. Any remarks from a more practical side? OK, so that's actually all I wanted to say about stochastic differential equations. Really the basic principle is there is such a thing called a stochastic differential equation. It can be solved. But most of the time, it won't have a closed form formula. And if you want to do it numerically, here are some possibilities. But I won't go any deeper inside. So the last math lecture I will conclude with the heat equation. Yeah. AUDIENCE: The mean computations of [INAUDIBLE], some of the derivatives are sort of path-independent, or have path-independent solutions, so that you basically are looking at say the distribution at the terminal value and that determines the price of the derivative. There are other derivatives where things really are path-dependent, like with options where you have early exercise possibilities. When do you exercise, early or not? Then the tree methods are really good because at each element of the tree you can condition on whatever the path was. So keep that in mind, that when there's path dependence in the problem, you'll probably want to use one of these methods. PROFESSOR: Thanks. AUDIENCE: I know that if you're trying to break it down into simple random walks you can only use [INAUDIBLE]. But I've heard of people trying to use, instead of a binomial, a trinomial tree. PROFESSOR: Yes, so this statement actually is quite a universal statement. Brownian motion is a limit of many things, not just simple random walk. For example, if you take plus 1, 0, or minus 1 and take it to the limit, that will also converge to the Brownian motion. That will be the trinomial and so on. And as Peter said, if you're going to use a tree method to compute something, that will increase accuracy, if you take more possibilities at each step. Now, there are two ways to increase the accuracy: take more possibilities at each step, or take smaller time steps. OK, so let's move on to the final topic, the heat equation. The heat equation is not a stochastic differential equation, first of all. It's a PDE. That equation is known as the heat equation, where t is like the time variable, x is like the space variable. And the reason we're interested in this heat equation in this course is, if you came to the previous lecture, maybe from Vasily last week, the Black-Scholes equation, after a change of variables, can be reduced to the heat equation. That's one reason we're interested in it. And this is a really, really famous equation also in physics. So it was known before the Black-Scholes equation. In particular, this equation models the following situation. So you have an infinite bar, very long and thin. It's perfectly insulated. So heat can only travel along the x-axis. And then at time 0, you have some heat distribution. At time 0, you know the heat distribution. Then this equation tells you the behavior of how the heat will be distributed at time t. So u of t of x, for fixed t, will be the distribution of the heat over the x-axis. That's why it's called the heat equation. That's where the name comes from. And this equation is very well understood. It does have a closed-form solution. And that's what I want to talk about. OK, so a few observations before actually solving it.
Remark one, if u_1 and u_2 satisfy the heat equation, then u_1 plus u_2 also does. That's called linearity. Just plug it in. And you can figure it out. More generally that means, if you integrate a family of solutions u_s over s, where the u_s all satisfy star, then this also satisfies star, as long as you use reasonable functions. I'll just assume that we can switch the order of integration and differentiation. So it's the same thing. Instead of summation, I'm taking an integration of a lot of solutions. And why is that helpful? This is helpful because now it suffices to solve for-- what is it? The initial condition u of 0, x equals delta, the delta function at 0. That one is a little bit subtle. The Dirac delta function is just like an infinite mass at x equals 0. It's 0 everywhere else. And basically, in this example, what you're saying is, at time 0, you're putting like a massive amount of heat at a single point. And you're observing what's going to happen afterwards, how this heat will spread out. If you understand that, you can understand all initial conditions. Why is that? Because if u_0 of t, x is such a solution, then the integration of-- let me get it right-- v of s times u_0 of t, x minus s, ds is a solution with initial condition u of 0, x equals v of x. So this is really the key. If you have a solution to the Dirac delta initial condition, then you can superimpose a lot of those solutions to obtain a solution for an arbitrary initial condition. So this is based on that principle, because each of them is now a solution. If you superpose this, then that is a solution. And then if you plug it in, you figure out that actually it has satisfied this initial condition. That was my first observation. Second observation, second remark, is for the initial value u(0, x) equals the Dirac delta function, u of t, x equals 1 over the square root of 2 pi t, times e to the minus x squared over 2t, is a solution. So we know the solution for the Dirac delta part. First part, we figured out that if we know the solution for the Dirac delta function, then we can solve it for every single initial value. And for the initial value Dirac delta, that is the solution that solves the differential equation. So let me say a few words about this equation, actually one word. Have you seen this equation before? It's the p.d.f. of the normal distribution. So what does it mean? It means, in this example, if you have heat traveling along the x-axis, perfectly insulated, if you put a massive amount of heat at 0, at one point, at time 0, then at time t your heat will be distributed according to the normal distribution. In other words, assume that you have a bunch of particles. Heat is just like a bunch of particles, say millions of particles at a single point. And then you grab it. And then at time t equals 0 you release it. Now the particles at time t will be distributed according to a normal distribution. In other words, each particle is like a Brownian motion. So particle by particle, the location of each particle at time t will be kind of distributed like a Brownian motion. So if you have a massive amount of particles, altogether their distribution will look like a normal distribution. That's like its content. So that's also one way you see the appearance of a Brownian motion inside of this equation. It's like a bunch of Brownian motions happening together at the exact same time. And now we can just write down the solution. Let me be a little bit more precise. OK, for the heat equation del u over del t equals 1 over 2, del squared u over del x squared, with initial value u of 0, x equals some initial value, let's say v of x, and t greater than or equal to 0.
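A numerical check of this remark, written assuming the equation is del u over del t equals one half del squared u over del x squared; the 1/2 matches the Brownian-motion picture (variance t at time t), and other conventions just rescale time:

```python
import numpy as np

# Heat kernel for u_t = 0.5 * u_xx: the N(0, t) density
def kernel(t, x):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# Check the PDE at one point by finite differences
t, x, dt, dx = 1.0, 0.7, 1e-5, 1e-3
u_t = (kernel(t + dt, x) - kernel(t - dt, x)) / (2 * dt)
u_xx = (kernel(t, x + dx) - 2 * kernel(t, x) + kernel(t, x - dx)) / dx**2
print(u_t, 0.5 * u_xx)                      # nearly equal

# Superposition: u(t, x) = integral of v(s) * kernel(t, x - s) ds
v = lambda s: np.where(np.abs(s) < 1, 1.0, 0.0)   # illustrative initial profile
s = np.linspace(-10, 10, 4001)
ds = s[1] - s[0]
u_at = lambda xq: np.sum(v(s) * kernel(1.0, xq - s)) * ds
print(u_at(0.0))                            # ~0.68 = P(|Z| < 1): heat has spread
```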
The solution is given by integration: u at t, x is equal to the integral of v of s, times 1 over the square root of 2 pi t, times e to the minus (x minus s) squared over 2t, ds. Basically, I'm just combining this solution with that one. Plugging that in here, you get this. So you have an explicit solution: no matter what the initial condition v is, you can find an explicit solution at time t for all x. That means, once you change the Black-Scholes equation into the heat equation, you now have a closed-form solution for it. In that case, it's like a backward heat equation. And what will happen is the initial condition you should think of as a final payout function. The final payout function you integrate according to this distribution. And then you get the value at time t equals 0. As for the details, one of the final projects is to actually carry out all the details. So I will stop here. Anyway, we didn't see how the Black-Scholes equation actually changes into the heat equation. If you want to do that project, it will be good to have this in mind. It will help. Any questions? OK, so I think that's all I have. I think I'll end a little bit early today. So that will be the last math lecture for the semester. From now on you'll only have application lectures. There are great lectures coming up, I hope, and I know. So you should come and really enjoy now. You went through all this hard work. Now it's time to enjoy.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
10_Regularized_Pricing_and_Risk_Models.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Our guest speaker today from Morgan Stanley, Ivan Masyukov. Dr. Ivan Masyukov. IVAN MASYUKOV: Hello. One, two, three. Can you hear me? PROFESSOR: And the microphone will just be recording you, but it doesn't broadcast you. IVAN MASYUKOV: Ah. Understood. All right. So I'm Ivan Masyukov. I work in Morgan Stanley. And my background is applied physics and mathematics from Moscow Institute of Physics and Technology. And today, the topic of the lecture is regularized pricing and risk models. So we will talk about typical pricing risk models for interest rate products, and the important aspect of adding some additional constraints, which means, like, adding some regularizers to the model. So we will start from bonds, which is probably the most simple interest rate product on the market. Then we will discuss swaps. We will build a yield curve. And we will see how yield curve models can be improved to satisfy the needs of an actual trader. And at the end, we'll look at a very nice example of an ill-posed problem of calibrating the two-dimensional volatility surface necessary for a volatility model-- Monte Carlo simulation. And we will see how that problem can be solved. During the lecture, if you have any questions, please interrupt, OK? So what is a bond? A bond is a security which is issued when someone, like a borrower, needs money. And it promises to pay certain fixed cash flows in the future, and requests some money up front for this. So typical bonds basically include the same periodic payment-- let's say like every half year or every year until maturity, where at maturity the face value is paid, like the biggest sum of money. And again, at the beginning, the investor is asked to pay some amount up front. There are also zero-coupon bonds, which don't pay anything until the maturity. And there are very interesting perpetual bonds where basically you pay some money up front, and then it pays you back, like, infinitely-- which sounds like a good deal, but we will learn how to price it right. So those are some diagrams. So the first one is the standard fixed-rate bond, where small green arrows represent a periodic payment. And there is a face value added on top of a periodic payment at the maturity of the bond. So this is a typical cash flow diagram used for analysis, OK? And so arrows up represent something that-- and it's green, right? That is good for us. So it's something that we receive. And a red arrow facing down represents something that you have to pay. Right? So a zero-coupon bond, as I said before, is something where you pay up front, and you get back a fixed amount of money in the future. What's interesting about this graph-- you can see that the green arrow has a bigger amplitude than the red one, which means that you kind of, every time you put like $100 now, right, you kind of expect that in return you get more in the future. Because if you don't get more in the future, you just don't put in this money. You just keep it in your pocket. So as a result, you get the concept of time value of money. So $100 today will always be worth more than $100 tomorrow.
And also, if you look at the graph of the fixed-rate coupon bond, and you sum all of the cash flows here, it looks like you get more than this red one. But again, the further in the future the cash flow is, the more the kind of depreciation. And we call this depreciation a discount factor, OK? So basically the more in the future the cash flow is, the smaller the discount factor. And so for today the discount factor will be 1, for tomorrow it will be like 0.999, and so forth. And in 30 years, let's say, it will probably be like 0.1, depending on current rates in the market. So let's see how we can price the bond-- or not necessarily price, but compute a fair value of future cash flows. So the fair value of future cash flows can be found if we have discount factors. So every cash flow in the future, i-- which in this particular case will be a coupon times the face value-- should be multiplied by its discount factor. And then we also add the face value discounted with the discount factor at the maturity of the instrument. So the way this product trades in the market is that people buy and sell bonds paying P, right? So it's very important to understand that for bonds, it's not something that we have cash flows which we kind of need to price. Actually, the price is already known. So it's very liquid. It's the result of activity in the market, meaning that there is very little uncertainty about the price. So this P is known. And as with all cash flows, it's something that's written in the contract, right? So it's something, we have fixed cash flows in the future. So it's always a question about what kind of model is useful for the discount factors. So we need a model for discounting. Any questions so far? So one of the simplest models is to use just one parameter to kind of cover all the discounting. And the discount factor can be represented as e to the minus y times t sub i, where y is some kind of-- it's called yield to maturity. Well, the reason why it's exponential, it's natural, right? So if you have a 0.999 discount factor for today, and then we kind of say, OK, it's the same discount for tomorrow, we will have the same discounting for every other day. So we have to multiply them. As a result, the total discounting will be an exponential. So if our discount factors are like this, then our price basically can be represented as a linear combination of future cash flows, right? At this point, by the way, we kind of merge together the final coupon with the face value, and we'll just kind of be talking about the coupons only, about cash flows only. And so that's the formula for the bond price, is this. So basically, what's known on the market is P, right, which is a price that's-- that instrument is traded. We also have defined cash flows in the future. So we can solve for the yield. So essentially, if we know the bond price, we can find the bond yield, OK? And if we know the bond yield, we can find the bond price, OK? So typically, bonds are traded in terms of their price. But some bonds are traded in terms of yield. But again, this is like one-to-one. You can always go back and forth. What's important is-- what has economic value, right, is the bond price, OK, and the future amounts of cash flows. And when you talk about yield, it's not something that's traded. It's actually one of the ways to align future cash flows with the bond price. And that way assumes that we have, again, constant discounting for all time points in the future.
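A sketch of that one-to-one correspondence for a hypothetical bond -- price from yield directly, yield from price by bisection (monotonicity makes this safe):

```python
import numpy as np

# Hypothetical bond: 5% annual coupon, 10 years, face value 100
times = np.arange(1.0, 11.0)            # payment dates t_i in years
cfs = np.full(10, 5.0); cfs[-1] += 100  # coupons, plus face value at maturity

def price(y):
    # P = sum of c_i * e^{-y * t_i}
    return float(np.sum(cfs * np.exp(-y * times)))

def yield_from_price(P, lo=-0.5, hi=1.0):
    # Price is monotone decreasing in y, so bisection converges
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if price(mid) > P: lo = mid      # price too high => yield must be higher
        else: hi = mid
    return 0.5 * (lo + hi)

P = 95.0                                # market price, known; solve for y
y = yield_from_price(P)
print(y, price(y))                      # price(y) reproduces 95.0
```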
We will see that this may or may not be the best choice. Now, what's also important when we're talking about an instrument's price is to have a model of how that price changes if the market changes. Here, we're talking about the sensitivity of the bond price to the yield. What is typically done is to normalize by the bond price itself, and then it's called the bond duration. The nice thing about normalizing is that the duration of the bond you have in your portfolio doesn't depend on how many bonds you have; it's a property of the bond itself, rather than of your position size.

So if you take the previous formula and take the derivative with respect to y, we get the following formula for duration. We know what the price is, and we can rewrite the formula this way, which you see is a sum of the t_i's times some weights, divided by the sum of the weights. So it's essentially a weighted average of time, where the weights are proportional to the present values of the future cash flows: the moments of time carrying more present value matter more. That's why bond duration has a very nice intuitive sense.

And yes, I forgot to mention one thing: there is a minus sign here, because the relationship is always negative. If the bond price goes up, the yield goes down, and if the yield goes up, the price goes down. The explanation is very simple: the yield is essentially the same thing as the interest rate on the market. If rates go up, there is more discounting of the future cash flows, so they are less valuable to me, so I'll be less willing to pay for them. It's fundamental that this relationship has a negative sign.

In the case of a zero-coupon bond, we only have one cash flow in the future, so there is just one weight, and that weight is totally assigned to that last cash flow. So the duration of a zero-coupon bond equals its maturity. The duration of a regular coupon bond depends on the coupons, but it's always less than the maturity, just because we have a weighted-average formula here.

Essentially, this model for the bond duration assumes that all rates move in a parallel way, since we have just one yield number for everything. That was OK before the crisis, when rates today were similar to the rates expected in the future. But it's no longer the case. Rates now are higher than one year ago, but they're still much lower than the rates expected in the future; we expect rates to go much higher, so the curve is very steep at the moment. So the model of just one number for everything might not be adequate, and we'll see how we can improve the situation.

It's also worth mentioning the second derivative. We already spoke about the first derivative of the price with respect to yield. For small changes in the yield, you can assume the relationship is linear, so it's OK to use just the first derivative. The second derivative becomes necessary for larger movements of the market.
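As a small numerical companion to the duration formula just derived: duration as the PV-weighted average of payment times, with the second derivative (convexity) computed alongside it. This is a sketch under the same one-yield model, with the same illustrative bond as before.

```python
import numpy as np

def duration_convexity(y, times, cash_flows):
    """Duration D = -(1/P) dP/dy and convexity C = (1/P) d2P/dy2
    under flat discounting d(t) = exp(-y * t)."""
    t = np.asarray(times)
    c = np.asarray(cash_flows)
    pv = c * np.exp(-y * t)                 # present value of each cash flow
    price = pv.sum()
    duration = np.sum(t * pv) / price       # PV-weighted average of times
    convexity = np.sum(t**2 * pv) / price
    return price, duration, convexity

# Same hypothetical 3-year, 3% coupon bond as in the earlier sketch.
print(duration_convexity(0.025, [1, 2, 3], [3, 3, 103]))

# Zero-coupon sanity check: duration equals maturity.
print(duration_convexity(0.025, [10.0], [100.0])[1])   # -> 10.0
```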
As an example: if you're a trader, the bond trades; we call it a cash product, meaning you don't actually need any model to price it, you already have the price. But then you try to explain why you might have lost money today, and the trader always does that at the end of the day, and we always use first derivatives. We try to explain the P&L, but there is also an unexplained part, and that unexplained part can be quite high with large market movements. So if you have a term in your analytics for the bond convexity, that lets you include the second derivative, and therefore makes the unexplained part smaller.

So let's now talk about interest rate swaps. A bond's cash flow is a stream of fixed cash flows: for certain dates, it's guaranteed that you will be getting, say, $100 with a certain periodicity. In a swap, you exchange fixed payments for floating ones. Floating means that the amount of money that you'll be receiving or paying depends on some market observable. For an interest rate swap, and let's focus on the USD market, it will typically be the three-month LIBOR rate. That rate is published daily; it's like the rate you'd be quoted if you went to the bank and put money in a three-month CD. It's called LIBOR because it's a rate between banks, and it's set at 11 AM London time.

So we already know how to price fixed cash flows in the future: the present value of the fixed stream of payments looks as before, and there is a floating stream of cash flows as well. The nice thing about a swap is that when you enter it, you don't pay any money; you just enter an agreement, whereas when you buy or sell a bond, there is an exchange of money. Swaps are designed so that at the moment you make the agreement, the fixed rate of the swap is picked in such a way that the present value of the fixed minus the floating cash flows nets to zero.

If we rewrite those equations, we can see that the swap rate, which is the most important quantity of the swap and the one traders are most concerned with, is a weighted average of forward rates. You first need to define what the swap is: for USD you'd say, for example, a 10-year swap, and this is the rate. The trader continuously quotes bid and offer levels of the swap rate; no one is talking about PVs and things like that, it's always the swap rate. And the weighted-average form has a very nice intuitive explanation: you have a stream of floating, variable cash flows which will probably be low now, higher in 10 years, and much higher in 30 years, so the swap rate for this environment is a kind of average, where the weights depend on the discount factors.

Later, we will see that, because a bond has fixed cash flows in the future and a swap exchanges fixed for floating, a swap can be hedged with a bond. You know what the term "hedged" means? Hedging means that if you just have, say, a swap, and the market changes, you can lose money. A typical task for the market maker, the trader, is to offset that risk with something. Ideally, you sold one swap and you bought another similar swap at a different rate, so you locked in your profit but you remain with zero risk.
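The par-rate formula just described lends itself to a short sketch: the swap rate comes out as a discount-factor-weighted average of forward rates, i.e. the rate making PV(fixed) = PV(floating). This is the single-curve picture used in the lecture, and all inputs are hypothetical.

```python
import numpy as np

def par_swap_rate(discount_factors, forward_rates, accruals):
    """Par rate S = sum(w_i * f_i) / sum(w_i) with weights
    w_i = accrual_i * discount_factor_i: a weighted average of forwards."""
    d = np.asarray(discount_factors)
    f = np.asarray(forward_rates)
    a = np.asarray(accruals)
    w = a * d
    return float(np.sum(w * f) / np.sum(w))

# Hypothetical 2-year swap, quarterly periods, upward-sloping forwards.
alphas = np.full(8, 0.25)
fwds = np.linspace(0.01, 0.03, 8)              # forwards rising from 1% to 3%
dfs = np.exp(-0.02 * 0.25 * np.arange(1, 9))   # rough discounting at 2%
print(par_swap_rate(dfs, fwds, alphas))        # lands between 1% and 3%
```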
So let's try to construct a yield curve. Why do we need a yield curve? When we have, let's say, a series of swaps with different maturities, all those swaps start today; a swap will usually have quarterly payments on the floating leg and six-month payments on the fixed leg, and the swaps have different maturities. But if you try to extract discount factors from that information, you will see that you can get those discount factors only for certain dates. The typical situation, though, is that given some liquid market instruments, you want to price your entire portfolio, which has a continuous spectrum of cash flows from now out to 30 or 40 years. A typical swap portfolio that I personally deal with on a daily basis contains hundreds of thousands of swaps, and every swap has many cash flows. So you need something that, based on the discrete information of reliable, liquid instruments on the market, draws the line, constructs the curve, so that you are able to get a discount factor for any potential day in the future, or compute the forward rate for any date in the future.

The first step in constructing a yield curve is to select input instruments for calibration. So you have a set of instruments and a set of input quotes. Then you also need to decide what the properties of that line will be. First, you decide what quantity will be interpolated: it could be daily discount factors, or daily forward rates, or maybe three-month forward rates. Then you select the spline type. I'm not sure if you're familiar with splines; you've probably heard about the cubic spline. There are different types of splines, and some are better and some are worse for different situations. You also need to decide what the node points for the spline will be. And then, as a final step, you have a mathematical object where you know what the line is and you have control points, and you need to adjust your control points such that when you reprice your instruments, they are repriced exactly to the quotes that you find on the market.

You have a question.

AUDIENCE: That spline, again, is it just like a--

IVAN MASYUKOV: All right, so let me show you. This is a picture of a cubic spline. A spline is a way to draw a smooth curve, and this is an example of a cubic one. You start by defining your node points; in this case they are 1, 10, 20, 40, 80, 160, and 240. Then on each of those intervals, the functional form of the curve is a cubic polynomial. Now, if you just fit a cubic polynomial on every interval without putting additional constraints, you can get all kinds of boundary effects: jumps, kinks, and other things. We want our cubic spline to be meaningful, so we want to preserve the maximum number of continuous derivatives at every node point. We're not going to check it here, but believe me, this curve is a cubic polynomial on every one of those intervals, and it also has two continuous derivatives at every node point, because for a spline of degree n, you can have at most n minus 1 continuous derivatives at the nodes.
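As a concrete illustration of the spline step, here is a sketch using SciPy's cubic spline to interpolate, say, a forward-rate curve between node points. The nodes and values are made up; the point is the property just described: a cubic polynomial on each interval with two continuous derivatives at every interior node.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical node points (in years) and forward rates at those nodes.
nodes = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 30.0])
fwd = np.array([0.005, 0.010, 0.020, 0.028, 0.030, 0.029])

# Piecewise-cubic interpolant, C2-continuous across the interior nodes.
curve = CubicSpline(nodes, fwd)

t = 7.3
print(curve(t))      # interpolated forward rate at an arbitrary date
print(curve(t, 1))   # first derivative, which is also continuous
```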
The same spline can also be represented in terms of B-splines. A B-spline is not a new type of spline; it's just a representation which is more intuitive, I should say. The entire universe of curves with those node points and two continuous derivatives can be represented as a linear combination of those basis functions. If you're interested, how to build B-splines is a nice separate topic which we're not going to discuss in detail. But essentially, and "B," as you probably already understood, stands for basis, you have basis functions that look like bell shapes. They are non-zero only on some sub-interval; on every interval each one is a cubic polynomial, and each one has two continuous derivatives everywhere. As a result, any linear combination of them, which is what the first curve is, will also have those properties.

OK. So "calibrate" means that we have some solver to make sure that the swaps with the quoted rates for those maturities are actually repriced at par. At par means that the PV is zero. This is a typical example of yield curve instruments: IRS stands for "interest rate swap," and we have maturities from one year to 30 years, with quotes from 0.33% up to 2.67%. That's actually from my one-year-old presentation; rates are quite a bit higher these days.

This is an example of the yield curve graph. The rates go from 0.3% to 3.5%, and the shape of the curve is not flat at all; it's actually pretty steep. For the first five years it's very steep, then it reaches a plateau, and then there is some feature there, probably because of some behavior in the 20-year region. The three-month forward rate is the LIBOR rate. Three months is the most common tenor because the standard USD interest rate swap has a three-month payment frequency on the floating leg. So when we're talking about floating rates here, it's always three months, and it's always LIBOR.

So, because we've already built the curve, let's now see how we can improve the situation with a bond. We have the curve, so we have the discount factors, and we see that those discount factors cannot be obtained under the assumption of a single yield parameter for everything, because we know the curve is not flat. If we just price the bond using the curve's discount factors to get a fair price, we probably won't match the market observables. So we need some extra term. Here we can use it in a form similar to what we did for the yield, but now it's going to be a small correction on top of the yield curve, rather than the really rough assumption that the curve is flat. Typically, if the curve's magnitude is, say, 3%, the spread is maybe 100 times smaller, and having a small correction is always better. Another nice feature of this approach is that if we have already built our yield curve model, and we know the sensitivities of our portfolio to the inputs of the curve, which then translate into differences in discount factors, we can easily apply that to the bond. We first find what the spread parameter is by solving for s knowing P, which is a very liquid market observable. And then we have a consistent model for the bonds and the swaps in our portfolio. Any questions?

AUDIENCE: Yes. So what does the bond spread tell us about the bond?

IVAN MASYUKOV: That's a very good question. It might tell us something like bond liquidity, for example.
If the bond is not liquid, or there is something else going on, the spread may be related to the bond itself. Sometimes we think of the bond as riskless, especially if it's issued by the US government, so that the cash flows in the future can be assumed guaranteed; then I'm willing to pay a certain amount implied by the discount factors. But if you tell me that you will pay me that money in the future, I won't be so certain, so I'll need to add some kind of credit spread for that; we call it a credit spread. That credit spread will propagate into the spread number. On the other hand, if the bond really is US government issued and considered guaranteed, then the spread may be a feature of the swap side, just because of some liquidity situation in the swap market. For example, suppose all of a sudden all the option traders on the street need this 10-year swap, because they need to hedge certain very popular volatility products; they start to buy it, and that spread will change.

What's even more interesting is that the spread is tradable by itself. You can go to the market and trade the spread. Moreover, look at the 10-year situation: you have the 10-year bond on the market, you have the tradable swap, and you have the tradable spread. The question is, which one is the most liquid? What do you think? The most liquid is the bond, of course; it has much more liquidity. Surprisingly, the second most liquid is the spread: the spread between the 10-year swap and the bond is traded in the market, and there are more transactions in the spread than in the swap itself. As a result, when we build our curves, we don't take the 10-year swap rate from the market directly; we actually take the bond yield and the spread, and that's how we define the most reliable level of the swap. Of course, we could have just taken whatever we observe for the 10-year swap, but it could be off, and it would also have a wider bid-offer.

As an example, let's try to shift one of the inputs of the curve by one basis point. That results in this kind of deviation of the forward rates, which is a combination of basis splines. What's interesting, first of all, is the complicated behavior. The reason is that you are saying that nothing changed before the ninth year and nothing changed after the ninth year, just the point in between; in order to calibrate to that kind of strange condition, you need to have a ripple here. But what's more important is that by shifting one input by one basis point, the amplitude of the shifts in the curve reaches 14 basis points. I'm not sure if you're familiar with the term, but this is an ill-posed problem: small changes in your inputs can cause large variations in your outputs.

This is a very important slide. The first column, again, shows our instruments and quotes, and this is the risk of the portfolio. That's something a trader needs no matter what: it shows what the change in your portfolio will be if the market changes. The meaning of the number, for example for the five-year, is that if the five-year rate moves up by one basis point, the portfolio loses 700K. We also marked in yellow the points that are more liquid than the others.
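Given a bucketed risk report like that, the cost of flattening it bucket by bucket is easy to sketch: each hedge trade crosses that bucket's bid-offer charge, and the charges are summed. A toy version, with invented numbers:

```python
import numpy as np

# Hypothetical bucket risks (P&L per 1bp move, in $) and
# bid-offer charges per bucket (in bp); illiquid buckets cost more.
bucket_risk = np.array([200.0, -1.3e3, -700e3, 50e3, -20e3])
bid_offer_bp = np.array([0.1, 0.2, 0.1, 0.5, 1.0])

# Cost of hedging every bucket to zero: cross the bid-offer on each one.
cost = np.sum(np.abs(bucket_risk) * bid_offer_bp)
print(cost)   # the wide-bid-offer, illiquid buckets dominate the bill
```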
So now a typical situation is that you need to hedge your portfolio: you need to liquidate your risk. I'm basically saying, given the model that we have, I want the portfolio value to be insensitive to any movements in the market. For that purpose, you can go and buy as many one-year swaps as offset the plus 200, as many two-year swaps as offset the minus 1.3, and so forth. But that always costs you money, and that money is roughly proportional to the bid-offer of the particular instrument; the bid-offer is smaller for liquid instruments and larger for less liquid ones. So if you multiply through by those charges, you can see that hedging all of our risk this way is going to be quite expensive: it will cost us 3.6 million dollars. Any questions so far?

So traders never hedge every bucket in the risk; a bucket means every line here. You always see some numbers, but if you try to make every number zero, which means that if you traded a seven-year here, you also try to go to the market and find the offsetting seven-year, you'll have to pay too much and you won't be profitable. What traders do instead is: if someone asks for the seven-year, they make the transaction, but they then hedge it with the more liquid points, which are less expensive to buy.

So we need a better model for hedging, and a general formulation of the model is presented here. We have the portfolio risk, which is just a vector here, and we have the hedging portfolio. If you have candidate instruments that you can use for hedging, their risk will be represented in the same format, in terms of sensitivities to swap rates, in a matrix H, and we have the weights x of the hedging portfolio that we need to find. You multiply H by x, you get the risk of the hedging portfolio, and you add it to the risk of your portfolio. Then you don't need to minimize against everything that could possibly happen; you need to minimize against what can actually happen on the market, the typical modes of the market. So essentially you define your market scenarios, which can be found in different ways.

One way to approach the problem is to use principal component analysis. I know you're already familiar with the SVD. If D is a matrix of historical market movements, then any matrix can be decomposed using the SVD. We can then look at the spectrum of this decomposition, look at the singular values, pick the ones that look high enough for us, and keep just that number of components. Let's say, for example, we really investigated this market and found that there are just five components that drive it, and the rest are so small as to be meaningless, and we are certain it's just five modes of market movement. Then, if we have a curve that consists of 20 points, we don't need to hedge every swap with its corresponding maturity; we can just pick five swaps that are liquid enough and cheap enough for us to hedge, and use them.

So let's look now at typical graphs of those principal components. The x-axis is the swap maturity in years, and the y-axis is some relative measure; think of it as basis points. The blue line is the first component, which is the prevalent one. You can see that while swap rates are basically flattish after 10 years, the first component is pretty steep. What it says is that the main behavior of the market is that rates now do not move, but they will move in the future.
And that's basically because the Fed is on hold: they stimulate the market in such a way that rates remain the same until some time in the future. Mode number two is a kind of tilting of the curve. Mode number three is more complex, and there are several other modes here as well.

So now, following our previous general approach to the problem, we formulate it with the PCA factors here in P. And now, because the number of factors we selected equals the number of hedging instruments, we no longer need to minimize: we can always achieve a perfect fit, an exact zero. That's why we formulate it as equal to zero, and we solve that problem.

And the hedging matrix, this is an example of the hedging matrix. What that matrix says is that if I take a one-year swap, put it in an empty portfolio, and then apply my model, I'll have sensitivity to that particular swap only. Which makes sense: since you use the same instruments to calibrate your yield curve, each one should be sensitive to itself only. That's why the matrix has ones on the diagonal and zeros otherwise.

Then, as a result, we get this matrix. It's the same portfolio that we had before, and this is our PCA matrix that translates our risk into those few numbers. It translates our raw risk, expressed in terms of the many curve inputs, into just the five most liquid ones, which are 1, 2, 5, 10, and 30 years. Our translated risk, which tells us what we need to do to hedge our portfolio, is just those numbers. And now, if we take a bid-offer charge of 0.1 basis points for those liquid instruments and multiply, we get numbers that are orders of magnitude smaller than before: something like 400, not the 3.6 million anymore. That's exactly what traders do. Different traders have different opinions about what the dynamics of the market are, but they always have some model.

So, disadvantages. The PCA model is something that is just formally tuned to historical data. I always say that if you scramble the order of the swap maturities in your model, do your computations, and then unscramble them, you get exactly the same result. Which means that the PCA model puts no constraints reflecting the fact that two-year is very close to one-year, and that two-year sits between one-year and five-year. So the hedging coefficients of that matrix are not very stable, especially for the recent modes in the market. Also, because the SVD is a least-squares kind of approximation, it's very sensitive to outliers: if there is just one event in the market, one day when rates went up and then came down significantly, it may have an unjustifiably high influence on the outputs. And if those coefficients change daily, then again, following them may be too costly. And quite often we are just overfitting to historical data. We say: I took historical data, and I proved that my model would have worked for the last three years, or the last three months. But that doesn't mean it will work for the next three months. If we put in some additional constraints, some additional thought about what this behavior should be, that may improve the situation.

So the PCA interpretation is that the risk matrix is a linear combination of principal components producing a shift in one hedging instrument at a time.
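Before moving off the historical approach, here is a minimal sketch of the PCA hedging pipeline just described: take a matrix of historical daily curve moves, run an SVD, keep the top few components, and solve for hedge weights on a handful of liquid instruments. Everything here, data included, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# D: hypothetical history of daily moves of 20 curve points (250 days),
# with decaying scale across maturities to give a decaying spectrum.
D = rng.normal(size=(250, 20)) @ np.diag(np.linspace(2.0, 0.1, 20))

# SVD of the centered data; rows of Vt are the principal components.
U, s, Vt = np.linalg.svd(D - D.mean(axis=0), full_matrices=False)
k = 5
P = Vt[:k]                      # top-k market modes, shape (5, 20)

# H: risk of 5 liquid hedge instruments across the 20 buckets (made up).
H = rng.normal(size=(20, 5))
r = rng.normal(size=20)         # portfolio risk per bucket

# Choose weights x so the hedged portfolio is flat along the top-k modes:
# P @ (r + H @ x) = 0. A square 5x5 system, so it can be hit exactly.
x = np.linalg.solve(P @ H, -P @ r)
print(P @ (r + H @ x))          # ~0 along the chosen market modes
```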
Now the question is: let's forget about historical data. Is there any other approach? We know the historical approach is noisy, and it's a first step if you want to build the model. But can we do something better? The answer is yes. We can say that we have our yield curve in terms of forward rates, and typically, when we build this curve, we observe that it is smooth. It's smooth not only because we use smooth splines, but also because, if there is no certainty about some event 10 years from now, there is no reason to expect a spike or some other non-smooth feature in the forward rate space. So what we can do is minimize those equations, where the Jacobian is a matrix translating shifts of the yield curve inputs into movements of the forward rates. Essentially, we try to penalize non-smoothness. The solution looks like this: we add a penalty term with a small regularization parameter. And this, as an example, is what we get with that model. You can view this matrix as follows: your drivers are the 1, 2, 5, 10, and 30-year points, and knowing the movements of your drivers, the matrix gives the response of your swap rates. Each driver responds one-to-one to itself, as you see here, and in between the responses are smooth functions.

So let's take, at this moment, a broader view of what the pricing model does. We have a pricing engine, essentially: something that, given all the model parameters, including curves, the volatility surface, everything, produces prices. In order for those parameters to be consistent with the benchmark prices, you need a calibration engine which matches the market observables to the ones repriced by the pricing engine. Once you make sure that the benchmark prices from your model are equal, or close enough, to the benchmark prices observed in the market, you have calibrated the model; then you can price your portfolio and get values and risk.

So let's look at one nice example of how that pricing and calibration process works. We'll look at the HJM model, which is used to price volatility products. We're not going to go into too much detail, but these are the equations for the evolution of the forward rates that we need for the Monte Carlo simulation. What we're saying here is that the change of the forward rates, because the forward rate is the quantity being simulated, has some drift, the dt term, and also a diffusion term that depends on the forward rate to the power beta. If beta is one, it's a log-normal model; if beta is zero, it's a normal model; in general, it's something in between. Then we have the volatility surface, which tells you what volatility number to use for each calendar time and forward time, and we have a correlation and factor structure, which we're not going to talk about here, and the Brownian motions. We're not going to go any more complex than this; we'll just look at nice two-dimensional surfaces and see what the problems of calibrating the volatility surface are.

Just to give you a diagram of what the different elements of the surface mean when we look at it: it's a triangular surface, with a calendar time axis and a forward time axis. Your simulation starts at the first vertical line.
You have the forward rates here as calibrated from the curve as of today; those are the square elements. You need to transition from the first line to the second step using the Monte Carlo simulation, and that's where, for every arrow here, you need a volatility number. Then, once you've done your Monte Carlo step to the second line, you need the numbers for the third one, and again you need the data on which volatility to use. So the surface that we'll be looking at on the next slides essentially represents the numbers necessary for these transitions: a volatility for every arrow.

To explain, there are different areas here. For example, if one step is one year, these would be the forward rates that will be observed in two years: the one-year-forward rate as seen now, but observed in two years. Those rates are essential to compute the forward swap rate, and if we do our Monte Carlo simulation, that's the essential information we need to compute the price of an option on the swap, which we're not going to discuss here. But as an example, it shows that different instruments observed in the market have quite overlapping areas of sensitivity.

This is a typical example of the volatility surface, where this axis is calendar time and this is forward time. It has spikes in certain regions, but in general it's smooth.

So why is this problem challenging? The triangular matrix has dimension 240 by 240: every element covers three months, and we need up to 60 years of data, which is 60 times 4 quarters, so 240. If you count just the triangle elements, that's roughly 29K unknowns. So if you try to calibrate everything at the same time and formally solve the problem, you'd need to build and store a matrix of about 28K by 28K, and we just don't have the memory for that. We also have a very small number of calibration instruments, in terms of swaptions or caps, which are the typical volatility products; just a relatively small number. So it's an underdetermined problem. Also, as we saw in the previous example, the areas of sensitivity of different instruments overlap. And it's an ill-posed inverse problem, which produces unstable solutions. And no matter what we do, the resulting surface should look nice: if it has spikes at some points in the future, then we either have an economic reason for them, or we claim that the result is not realistic.

So this is how we approach the problem. The first step: we represent our volatility surface as a vector. Even though the volatility surface is two-dimensional, we just assign a number to each of its elements and flatten the surface into a vector. We say that the new surface v will be some initial state plus a linear combination of basis functions, where the basis functions should correspond to some reasonable shapes. The nice feature is that the number of basis functions can be much smaller than the number of elements we need to calibrate. But let's first be very formal and use the same number of basis functions as we have input instruments: in case we have 50 input instruments, we select 50 basis functions. Then we will use the typical Newton-Raphson approach.
We compute the sensitivities of all input instruments to perturbations of the volatility surface and build the Jacobian matrix. Then, if we made reasonable assumptions about what the basis functions are, we can invert our square Jacobian; it's square because we selected the same number of basis functions as input instruments. This is actually quite a common approach, but very often it's the wrong approach: it produces unstable results, and we will see why.

So we converge to an exact solution, but now the volatility surface looks like this. It looks less like a volatility surface and more like the Manhattan skyline: you have the Hudson River here, and you have some buildings, right? It calibrates exactly, and you could go and price your portfolio, but the prices of instruments in the portfolio that are not calibration inputs would probably be meaningless. The reason we need the surface to be smooth is that for similar instruments, for similar products in your portfolio, you expect similar prices; if your volatility jumps around, that contradicts this assumption.

So how can we improve the situation? Instead of basis functions that are piecewise-constant shifts of different areas, we can use a smoothed version of those shifts. The result looks better, but it's still not good enough. And just to demonstrate that this is an ill-posed problem, a problem where small changes in your inputs result in outsized changes in your output: we keep all the instruments the same, and change just one, the five-year-into-ten-year swaption, by 1%, which is not a big number. That results in quite a large change of the volatility surface. And look at the shape: it's like one building with an antenna and another building next to it. It's a very unreasonable change of the volatility surface.

But we can use the ill-posedness to our advantage. At this point we say: it's not a requirement to calibrate exactly, because every instrument that is a calibration input actually has some tolerance, so there is no point in calibrating it exactly. And because we know that small variations in the inputs can become large variations in the outputs, we can put some constraints on the outputs. That may not cost us much in terms of calibration accuracy, but it produces a much more meaningful result. And to be absolutely sure that our output surface is smooth, we can use basis functions that are smooth to begin with. We'll use B-splines, but two-dimensional ones; we'll talk a little more about this. It's also no longer a requirement to have as many basis functions as instruments, because we can add other constraints: for example, smoothness or gradient smoothness of the surface. So let's pick some relatively high but reasonable number of basis functions, which could be more than the number of input instruments, and see what we can do.

First of all, let's build our basis functions for the surface. We've already decided to use B-splines, which are very convenient to work with. This is the one-dimensional case and the way we typically build them: we use the Cox-de Boor recursion formula. You start from linear functions, right?
Then you apply the formula and transition to the basis set of the second order, and at the next iteration you have the third order, and those are built. Now, if you take those basis functions in one dimension, and the same basis functions in the other dimension, and compute the tensor products, multiplying them pairwise, you get two-dimensional basis functions with shapes like this. Which means that no matter what we do, because every basis function makes sense, any linear combination of them will also be good enough.

The formulation of the problem is very simple. We're saying: the quotes produced by our model should be close enough to what's observed in the market, with some weights; we no longer require that they be calibrated exactly. We add a penalty function on the change of the volatility surface, and a penalty function on the volatility surface itself. Those are vectors, and L_1 and L_2 are matrices. Just to give you an example of what those matrices should be: if you're talking about smoothness, if you want to penalize the gradient of the vector, then the matrix consists of rows with a 1 followed by a -1. You're saying: I want to penalize the difference between this element and the next, and you do that for every element; the total penalty consists of all the penalties you have. So here we just formulate our problem: since we have the Jacobian, we want to price things close enough, and there are two penalty terms with different regularization parameters. Once we have this, the solution follows from straightforward linear algebra, as defined here. And this is the resulting calibration, which as you see is nice and smooth.

Now let's analyze the calibration inverse problem using our linear algebra tools, to understand where the trouble is coming from. A is a matrix that translates our model parameters into market observables, and there is some error, epsilon. You can see that your solution is a linear combination of components divided by the singular values. If the singular values are large, that's not a problem. The problem is that once you get very small singular values, small deviations in the observed components can result in large deviations in your reconstructed result. That's when you have a problem.

That's what this slide describes: "ill-posed" means that small noise may be significantly amplified by small singular values. And if you have a problem and you don't know how good it is, whether you can trust it or not, the very standard approach is to compute the condition number, which is the ratio of the maximum to the minimum singular value. If that number is high, it means there are some very insignificant modes in your input data that can cause substantial changes in your output. Knowing that is not comforting: if that mode doesn't actually show up in reality, then you're fine, but there is no guarantee, and if it does, your model basically blows up. And this slide displays exactly the noiseless situation: it looks as if, when you have no noise and your model is perfect, you are always able to calibrate exactly to the market observables. But that's never the case: there's always uncertainty in the numbers you're calibrating to, and your model is not always perfect.
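To see the singular-value mechanics just described numerically, here is a small sketch: solving A x = b directly lets tiny singular values amplify the noise, while a penalized solution of the kind used in the formulation above stays tame. All matrices are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic A with a rapidly decaying spectrum -> ill-conditioned.
U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
V, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s = np.logspace(0, -8, 50)                   # singular values from 1 to 1e-8
A = U @ np.diag(s) @ V.T
print(np.linalg.cond(A))                     # condition number ~1e8

x_true = rng.normal(size=50)
b = A @ x_true + 1e-6 * rng.normal(size=50)  # tiny noise on the observables

x_naive = np.linalg.solve(A, b)              # noise / small sigma -> blow-up
lam = 1e-5                                   # regularization parameter
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ b)

print(np.linalg.norm(x_naive - x_true))      # large reconstruction error
print(np.linalg.norm(x_reg - x_true))        # much smaller, slightly biased
```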
A very standard technique for this kind of problem is Tikhonov regularization: when you solve your ill-posed problem of minimizing the misfit, you add a penalty on the amplitude of your solution. Which essentially says: give me something reasonable, something that hasn't blown up. If you go through the linear algebra to see how the lambda parameter in the Tikhonov regularization affects the weights in the SVD representation of your solution, you see that small singular values are no longer a problem: we are no longer dividing by a small number, because we are limited by the regularization parameter. Typically, when you apply this regularization, your model no longer gives you a perfect match, but the result is much more meaningful and more stable.

Another approach to the problem is, and before we go to that slide: the Tikhonov regularization we used for the surface calibration was not quite the standard one. The standard Tikhonov regularization penalizes the amplitude of the solution itself, but it doesn't have to be the amplitude; it can be some linear combination of your solution. In calibrating the volatility surface, we didn't apply the penalty to the reconstructed volatility itself. We said it's not the amplitude of the solution that we dislike, it's the non-smoothness, so let's penalize the derivatives of the surface in the different directions.

Another approach would be to use a truncated SVD, where we say: we did our singular value decomposition, we look at the spectrum of singular values, we find that some look nice and large and some are very small, and we just skip the small ones. It's very similar to the PCA approach for risk management that we saw before, where we selected five principal components and ignored the rest. As a result, the model is much more robust. By doing this, we essentially truncate the null space of the model, the space associated with very small singular values, if you're familiar with that notion.

So what regularized models give you is improved stability, which is absolutely essential for ill-conditioned problems, and a more realistic and meaningful result, at the expense of some of the beauty of fitting the data exactly, which is quite often acceptable. It might cause a biased solution, meaning that your solution may not be exact; it is biased towards some preferred kind of result. For example, if you apply a smoothness constraint, the solution will come out a little smoother than reality actually is. But that's acceptable, and the bias can be minimized by a reasonable selection of what quantity you actually penalize. During the calibration of our volatility surface, we could have said: let's just open the textbook and see what regularization there is; we find Tikhonov, and we start penalizing the amplitude. Then the result won't be good. We need to think about what exactly we don't like. For example, is an absolutely flat volatility surface fine for us? We find that yes, it's actually fine. But then penalizing the amplitude doesn't make sense; we need to penalize the deviation from that perfectly flat solution, or, to be more precise, the derivatives in different directions.

So this concludes my presentation today.
And there are some useful links if you want more information. Thank you. Any questions?

AUDIENCE: Yes. Regarding the techniques you use for fitting functions: you are using spline techniques. What other techniques are there? Is the spline the best technique you use?

IVAN MASYUKOV: Well, spline, yes. A spline, or interpolation, is the same thing here; we're always talking about interpolation. You have some limited number of inputs, and you want to draw the curve in between. So there are just two words for this, interpolation or spline, which I consider to be the same thing in general.

AUDIENCE: I have a question about the interpolation graph that you had, where the forward curve was very smooth. When you, as an expert in this, look at that graph and see some odd shapes at certain parts of the curve, how do you interpret that? Do you assess that it's a feature of the current market liquidity conditions, or possibly just a mathematical--

IVAN MASYUKOV: Well, first of all, that grid is drawn with every element being three months. But what's traded on the market? The typical maturities are three months, maybe half a year, one year, two years, five years, 10 years. So we should really rescale it to a logarithmic scale; you know what I'm talking about? And then, if you do that, this peak doesn't look like a peak anymore. The reason it looks like a feature to you is that it's quite a bit sharper than this one, but that's because you have many more detailed instruments there compared to here. And that's also the reason we selected our basis functions like this: with more node density at the front, just because there are more instruments at the front than at the end. We want our spline to be more detailed at the beginning, and just nice and smooth at the end. So those basis functions, which correspond very well to the actual instruments we have in the portfolio, can produce, well, first of all, you see this spike here too, right? You can compare this one to this one: they have similar magnitude, but we don't have enough instruments in this area to support any sharper features. So I don't see any problems with this graph. But traders look at this every day. They calibrate, they see a feature, and they immediately start thinking. If you see something that you know is not typical, and that sense of typical versus not typical comes with years of experience, then you try to arbitrage it. Because if there is a spike in the surface, it's very likely that it will disappear soon.

AUDIENCE: So this is the model for volatility that's used in modeling for swaptions, right?

IVAN MASYUKOV: Yeah.

AUDIENCE: So if you were to actually try to make a trade out of some discrepancy that you see, can you describe how you'd do that with swaptions? Do you basically use regular options?

IVAN MASYUKOV: I don't have the screen here, but essentially, what do traders do? This calibrator is one we actually use, and it's a real-time calibrator. The reason it can be real-time is that there is just simple linear algebra inside; most of the work, like A-transpose-A, can be pre-calculated. So traders can see this volatility surface moving while we are connected to actual market data.
And once they see there is an anomaly in the market, something trading at a level they believe is wrong, they take advantage of it. They make a trade; that would be a swaption, for example, or maybe some trade more exotic than a swaption, but in any case something with a dependence on the particular instruments that expresses the position that this anomaly will correct, in a day or so. That's exactly how the desk makes money. We call it relative value analysis. If you have a tool like that, you have a model: you have your input instruments, and you have some regularizing terms; it could be smoothness, it could be a PCA, it could be a combination with PCA. That additional information allows you to find anomalies in the market, and once you find those anomalies, you can take advantage of them, provided that your model is robust enough. And if you can say, "I calibrate well with just a smoothness assumption on the forward rates," there is nothing more fundamental than that. If your model is based on a fundamental principle, you can expect it to be more stable in the future than a PCA model. Because with PCA you just say: I took a time interval, I did my regression analysis, whatever; but that doesn't mean the market will continue to behave the same way in the future.

AUDIENCE: I have a question. [INAUDIBLE] market. And then we would try to price the bond at that premium. You mentioned that the bond is actually the most liquid instrument in the market. So why not do it the other way around: invert the bonds to derive the discount factors from a bond [INAUDIBLE]?

IVAN MASYUKOV: Well, that's a very good question. We could have done that, and some firms do. The problem is that swaps roll every day: the swap today starts today, the swap tomorrow starts tomorrow, and so on. But bonds do not. There is the on-the-run bond, which is the most liquid one; once a new ten-year bond is issued, the old one becomes off-the-run. It's still traded, but everyone switches to the on-the-run. So you don't have a nice continuous spectrum of bonds; you have concentrations around the on-the-runs and off-the-runs. And if you want to draw a curve through all of them, you typically cannot do a perfect fit; you need to do least squares. So it's just more convenient to do it with swaps. But once we build the swap curve, a swap trader typically, I should say always, uses bonds for hedging, just because bonds are much more liquid. So we project bonds onto the swap curve, rather than swaps onto the bond curve, which is hard to build.

AUDIENCE: So in this case, when they switch from on-the-run to off-the-run, [INAUDIBLE]?

IVAN MASYUKOV: Yes. Your curve won't be stable, just because of those roll effects, we call them "roll effects," which mean that something substantial changes in the market. There may be such big demand for the new bond that it makes your curve not look nice. There are also traders who just trade bonds, and those typically don't have curves; they rely on some PCA models, or other things.

PROFESSOR: Thanks again.

IVAN MASYUKOV: Thank you.
MIT 18.S096 Topics in Mathematics with Applications in Finance
Lecture 3: Probability Theory
PROFESSOR: OK, so good afternoon. Today, we will review probability theory. I will mostly focus on giving you some probability distributions that will be of interest to us throughout the course, and I will talk about the moment-generating function a little bit. Afterwards, I will talk about the law of large numbers and the central limit theorem. Who has heard of all of these topics before? OK, that's good. Then I'll try to focus a little more on the advanced material, and a big part of this will be review for you.

So first of all, just to agree on terminology, let's review some definitions. We will talk about discrete and continuous random variables. Just to set up notation, I will write X for a discrete random variable and Y for a continuous one for now. A discrete random variable is given by its probability mass function, which I will denote f_X, and a continuous random variable is given by its probability density function, which I'll denote f_Y. So pmf and pdf. Here, I use a subscript because I want to distinguish f_X from f_Y, but when it's clear which random variable we're talking about, I'll just write f.

So what are these? A probability mass function is a function from the sample space to the non-negative reals such that the sum over all points in the domain equals 1. A probability density function is very similar, a function from the sample space to the non-negative reals, but now the integral over the domain equals 1. It's pretty much safe to consider our sample space to be the real numbers for continuous random variables. Later in the course, you will see some examples where it's not the real numbers, but for now, just think of it as the real numbers.

For example: if X takes the value 1 with probability 1/3, minus 1 with probability 1/3, and 0 with probability 1/3, then its probability mass function is f_X(1) = f_X(-1) = f_X(0) = 1/3, just like that. An example of a continuous random variable: if f_Y(y) = 1 for all y in [0,1], then this is the pdf of the uniform random variable on [0,1]. So the first random variable picks one of three numbers with equal probability; the second picks a real number between 0 and 1 with equal probability. These are just some basics; you should be familiar with this, but I wrote it down so that we agree on notation. OK. Both of the boards don't slide; that's good.

A few more things. Probability first: the probability of an event A can be computed as either the sum over all points in A of the probability mass function, or the integral of the density over the set A, depending on which kind of variable you're using. And the expectation, or mean: the expectation of X is the sum over all x of x times f_X(x), and the expectation of Y is the integral over the sample space of y times f_Y(y) dy.

OK. And one more basic concept I'd like to review: two random variables X_1, X_2 are independent if the probability that X_1 is in A and X_2 is in B equals the product of the two probabilities, for all events A and B. OK, all agreed?
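As a quick empirical companion to that definition, here is a minimal sketch checking that the joint frequency matches the product of marginals for two independent uniform random variables; the events A and B are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)

A = x1 < 0.3          # event {X1 in [0, 0.3)}
B = x2 > 0.6          # event {X2 in (0.6, 1]}

print(np.mean(A & B))              # joint frequency, ~0.12
print(np.mean(A) * np.mean(B))     # product of marginals, also ~0.12
```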
For independence of several random variables, there is more than one concept; the two most popular are mutually independent and pairwise independent random variables. Can somebody tell me the difference between these two for several variables? Yes?

AUDIENCE: So mutually independent means all the random variables are independent together: X_1 is independent of all the others. But pairwise means X_1 and X_2 are independent, and so on for each pair, but X_1, X_2, and X_3 together may not be independent.

PROFESSOR: OK, yeah, that's good. So let's see, for the example of three random variables: it might be the case that each pair is independent. X_1 is independent of X_2, X_1 is independent of X_3, and X_2 is independent of X_3. But all together, they are not independent. What that means is that the product statement, P(X_1 in A_1, X_2 in A_2, X_3 in A_3) = P(X_1 in A_1) P(X_2 in A_2) P(X_3 in A_3), is not true: there are some A_1, A_2, A_3 for which it does not hold. But that's just a technical detail. We will mostly consider mutually independent random variables, so when we say that several random variables are independent, it just means that whatever collection you take, they're all independent.

OK. So, a little more fun stuff in this overview. We defined random variables, and one of the most universal random variables, or distributions, is the normal distribution. It's a continuous random variable. A continuous random variable is said to have the normal distribution N(mu, sigma) if its probability density function is f(x) = (1 / (sigma sqrt(2 pi))) e^{-(x - mu)^2 / (2 sigma^2)} for all real x, where mu is the mean. It's one of the most universal, and the most important, of the distributions. You've surely seen this bell curve before: for N(0,1), it looks like this, centered at the origin and symmetric about it.

So now let's think about our purpose. We want to model a financial product, or a stock, the price of the stock, using some random variable. The first thing you might try is to use the normal distribution. Saying the price itself is normal doesn't make sense, but we can say the price at day n minus the price at day n minus 1 has a normal distribution. Is this a sensible model? Not really; it's not a good choice. You can model it like this, but it's not a good choice. There may be several reasons, but one reason is that it doesn't take into account the order of magnitude of the price itself. Say you have a stock price that moves something like this, and it was $10 here and $50 here. Regardless of where the price sits, the model says that the increment, the absolute change, is identically distributed at this point and at that point. But if you observe how prices actually behave, the absolute change is usually not what's normally distributed. What's normally distributed is the percentage by which the price changes daily. So this is not a sensible model, not a good model. But still, we can use the normal distribution to come up with a pretty good model: instead, we want the relative difference, the percentage change, to be normally distributed.
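Here is a quick simulation contrasting the two modeling choices: additive normal increments, which can drift below zero, versus normally distributed percentage changes, which keep the price positive. The parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, p0 = 2500, 10.0
z = rng.normal(0.0, 0.02, size=n_days)     # daily shocks at a 2% scale

additive = p0 + np.cumsum(z * p0)          # P_n = P_{n-1} + fixed-size noise
multiplicative = p0 * np.cumprod(1 + z)    # P_n = P_{n-1} * (1 + percent move)

print(additive.min())        # can dip below zero over long horizons
print(multiplicative.min())  # stays strictly positive
```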
And if that is the case, what will be the distribution of the random variable? In this case, what will be the distribution of the price? One thing I should mention: in the additive model, if each increment is normally distributed, then the price at day n will still be a normal random variable. So if there's no tendency, if the average daily increment is 0, then no matter how far you go, your random variable will be normally distributed. But in the percentage model, that will not be the case. So we want to see what the distribution of P_n will be here.

OK, to do that, let me formally write down what I want to say. I want to define a log-normal random variable: one whose logarithm is normally distributed. To derive its probability distribution from the normal distribution, we can use the change of variable formula, which says the following: suppose X and Y are random variables such that P(X <= x) = P(Y <= h(x)) for all x. Then the densities are related by f_X(x) = f_Y(h(x)) h'(x). So let's fit our story into this: we want a random variable whose log is normally distributed, so you can put h(x) = log x here. If Y is normally distributed, then X will have the distribution we're interested in. Using this formula, we can find the probability density function of the log-normal distribution from the probability density of the normal. So let's do that.

AUDIENCE: [INAUDIBLE], right?

PROFESSOR: Yes. So the additive model is not a good choice. Locally, it might be a good choice, but taken over a long time, it won't be, because, for example, it will also take negative values. If you just take that model, what's going to happen over a long period of time is that the path will hit the square root of n and negative square root of n lines infinitely often, and then it can go up to infinity, or it can go down to infinity eventually. So it will take negative values and positive values. That's one reason, but there are several reasons why it's not a good choice. If you look at a very small scale, it might be OK, because the base price doesn't change that much, so whether you model the ratio or the absolute change doesn't matter much. But if you want to work at a somewhat larger scale, then it's not a very good choice. Other questions? Do you want me to add some explanation? OK.

So let me get this right. I want X to be the log-normal random variable, and I want Y to be the normal random variable. Then the probability that X is at most x equals the probability that Y is at most log x. That's the definition of the log-normal distribution. Then, by the change of variable formula, the probability density function of X is the probability density function of Y evaluated at log x, times the derivative of log x, which is 1/x. So it becomes f_X(x) = (1 / (x sigma sqrt(2 pi))) e^{-(log x - mu)^2 / (2 sigma^2)}. So the log-normal distribution can also be defined as the distribution with this probability density function; you can use either definition. Let me just make sure I didn't mess up in the middle. Yes. And this only holds for x greater than 0. Yes?

AUDIENCE: [INAUDIBLE]?

PROFESSOR: Yeah, all logs are natural logs; it should be ln. Thank you. OK. So, question: what's the mean of this distribution here? Yeah?

AUDIENCE: 1?

PROFESSOR: Not 1. It might be mu. Is it mu? Oh, sorry.
OK. So, question: what's the mean of this distribution? Yeah? AUDIENCE: 1? PROFESSOR: Not 1. It might be mu -- is it mu? Oh, sorry. It might be e to the mu: log X, the underlying normal, has mean mu, so log x = mu is the center, and if that were the whole story, x = e to the mu would be the mean. Is that the case? Yes? AUDIENCE: Can you get the mu minus [INAUDIBLE]? PROFESSOR: Probably right. I don't remember exactly what's there -- there is a correcting factor -- but I think you're right. So one very important thing to remember: log-normal distributions are referred to in terms of the parameters mu and sigma, because those are the mu and sigma coming from the underlying normal distribution. But they are not the mean and variance anymore, because exponentiating skews the distribution. It's no longer centered at mu: log X is centered at mu, but when you take the exponential, the distribution becomes skewed, and if you take the average, you'll see that the mean is no longer e to the mu. So mu doesn't give the mean, and it certainly doesn't follow that the variance is something like e to the sigma -- that would be totally nonsense. Just remember: these are just parameters, nothing more; they're no longer the mean and variance. In your homework, one exercise will ask you to compute the mean and variance of this random variable. But really, try to keep it in your mind that mu and sigma are no longer the mean and variance -- that's only the case for normal random variables. The reason we still use mu and sigma is because of this derivation: it's easy to describe the distribution in terms of them.
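A small Monte Carlo illustration of this point -- the parameters are arbitrary, and the exact closed form for the mean is left to the homework; note that the sample median, not the mean, sits at e to the mu, since log X is centered at mu:

```python
import numpy as np

# Numerical check that e^mu is *not* the mean of the log-normal.
mu, sigma = 0.0, 1.0
rng = np.random.default_rng(2)
x = np.exp(rng.normal(mu, sigma, size=2_000_000))

print(x.mean())          # ~1.65, clearly not e^mu = 1
print(np.exp(mu))        # 1.0
print(np.median(x))      # ~1.0 -- the median is e^mu, the mean is larger
```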
OK. So the normal distribution and the log-normal distribution will probably be the distributions you see the most throughout the course, but there are some others you'll also see -- for example, the Poisson distribution and exponential distributions. Those will appear in some exercise questions; I will not talk about them in detail. And all of these -- normal, log-normal, Poisson, exponential, and a lot more -- can be grouped into a family of distributions called the exponential family. A distribution belongs to the exponential family if there exists a vector theta parametrizing the distribution such that the probability density function for the parameter theta can be written as f(x | theta) = h(x) c(theta) exp( sum from i = 1 to k of w_i(theta) t_i(x) ). Here, when I write a function of x only, it should depend only on x, not on theta, and a function of theta should depend only on theta, not on x: so h(x) and the t_i(x) depend only on x, and c(theta) and the w_i(theta) depend only on theta. That's an abstract definition, and from the definition alone it's not clear why it's so useful. But you're going to talk about some distributions from the exponential family later, right? Yeah -- so you will see something about this. One good thing is that all distributions in the exponential family exhibit some nice statistical behavior, which makes them good to work with. That's too abstract, though; let's see how the log-normal distribution actually falls into the exponential family. AUDIENCE: Let me just make a comment. PROFESSOR: Yeah, sure. AUDIENCE: For the notion of independent random variables, you went over how the joint probability density function of a collection of mutually independent random variables is the product of the probability densities of the individual variables. And with this exponential family, if you have random variables from the same exponential family, products of this density function factor out into a very simple form. It doesn't get more complicated as you look at the joint density of many variables; in fact, it simplifies within the same exponential family. So that's where this becomes very useful. PROFESSOR: So it's designed to factor out well when multiplied. OK -- sorry about that. Back to the log-normal distribution. Before picking out h(x), let's rewrite the density in a different way: f(x) = 1/(x sigma sqrt(2 pi)) e^{-(log x - mu)^2 / (2 sigma^2)} can be rewritten as (1/x) * (1/(sigma sqrt(2 pi))) * exp( -(log x)^2 / (2 sigma^2) + mu log x / sigma^2 - mu^2 / (2 sigma^2) ). Now parametrize the family by theta = (mu, sigma), and set h(x) = 1/x and c(theta) = (1/(sigma sqrt(2 pi))) e^{-mu^2 / (2 sigma^2)} -- that's this leading term together with the last term in the exponent, since neither depends on x. Then you have to figure out what the w's and t's are: you can let t_1(x) = (log x)^2 with w_1(theta) = -1/(2 sigma^2), and similarly t_2(x) = log x with w_2(theta) = mu / sigma^2. It's just some technicality, but at least you can see that it really fits in. OK. That's all I want to say about distributions. Now let's talk about more interesting stuff -- in my opinion; I like this stuff better. There are two main things we're interested in when we have a random variable, at least for our purposes. First, we want to study its statistics, and those will be represented by the k-th moments of the random variable, where the k-th moment is defined as the expectation of X to the k. A good way to study all the moments together in one function is the moment-generating function: it encodes all the k-th moments of a random variable, so it contains all its statistical information. That's why the moment-generating function will be interesting to us -- when you want to study a random variable, you don't have to consider each moment separately; it gives a unified way, a very good feel for your function. That will be our first topic. Our second topic will be long-term, or large-scale, behavior. For example, assume you have one random variable with a normal distribution. With just a single random variable, you really have no control -- the outcome can be anything according to that distribution. But if you have many independent random variables with the exact same distribution -- say 100 million of them -- and you plot how many fall at each point, you know the plot has to look very close to the bell curve: denser in the middle, sparser out in the tails. You have no individual control over each random variable, but at large scale you know, at least with very high probability, that it has to look like this curve. Those are the kinds of things we want to study.
When we look at this long-term or large-scale behavior, what can we say? What kind of events are guaranteed to happen with probability, let's say, 99.9%? Some interesting things happen. As you might already know, the two typical theorems of this type in this topic will be the law of large numbers and the central limit theorem. So let's start with our first topic -- the moment-generating function. The moment-generating function of a random variable X, which I write as M sub X, is defined as M_X(t) = E[e^{tX}], where t is a parameter that can be any real number. You have to be careful: this expectation doesn't always converge. So, remark: the moment-generating function does not necessarily exist. For example, one of the distributions you already saw does not have a moment-generating function -- the log-normal distribution has none. That's one thing you have to be careful about, and it's not just some theoretical statement: it actually happens for random variables that you'll encounter in your life. So be careful. And some very interesting facts arise from this, which I will explain later. Before going into that -- first of all, why is it called the moment-generating function? Because if you take the k-th derivative of this function, it gives the k-th moment of your random variable, for all non-negative integers k. That's where the name comes from. And that gives a different way of writing the moment-generating function: because of this, we may write M_X(t) = sum from k = 0 to infinity of (t^k / k!) times the k-th moment. That's like the Taylor expansion -- because you know all the derivatives, you know what the function must be. Of course, only if it exists; this series might not converge. Now, if moment-generating functions exist, they pretty much classify random variables. Theorem: if two random variables X and Y have the same moment-generating function, then X and Y have the same distribution. I will not prove this theorem, but it says that the moment-generating function, if it exists, really encodes all the information about your random variable -- you're not losing anything. However, be very careful when applying this theorem. Remark: it does not imply that two random variables with identical k-th moments for all k have the same distribution. Do you see the subtlety? This looks like it contradicts the theorem: the moment-generating function is defined in terms of the moments, so if two random variables have the same moments, they have the same moment-generating function; and if they have the same moment-generating function, they have the same distribution. But there is a hole in this argument: even if they have the same moments, they don't necessarily have the same moment-generating function -- they might both fail to have moment-generating functions at all. That's the glitch. Be careful. So just remember: even if two random variables have the same moments, they don't necessarily have the same distribution. And one reason is that the moment-generating functions might not exist; if you look on Wikipedia, you'll find an example of two random variables where exactly this happens. So that's one thing we will use later.
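As an illustration of the derivative property (the derivative is taken at t = 0, as corrected just below), here is a sketch using sympy and the standard normal, whose MGF e^{t^2/2} is well known; the printed values 0, 1, 0, 3 are the first four standard normal moments:

```python
import sympy as sp

# Check that the k-th derivative of the MGF at t = 0 gives the k-th moment,
# for the standard normal with MGF exp(t^2 / 2).
t = sp.symbols('t')
mgf = sp.exp(t**2 / 2)

for k in range(1, 5):
    moment = sp.diff(mgf, t, k).subs(t, 0)
    print(k, moment)   # 1 -> 0, 2 -> 1, 3 -> 0, 4 -> 3
```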
Another thing we will use later is a statement very similar to that, but about a sequence of random variables. Theorem: let X_1, X_2, ..., X_n, ... be a sequence of random variables whose moment-generating functions all exist, and suppose that for every t, M_{X_n}(t) tends, as n goes to infinity, to M_X(t) for some random variable X. So the situation is: you have a sequence of random variables, their moment-generating functions exist, and at each point t these converge to the value of the moment-generating function of some other random variable X. What should happen? In light of the previous theorem, the distribution of the sequence should get closer and closer to the distribution of this random variable X. And to make that formal, what we can conclude is: for all x, the probability that X_n is less than or equal to x tends to the probability that X is less than or equal to x. In this sense, the distributions of these random variables converge to the distribution of that random variable. The precise mode of convergence is a technical issue; you can just think of it as the random variables converging to that random variable. If you take a graduate probability course, you'll see that there are several possible ways to define convergence, but that's just technicality -- the spirit here is really that the sequence converges if its moment-generating functions converge. So as you can see from these two theorems, the moment-generating function, if it exists, is a really powerful tool that allows you to control the distribution. You'll see some applications later, in the central limit theorem. Any questions? AUDIENCE: [INAUDIBLE]? PROFESSOR: This one? Why? AUDIENCE: Because the left side is a function of t, and the right-hand side has no t in it. PROFESSOR: Ah, thank you -- the derivative is evaluated at zero: the k-th derivative of the moment-generating function at t = 0 gives the k-th moment. Other questions? Other corrections? AUDIENCE: When you say the moment-generating function doesn't exist, do you mean that it isn't analytic or that it doesn't converge? PROFESSOR: It might not converge. For the log-normal distribution, the defining expectation does not converge for any non-zero t. AUDIENCE: [INAUDIBLE]? PROFESSOR: Here? Yes, pointwise convergence -- and because the hypothesis is only pointwise, the conclusion is also rather weak; convergence in distribution is almost the weakest kind of convergence. OK. The law of large numbers. Now we're talking about large-scale behavior. Let X_1 up to X_n be independent random variables with identical distribution. We don't really know what the distribution is, but we know they're all the same; in short, I'll refer to this condition as i.i.d. -- independent, identically distributed -- random variables. Let the mean be mu and the variance be sigma squared, and define X as the average of the n random variables, X = (1/n)(X_1 + ... + X_n). Then, for all positive epsilon, the probability that |X - mu| is at least epsilon tends to 0 as n goes to infinity. So whenever you have i.i.d. random variables and take their average over a large enough number of samples, the average will be very close to the mean -- which makes sense. So what's an example of this?
Before proving it, an example of this theorem in practice can be seen in the casino. If you're playing blackjack in a casino, playing against the house, you have a very small disadvantage. Playing the optimal strategy -- does anybody know the probability? It's about 48%, 49% -- you have about a 48% chance of winning each round. That means if you bet $1 at the beginning of each round, the expected amount you'll win is $0.48, and the expected amount the casino will win is $0.52. But the game is designed so that the variance is so big that this difference in means is hidden. From the player's point of view, you only see a very small sample, so it looks like the mean doesn't matter -- the variance takes over at short time scales. But from the casino's point of view, they're taking an enormous value of n. And that means that as long as they have the slightest advantage, they'll be winning money -- a huge amount of money. Most games played in casinos are designed like this: the mean looks really close to 50-50, but it's hidden, because they designed the variance to be big. From the casino's point of view, they have enough players playing the game that the law of large numbers just makes them money. The moral is: don't play blackjack -- play poker. Why doesn't the law of large numbers apply, at least in this sense, to poker? Can anybody explain? It's because in poker you're playing against other players, not against the casino. If you believe there is skill in poker, and your skill is better than another player's by, say, 5%, then you have an edge over that player and you can win money. The casino still has to make money, of course, so what they do instead is take a rake: for each round the players play, they pay some fee to the casino, and that's how the casino makes money at the poker table -- by accumulating those fees, not by taking chances. But from the player's point of view, if you're better than the other players, and the edge you have over them is larger than the fee the casino charges you, then you can apply the law of large numbers to yourself and win. And this isn't really just about poker. If you're a hedge fund, or you're doing high-frequency trading, that's the moral behind it -- that's the belief you should have. You have to believe that you have an edge. Even if you have a tiny edge, if you can get a large enough number of trials -- if you can trade enough times using some strategy that you believe wins over time -- then the law of large numbers will take it from there and bring you profit. Of course, the problem is that when the variance is big, your belief starts to falter. At least, that was the case for me when I was playing poker: I believed I had an edge, but in a really big swing, it looks like your expectation is negative. And that's when you have to believe in yourself -- that's when your faith in mathematics is challenged. It really happened; I hope it doesn't happen to you. Anyway, back to the law of large numbers. How do you prove it? The proof is quite easy.
First of all, one observation: the expectation of X is the expectation of (1/n) times the sum of the X_i's, and by linearity that's (1/n) times n mu, which is mu. OK, that's good. Then what's the variance of X? It's the expectation of (X - mu) squared, which is the expectation of ((1/n) sum from i = 1 to n of (X_i - mu)) squared. The 1/n is inside the square, so I can take it out as 1 over n squared. And when you expand the square, the cross terms vanish because the X_i are independent, so you're left summing n terms of sigma squared. So the variance of X equals sigma squared over n. That means averaging n terms does not affect your mean, but it divides your variance by n: the larger you take n, the smaller your variance gets. And using that, we can prove the statement. There's only one thing you have to notice: epsilon squared times the probability that |X - mu| is at least epsilon is less than or equal to the variance of X. The reason this inequality holds is that the variance of X is defined as the expectation of (X - mu) squared, and on the event where |X - mu| is at least epsilon, the term (X - mu) squared is at least epsilon squared -- so the variance has to be at least epsilon squared times the probability of that event. And we know the variance is sigma squared over n. So the probability that |X - mu| is at least epsilon is at most sigma squared over (n epsilon squared), and if you take n to infinity, that goes to zero. So the probability that you deviate from the mean by more than epsilon goes to 0. You can actually read a little bit more out of this proof -- it also tells you something about the speed of convergence. Say you have a random variable X with mean 50, and your epsilon is 0.1, so you want to know the probability that you deviate from your mean by more than 0.1. Say you want to be 99% sure that |X - 50| is less than 0.1. In that case, you want the bound sigma squared over (n epsilon squared) to be 0.01. So plug in your variance and your epsilon, and that gives you a bound on n: if you have more than that number of trials, you can be 99% sure that you don't deviate from the mean by more than epsilon. So the proof does give an estimate, but I should mention that it's a very bad estimate -- there are much more powerful estimates that can be made here. The order of magnitude from this bound -- I didn't really calculate it, but it looks like it's close to millions -- while in practice, with much more powerful estimation tools, it should be only hundreds, or at most thousands. The tool you would use there is moment-generating functions, or something similar, but I will not go into it. Any questions? OK. For those who already saw the law of large numbers before: the name suggests there's also something called the strong law of large numbers. In that theorem, the conclusion is stronger -- the convergence is of a stronger type than this one. Also, the condition I gave here is a very strong condition: the same conclusion is true even if you weaken some of the conditions. For example, the variance does not have to exist; it can be replaced by some other condition, and so on. But here I just wanted a simple form that's easy to prove, so you at least get the spirit of what's happening.
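Here is a sketch of that sample-size calculation in Python, with an assumed variance of 1; the Chebyshev-style bound indeed lands near a million trials, while a direct simulation shows the true probability is already negligible at much smaller n, confirming how loose the bound is:

```python
import numpy as np

# Bound from the proof: P(|Xbar - mu| >= eps) <= sigma^2 / (n * eps^2),
# so to make the bound 0.01 we need n >= sigma^2 / (eps^2 * 0.01).
sigma2, eps, delta = 1.0, 0.1, 0.01        # assumed variance and tolerances
print(int(sigma2 / (eps**2 * delta)))      # 1,000,000 -- "close to millions"

# Direct simulation at a much smaller n.
rng = np.random.default_rng(3)
n = 10_000
means = rng.normal(50.0, np.sqrt(sigma2), size=(500, n)).mean(axis=1)
print(np.mean(np.abs(means - 50.0) >= eps))  # already ~0 -- the bound is loose
```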
Now let's move on to the next topic -- the central limit theorem. The weak law of large numbers says that if you have i.i.d. random variables, (1/n) times the sum of the X_i's converges to the mean mu, in some weak sense. And the reason that happened was because this average had mean mu and variance sigma squared over n -- we exploited the fact that the variance vanishes. So the question is: what happens if you replace 1/n by 1 over square root n? What happens to the random variable (1/sqrt(n)) times the sum of the X_i's? The reason I'm making this choice of 1 over square root n is that with this normalization, when the mean is 0, the sum has mean 0 and variance sigma squared -- the same as each X_i. Then what should it look like? If this random variable has the same mean and the same variance as your original random variable, should its distribution look like the distribution of the X_i? Yes? AUDIENCE: How did you get mean equals [INAUDIBLE]? PROFESSOR: I didn't get it -- I'm assuming it; this is the case when the mean is 0. Thank you very much. OK, so for this special case: will it look like X_i, or will it not? And if it doesn't, can we say anything interesting about its distribution? The central limit theorem answers this question. When I first saw it, I thought it was really interesting, because the normal distribution comes up here -- and that's probably one of the reasons the normal distribution is so universal. When you take many independent events and average them in this sense, their distribution converges to a normal distribution. So, theorem: let X_1, X_2, ..., X_n be i.i.d. random variables with mean mu and variance sigma squared, and let Y_n = sqrt(n) * ((1/n) sum of the X_i - mu) -- that is, sqrt(n) times (X bar minus mu). Then the distribution of Y_n converges to that of the normal distribution with mean 0 and variance sigma squared. What this means -- I'll write it down again -- is that for all x, the probability that Y_n is less than or equal to x converges to the probability that an N(0, sigma^2) random variable is less than or equal to x. What's really interesting here is that no matter what distribution you had in the beginning, if you average it out in this sense, you converge to the normal distribution. Any questions about this statement, or any corrections? Any mistakes that I made? OK. Here's the proof. I will prove it when the moment-generating function exists, so: proof, assuming M_{X_i} exists. Try to recall that earlier theorem: if the moment-generating functions of the Y_n's converge to the moment-generating function of the normal, then the distributions converge. That's the statement we're going to use. So our goal is to prove that the moment-generating function of Y_n converges, for every t, to the moment-generating function of the normal -- pointwise convergence. And that target is well known; I'll just write it down: it's e to the (t squared sigma squared over 2). That can just be computed. So we want to show that the moment-generating function of Y_n converges to that. The moment-generating function of Y_n equals the expectation of e^{t Y_n}, which is the expectation of e to the (t * (1/sqrt(n)) * sum of (X_i - mu)). And then, because the X_i's are independent, this sum will split into a product -- let me write it out.
Inside the expectation -- we haven't used independence yet -- the exponential of the sum becomes the product of e to the (t (1/sqrt(n)) (X_i - mu)). Because the variables are independent, the expectation of the product equals the product of the expectations: the product from i = 1 to n of E[e^{(t/sqrt(n))(X_i - mu)}]. And because they're identically distributed, you just take the n-th power of one of them: (E[e^{(t/sqrt(n))(X_1 - mu)}])^n. Now we do some estimation, using the Taylor expansion of the exponential. What we get is the expectation of 1 + (t/sqrt(n))(X_1 - mu) + (1/2!)(t/sqrt(n))^2 (X_1 - mu)^2 + (1/3!)(t/sqrt(n))^3 (X_1 - mu)^3 + ..., all raised to the n-th power. By linearity of expectation, the 1 comes out. The second term is 0, because X_1 has mean mu, so that disappears. The third term gives (1/2)(t^2/n) times the expectation of (X_1 - mu)^2, which is sigma squared. And the terms after that: since we're only proving that this converges for fixed t -- pointwise convergence -- you may consider t as a fixed number, and then as n goes to infinity all the remaining terms are of smaller order of magnitude than 1/n. That only works because t is fixed; if we wanted to say something uniform in t, it would no longer be true. So inside the power we have 1 + (t^2 sigma^2)/(2n) + o(1/n). Going back to exponential form, this is pretty much e to the ((t^2 sigma^2)/(2n) + o(1/n)), raised to the n-th power. The n multiplies in to cancel, and we see that it's e to the (t^2 sigma^2 / 2 + o(1)). Taking n to infinity, that little-o term disappears, and we've proved that the moment-generating function converges to that of the normal. Then by the theorem I stated before, the distributions converge. Any questions? OK, I'll make one final remark. Suppose there is a random variable X whose mean is unknown, and our goal is to estimate the mean. One way to do that is by taking many independent trials of this random variable: take independent trials X_1, X_2, ..., X_n, and use (1/n)(X_1 + ... + X_n) as our estimator. The law of large numbers says this will be very close to the mean: if you take n large enough, you will more than likely get a value very close to the mean. And then the central limit theorem tells you how the distribution of this estimator looks around the mean: we don't know the real value, but we know the distribution of the value we obtain is shaped like a normal curve around it. And because the normal distribution has very small tails, we get really close really fast. This is known as the maximum likelihood estimator -- is it? OK, yeah, for this setting. But for some distributions, it's better to take some other estimator, which is quite interesting. At least my intuition was that taking the average looks like a good choice in every single case, but it turns out that's not so: for some distributions, there's a better choice than this. Peter will talk about that later; if you're interested, come back for it. And that's it for today. Any questions?
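A simulation sketch of the theorem -- the distribution and the sample sizes are arbitrary choices: starting from a decidedly non-normal distribution, the normalized average looks standard normal.

```python
import numpy as np

# CLT check: exponential with rate 1 has mu = 1 and sigma^2 = 1, so
# Y_n = sqrt(n) * (sample mean - 1) should be approximately N(0, 1).
rng = np.random.default_rng(4)
n, trials = 1000, 10_000
x = rng.exponential(scale=1.0, size=(trials, n))
y = np.sqrt(n) * (x.mean(axis=1) - 1.0)

print(y.mean(), y.std())   # ~0 and ~1
print(np.mean(y <= 1.0))   # ~0.841, matching Phi(1) for the standard normal
```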
So next Tuesday we will have an outside speaker -- it will be on bonds -- and I don't think anything from linear algebra will be needed there.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
11_Time_Series_Analysis_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. I want to complete the discussion of volatility modeling in the first part of the lecture today. Last time we addressed the definition of ARCH models, which allow for time-varying volatility in modeling the returns of a financial time series, and we were looking at modeling the euro-dollar exchange rate returns. We went through fitting ARCH models to those returns, and also looked at fitting the GARCH model. To recap, the GARCH model extends the ARCH model by adding some extra terms. If you look at the expression for the GARCH model, the first terms for the time-varying volatility sigma squared t form a linear combination of the past squared residual returns -- that's the ARCH part, p of those. So the current volatility depends on what's happened in excess returns over the last p periods. But then we add extra terms corresponding to q lags of the previous volatility. So what we're doing with GARCH models is adding extra parameters to the ARCH model, but the advantage of these extra parameters -- which relate the current volatility sigma squared t to the lagged values sigma squared t minus j -- is that we may be able to have a model with many fewer parameters overall. Indeed, if we fit these models to the exchange rate returns, what we found last time -- let me go through and show that -- was, basically, various fits of three ARCH models, of orders 1, 2, and 10 -- thinking we may need many lags to fit volatility -- and then the GARCH(1,1) model, with only one ARCH term and one GARCH term. The blue line in this graph shows the fitted GARCH(1,1) model compared with the ARCH models. Looking at this graph, one can see some features of how these models fit volatility, which are important to understand. One is that the ARCH models have a hard lower bound on the volatility: there's a constant term in the volatility equation, and because the additional terms are squared excess returns, the volatility is bounded below by that intercept. Depending on what range you fit the data over, that lower bound will be determined by the data you're fitting to. As you increase the ARCH order, you basically allow a lower lower bound. With the GARCH model, you can see that the blue line is predicting very different levels of volatility over the entire range of the series -- it really is much more flexible. Now, in these fits we are assuming Gaussian distributions for the innovations in the return series. We'll soon pursue alternatives to that, but let me talk just a little bit more about the GARCH model, going back to the lecture notes here. So let me expand this. OK -- so there's the specification, the GARCH(1,1) model.
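A minimal sketch of simulating a GARCH(1,1) path with Gaussian innovations; the parameter values are illustrative, not the fitted estimates from the case study.

```python
import numpy as np

# GARCH(1,1): sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2 + beta1 * sigma_{t-1}^2.
alpha0, alpha1, beta1 = 0.05, 0.10, 0.85     # alpha1 + beta1 < 1: stationary
n = 5000
rng = np.random.default_rng(0)

eps = np.zeros(n)
sig2 = np.zeros(n)
sig2[0] = alpha0 / (1.0 - alpha1 - beta1)    # start at the long-run variance
eps[0] = np.sqrt(sig2[0]) * rng.standard_normal()
for t in range(1, n):
    sig2[t] = alpha0 + alpha1 * eps[t - 1] ** 2 + beta1 * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

print(eps.var(), alpha0 / (1 - alpha1 - beta1))  # sample vs long-run variance
```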
One thing to note is that this GARCH(1,1) model relates to an ARMA -- an autoregressive moving average -- process in the squared residuals. If we look at the top line, the equation for the GARCH(1,1) model, consider eliminating sigma squared t by introducing a new innovation term, little u_t, defined as the difference between the squared residual and the model volatility: u_t = epsilon_t squared minus sigma_t squared. That difference has mean 0, because sigma_t squared is the conditional expectation of the squared excess residual return epsilon_t squared. If we substitute sigma_t squared = epsilon_t squared minus u_t into the GARCH equation, we get an ARMA model for the squared residuals: epsilon_t squared = alpha_0 + (alpha_1 + beta_1) epsilon_{t-1} squared + u_t - beta_1 u_{t-1}. So this implies an ARMA(1,1) model in epsilon_t squared, with white noise u_t that has mean 0 and variance 2 sigma to the fourth -- just plugging things in. And through our understanding of univariate ARMA models, we can express this ARMA model for the squared residuals with lag polynomials: a polynomial a(L) in the lag operator applied to the squared residuals equals a polynomial applied to the innovations. It's required that the roots of this a(L) operator, thought of on the complex plane, lie outside the unit circle, which corresponds to alpha_1 plus beta_1 being less than 1 in magnitude. So in order for these volatility models not to blow up -- to be covariance stationary -- we have these bounds on the parameters. OK, let's look at the unconditional volatility, or long-run variance, of the GARCH model. If you take expectations on both sides of the GARCH model equation, the long-run expectation of sigma squared t, call it sigma star squared, satisfies sigma star squared = alpha_0 + (alpha_1 + beta_1) sigma star squared, where sigma star squared is also the limiting expectation of the lagged volatility. Solving, sigma star squared = alpha_0 / (1 - alpha_1 - beta_1). In terms of the stationarity conditions for the process, for the long-run variance to be finite, you need alpha_1 plus beta_1 to be less than 1 in magnitude. And if you consider the general GARCH(p,q) model, the same argument leads to a long-run variance equal to alpha_0 -- the intercept term in the GARCH model -- divided by 1 minus the sum of all the other parameters. So these GARCH models lead to constraints on the parameters that are important to incorporate when we estimate those parameters, and it does complicate things. So, maximum likelihood estimation: the routine is the same for all models. We want to determine the likelihood function of our data given the unknown parameters, and the likelihood function is the probability density function of the data conditional on the parameters. So our likelihood, as a function of the unknown parameters c, alpha, and beta, is the value of the joint density of all the data conditional on those parameters. That joint density function can be expressed as the product of successive conditional densities of the time series, and those conditional densities are normal. So we can plug in the normal probability densities for the t-th innovation epsilon_t, and we just optimize that function.
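A sketch of that Gaussian log-likelihood as code, assuming the returns have already been demeaned and using the sample variance to initialize the variance recursion (one common choice among several):

```python
import numpy as np

def garch11_loglik(params, eps):
    """Gaussian log-likelihood of demeaned returns eps under GARCH(1,1)."""
    alpha0, alpha1, beta1 = params
    sig2 = np.empty(len(eps))
    sig2[0] = eps.var()                      # one common way to initialize
    for t in range(1, len(eps)):
        sig2[t] = alpha0 + alpha1 * eps[t - 1] ** 2 + beta1 * sig2[t - 1]
    # Sum of log N(0, sig2[t]) densities evaluated at the eps[t].
    return -0.5 * np.sum(np.log(2 * np.pi * sig2) + eps**2 / sig2)

rng = np.random.default_rng(7)
eps = rng.standard_normal(1000)              # placeholder returns for illustration
print(garch11_loglik((0.05, 0.10, 0.85), eps))
```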
Now, the challenge with estimating these GARCH models is in part the constraints on the underlying parameters, which need to be enforced: the alpha_i must be greater than 0, the beta_j must be greater than 0, and the sum of all of them must be between 0 and 1. Who in this class has had courses in numerical analysis and done some optimization of functions -- non-linear functions? Anybody? OK. Well, in addressing this kind of problem, which will come up with any complex model that you need to estimate, say via maximum likelihood, the optimization methods do really well if you're finding the minimum of a convex function, and it's always nice to minimize over an unconstrained range of parameters. One of the tricks in solving these problems is to transform the parameters to a scale where they're unlimited in range. If you have a positive parameter, you might use its log as the thing to optimize over. If a parameter is between 0 and 1, you might take the parameter divided by 1 minus the parameter, and then take the log of that -- that's unconstrained. So there are tricks for how you do this optimization, which come into play. Anyway, that's the likelihood with the normal distribution, and we have computer programs that will solve it directly, so we don't have to worry about this particular case. Once we fit this model, we want to evaluate how good it is, and the evaluation is based on the residuals from the model. What we have are the innovations epsilon hat t, which should be distributed with volatility sigma hat t. The standardized residuals should be uncorrelated, at least to the extent that they can be, and the squared standardized residuals should also be uncorrelated -- what we're trying to capture with these models is exactly the dependence in the squared residuals, which measures the magnitude of the excess returns. There are various tests for normality; I've listed some of the most popular here. And then there are issues of model selection, for deciding which GARCH model to apply. I wanted to go through an example of this analysis with the euro-dollar exchange rate, so let me go to this case study note. Let's see. There's a package in R called rugarch, for univariate GARCH models, which fits various GARCH specifications by maximum likelihood. With this particular library in R, I fit the GARCH model after first fitting the mean process for the exchange rate returns. When we looked at things last time, we basically modeled the squared returns directly; in fact, there may be an underlying mean process that needs to be specified as well. So in this section of the case note, I initially fit an autoregressive process, using the Akaike information criterion to choose the order, and then fit a GARCH model with Gaussian innovations. And this is a plot of the normal q-q plot of the autoregressive residuals. What you can see is that the points lie along a straight line in the middle of the range, but at the extremes they depart from that line. The plot compares standardized quantiles: in terms of standard units away from the mean, we tend to get many more very high and very low residuals than the Gaussian distribution would predict.
So that really isn't fitting very well. If we proceed and fit -- OK, actually, that plot was just for the simple ARCH model with no GARCH terms. This one is the q-q plot for the GARCH model under the Gaussian assumption. Here we can see that the residuals suggest the model may do a pretty good job when things are only a few standard deviations away from the mean -- less than 2 or 2.5 -- but when we get to more extreme values, it isn't modeling things well. So one alternative is to consider a heavier-tailed distribution than the normal, namely the t distribution, and to identify which t distribution best fits the data. Let's just look at what ends up being the maximum likelihood estimate for the degrees-of-freedom parameter, which is 10. This shows the q-q plot with a non-Gaussian distribution that's t with 10 degrees of freedom: it explains these residuals quite well, accommodating their heavier-tailed distribution. With this GARCH model, let's compare estimates of volatility under the t distribution versus the Gaussian. Here's a graph showing time series plots of the estimated volatility over time, which actually look quite close. When you look at the differences, there are some, but it turns out that the volatility estimates from GARCH with Gaussian innovations and GARCH with t innovations are really very, very similar. The heavier tails of the t distribution mean that the distribution of actual volatility outcomes is wider, but in terms of estimating the volatility level, you get quite similar estimates. And this display -- which you'll be able to see more clearly in the case notes that I'll post -- shows that these are really quite similar in magnitude. Now, the value-at-risk concept that was discussed by Ken a couple of weeks ago in his lecture from Morgan Stanley concerns estimating the likelihood of returns exceeding some threshold. If we use the t distribution for measuring variability of the excess returns, the computations in the notes indicate how you would compute these value-at-risk limits. If you compare the t distribution with a Gaussian distribution at the usual nominal levels for value at risk, like 2.5% or 5%, surprisingly you won't see too much difference: it's really in the extreme tails of the distribution that things come into play. And I wanted to show you how that plays out with another graph here. Those of you who have had a statistics course before have heard that a t distribution can be approximated well by a normal if the degrees of freedom are above some level. Who wants to suggest a degrees of freedom you might require before you're comfortable approximating a t with a normal? Danny? AUDIENCE: 30 or 40. PROFESSOR: 30 or 40. Sometimes people say even 25: above 25, you can almost expect the t distribution to be a good approximation to the normal. Well, this is a graph of the PDF for a standard normal versus a standard t with 30 degrees of freedom, and you can see that the density functions are very, very close.
The CDFs -- the cumulative distribution functions, giving the likelihood of being less than or equal to the horizontal value, ranging between 0 and 1 -- are almost indistinguishable. But if you look at the tails of the distribution -- here I've computed the log of the CDF -- you basically have to move much more than two standard deviations away from the mean before there's really a difference with the t distribution with 30 degrees of freedom. Now let me page through this, reducing the degrees of freedom. So here is 20 degrees of freedom. And here's 10 degrees of freedom -- in our case, the best fit for the t distribution. What you can see is that, in standard deviation units, down to about two standard deviations below the mean we get virtually the same probability mass in the lower tail, but as we go to four or six standard deviations, we get much heavier mass with the t distribution. In discussions of results in finance, when people fit models, they talk about, oh, there was a six standard deviation move -- which under a normal distribution is just virtually impossible. Well, with t distributions, a six standard deviation move occurs about 1 in 10,000 times according to this fit, so such moves are actually not that rare. So it's important to know that these t distributions benefit us by giving a much better gauge of what the tail of the distribution is like. We call these distributions leptokurtic, meaning heavier tailed than a normal distribution. Actually, lepto means slender, I believe, if you go to the Greek origin of the word, and you can see that the blue curve, the t distribution, is a bit more slender in the center of the distribution, which allows it to have heavier tails. All right, so t distributions are very useful. Let's go back to this case note, which goes through fitting the t distribution -- identifying the degrees of freedom for the t model. With the rugarch package, we can get the log-likelihood of the data fit under the t distribution assumption, and here's a graph of the negative log-likelihood versus the degrees of freedom in the t model. With maximum likelihood, we identify the value that minimizes the negative log-likelihood, and that comes out at the value 10. All right, let's go back to these notes and see what else we want to talk about. OK. With these GARCH models, we are actually able to model volatility clustering. Volatility clustering is where, over time, you expect volatility to be high during some periods and low during others, and the GARCH model can accommodate that: large volatilities tend to be followed by large ones, and small volatilities by small ones. Also, the returns have heavier tails than Gaussian distributions -- actually, even if we have Gaussian errors in the GARCH model, the returns are still heavier tailed than a Gaussian; the homework goes into that a little bit. One of the original papers by Engle with Bollerslev, who introduced the GARCH model, discusses these features and how useful they are for modeling financial time series.
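The tail comparison behind that remark can be sketched numerically; here the t with 10 degrees of freedom is rescaled to unit variance, which is an assumption about how the plotted curves were standardized:

```python
import numpy as np
from scipy import stats

# Probability of a move k standard deviations below the mean: standard normal
# versus a t(10) rescaled to unit variance.
nu = 10
sd = np.sqrt(nu / (nu - 2))            # t(nu) has standard deviation sqrt(nu/(nu-2))
for k in (2, 4, 6):
    print(k, stats.norm.cdf(-k), stats.t.cdf(-k * sd, df=nu))
# At k = 2 the two are comparable; at k = 6 the normal tail is ~1e-9 while
# the t tail is orders of magnitude larger.
```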
Now, a property of these models that may be obvious, perhaps: these are models appropriate for modeling covariance stationary time series. So the volatility, which is a measure of the squared excess return, is basically a covariance stationary process. What does that mean? It means it's going to have a long-term mean. So with these GARCH models that are covariance stationary, there's going to be a long-term mean of the GARCH process, and this discussion details how the GARCH process is essentially a mean reversion of the volatility to that value: the excess volatility of the squared residuals relative to their long-term average is some multiple of the previous period's excess volatility. So if we build forecasting models of volatility with GARCH models, what's going to happen? In the long run, we predict that any volatility value reverts to the long-run average, and in the short run, it moves incrementally toward that value. So these GARCH models are very good at describing volatility relative to the long-term average, but in terms of their usefulness for prediction, they really predict that volatility is going to revert back to the mean at some rate. And the rate at which the volatility reverts back is given by alpha_1 plus beta_1. That number, which is less than 1 for covariance stationarity, measures how quickly you revert back to the mean, and the sum is called the persistence parameter in GARCH models. So, is volatility persistent or not? The larger alpha_1 plus beta_1 is, the more persistent volatility is, meaning it reverts back to the long-run average very, very slowly. In the implementation of volatility estimates with the RiskMetrics methodology, they actually don't assume that there is a long-run volatility: alpha_0 is 0, and alpha_1 and beta_1 sum to 1, with beta_1 equal to, say, 0.95. So you are actually tracking a potentially non-stationary volatility, which allows you to estimate the volatility without presuming that a long-run average consistent with the past exists. There are many extensions of the GARCH models, and there's a wide literature on them. For this course, I think it's important to understand the fundamentals of these models in terms of how they're specified under Gaussian and t assumptions. Extending them can be very interesting, and there are many papers to look at for that.
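A sketch of the mean-reverting variance forecasts implied by the model (illustrative parameters): the gap between the h-step-ahead forecast and the long-run variance shrinks by the persistence factor alpha_1 + beta_1 at each step.

```python
# h-step GARCH(1,1) variance forecast:
#   sigma2[h] - longrun = (alpha1 + beta1) * (sigma2[h-1] - longrun).
alpha0, alpha1, beta1 = 0.05, 0.10, 0.85
longrun = alpha0 / (1 - alpha1 - beta1)      # = 1.0 here

sigma2 = 4.0                                 # start well above the long-run level
for h in range(1, 11):
    sigma2 = alpha0 + (alpha1 + beta1) * sigma2
    print(h, round(sigma2, 4))               # decays geometrically toward 1.0
```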
OK, let's pause for a minute and get to the next topic. All right. The next topic is multivariate time series. Two lectures ago, we talked about univariate time series and the basic methodologies there; we're now going to extend that to multivariate time series. It turns out there's a multivariate Wold representation theorem, an extension of the univariate one. There are autoregressive processes for the multivariate case -- vector autoregressive processes -- where least squares estimation comes into play, and we'll see how our understanding of regression analysis allows us to specify these vector autoregressive processes nicely. There's an optimality property of ordinary least squares estimates component-wise, which we'll highlight in about half an hour, and we'll go through maximum likelihood estimation and model selection methods, which are straightforward extensions of the same concepts for univariate time series and univariate regressions. So let's introduce the notation for multivariate time series. We have a stochastic process, which now is multivariate: bold X_t is an m-dimensional-valued random variable, and it's a stochastic process that varies over time t. We can think of this as m different time series corresponding to the m components of the given process. Say, with exchange rates, we could be modeling m different exchange rates and want to model them jointly as a time series; or we could have a collection of stocks that we're modeling. Each of the components individually can be treated as a univariate series with univariate methods. In the multivariate case, we extend the definition of covariance stationarity to correspond to finite, bounded first and second order moments. So we need the first order moment of the multivariate time series: mu now is an m-vector of the expected values of the individual components, which we can denote by mu_1 through mu_m. So we have an m-vector for our mean. Then, for the variance/covariance matrix, let's define gamma_0 to be the variance/covariance matrix of the t-th observation of our multivariate process: gamma_0 = E[(X_t - mu)(X_t - mu)']. When we write that down, X_t minus mu is an m by 1 vector and (X_t - mu) prime is a 1 by m vector, so the product is an m by m quantity. The (1,1) element of that product is the variance of X_(1,t); the diagonal entries are the variances of the component series; and the off-diagonal values are the covariances between the i-th row series and the j-th column series, as given by the i-th row of X and the j-th column of X transpose. So we're just collecting all the variances and covariances together, and the notation is very straightforward and simple with the matrix notation given here. Now, the correlation matrix r_0 is obtained by pre- and post-multiplying the covariance matrix gamma_0 by a diagonal matrix containing the reciprocals of the square roots of the diagonal of gamma_0. What's a correlation? The correlation between two random variables where we've standardized each to have mean 0 and variance 1. So what we want to do is divide all the variables through by their standard deviations and compute the covariance matrix on that new scale, and that's equivalent to pre- and post-multiplying by the diagonal matrix of inverse standard deviations. With matrix algebra, that formula is, I think, very clear. Now, the previous discussion was just the contemporaneous covariance matrix of the time series values at the given time t with itself. We also want to look at the cross-covariance matrices: how do the current values of the multivariate time series, X_t, covary with the k-th lag of those values? So gamma_k looks at how the current period's vector of values covaries with the k-th lag of those values; this covariance matrix has elements given in this display. And we can define the cross-correlation matrix similarly, by pre- and post-multiplying by the inverse of the standard deviations, where the diagonal of gamma_0 gives the variances.
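A sketch of estimating the cross-covariance matrices gamma_k from data; the function and the white-noise toy data are illustrative, not from the case study:

```python
import numpy as np

# Sample cross-covariance Gamma_k for an m-variate series X with rows X_t:
# averaging the outer products (X_t - mu)(X_{t-k} - mu)'.
def cross_cov(X, k):
    Xc = X - X.mean(axis=0)
    return Xc[k:].T @ Xc[:len(Xc) - k] / (len(Xc) - k)

rng = np.random.default_rng(5)
X = rng.standard_normal((5000, 3))      # toy data: multivariate white noise
print(cross_cov(X, 0).round(2))         # ~ identity matrix (symmetric)
print(cross_cov(X, 1).round(2))         # ~ zero matrix; in general not symmetric
```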
Now, properties of these matrices: gamma_0 is a symmetric matrix, as we had before, but gamma_k for k different from 0 is not symmetric. Basically, you may have lags of some variables that are positively correlated with others and not vice versa, so the off-diagonal entries here aren't necessarily even of the same sign, let alone equal and symmetric. With these covariance matrices, one can look at how things covary and whether there is dependence between them. And you can define leading relationships: the j-star component of the multivariate time series may lead the j-th one if the k-th lag of component j-star has non-zero covariance with the current value of component j. So X_(t, j star) will lead X_(t, j): there's information in the lagged values of j-star for the component j. So if we're trying to build models -- even linear regression models -- where we're trying to predict values, then if there's a non-zero covariance, we can use that variable's information to project what one variable is given the other. Now, it can be the case that you have non-zero covariance in both directions, and that suggests there can be feedback between these variables: it's not just that one variable causes another. In economics and finance, there's a notion of Granger causality. Granger and Engle got the Nobel Prize a number of years ago based on their work, and that work deals, in part, with identifying judgments of Granger causality between variables in economic time series. Granger causality is basically non-zero correlation between variables, where lags of one variable predict, or cause changes in, another. All right. I want to alert you to the existence of this Wold decomposition theorem. This is an advanced theorem, but it's useful to know it exists. It extends the univariate Wold decomposition theorem: whenever we have a covariance stationary process, there exists a representation of that process as the sum of a deterministic process and a moving average process of white noise. So if you're modeling a time series and you're going to specify a covariance stationary process for it, there does exist a Wold decomposition representation of it. You can identify the deterministic process that the series might follow -- it might be a linear trend over time, or an exponential trend -- and if you remove that deterministic process V_t, then what remains is a process that can be modeled as a moving average of white noise. Now, everything is changed from the univariate case to the multivariate case, so we have matrices in place of the constants from before. The new concepts here are: we have a multivariate white noise process, a process eta_t which is m-dimensional with mean 0, and the variance matrix of this m-vector is sigma, now an m by m variance/covariance matrix of the components, which must be positive semi-definite. For white noise, the covariances between, say, the current innovation at time t and any lag of its value are 0 -- these are uncorrelated multivariate white noise processes, uncorrelated with each other at all lags. And the innovation eta_t has covariance 0 with the deterministic process; actually, that's pretty much a given if the process is deterministic.
Now, the term psi_k: basically, we have this m-vector X_t equal to an m-vector deterministic process V_t plus this weighted average of innovations. What's required is that the sum of the terms psi_k psi_k transpose converges. If you were to take that X_t process and compute the variance/covariance matrix of this representation, you would get terms in the covariance matrix that include this sum, so the sum has to be finite in order for the process to be covariance stationary. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes? AUDIENCE: Could you define what you mean by innovation? PROFESSOR: Oh, OK. The innovation process -- let me go back up here. If we have, as in this case, our stochastic process X_t, and we have, say, F_(t-1) equal to the information on X_(t-1), X_(t-2), and so on -- the information set available before time t -- then we can model X_t as the expected value of X_t given F_(t-1), plus an innovation. Our objective with these models is to think of how the process evolves: we model the process as well as possible using information up to just before time t, and then there's some disturbance about that model -- something new that happened at time t that wasn't available before. That's the innovation process. So this representation with the Wold decomposition is representing, basically, the bits of information affecting the process that occur at time t and weren't available prior to that. All right. Let's move on to vector autoregressive processes. This representation of a vector autoregressive process is the extension of the univariate autoregressive process to m dimensions. So our X_t is an m-vector, equal to some constant m-vector C, plus an m by m matrix phi_1 times the first lag X_(t-1), plus another matrix phi_2 times the second lag X_(t-2), up to the p-th term, phi_p times X_(t-p), plus the innovation term eta_t. This is essentially how a univariate autoregressive process extends to the m-variate case, and what it allows one to do is model how a given component of the multivariate series -- like one exchange rate -- varies depending on how other exchange rates might vary; exchange rates tend to co-move together, in that example. So if we look at what this represents for a single component series, we can consider fixing j, a component of the multivariate process -- it could be the first, the last, or the j-th, somewhere in the middle. That component time series -- a particular exchange rate series, say, or whatever we're focused on in our modeling -- follows a generalization of the autoregressive model: we have the autoregressive terms of the j-th series on lags of the j-th series up to order p, as in the univariate autoregressive model, but we also add terms corresponding to the relationship between X_j and the other components X_(j star) -- how the j-th component depends on lags of the other components of the multivariate series. Those are given here. So it's a convenient way to allow for interdependence among the components and to model that. OK. This slide deals with representing a p-th order process as a first order process with vector autoregressions.
Now, the concept here is a very powerful one that's applied throughout time series methods: when you are modeling dependence that goes back a number of lags, say p lags, the structure can be re-expressed as a first order dependence only. And it's much easier to deal with a lag-one dependence than with p-lag dependence and the complications involved in that. In the early days of fitting autoregressive moving average processes and various smoothing methods, accommodating p lags complicated the analysis enormously, but one can re-express it as just a first order lag problem. In this case, what one does for a vector autoregressive process of order p is simply stack values of the process. Let me highlight what's going on there. If we have X_1, X_2, up to X_n, which are all m by 1 values of the stochastic process, then define Z_t to be the stacked vector (X_t', X_(t-1)', ..., X_(t-p+1)')'. There are p terms, so Z_t is mp by 1; in the lecture notes the primes indicate transposes. The lagged value is Z_(t-1) = (X_(t-1)', X_(t-2)', ..., X_(t-p)')'. If you define Z_t and Z_(t-1) this way, then Z_t = D + A Z_(t-1) + F_t, where the constant term D has C as its first block and 0's everywhere else, the first block row of the companion matrix A is (phi_1, phi_2, ..., phi_p) with identity blocks below it, and F_t has the innovation as its first block and 0's everywhere else. So Z_t is this linear transformation of Z_(t-1), and we have a very simple form for the constant term and for the F vector. This renders the model a first order time series model with a larger multivariate series, mp by 1. Now, with this representation we can demonstrate that the process is going to be stationary if all eigenvalues of the companion matrix A have modulus less than 1. If the eigenvalues of A are less than 1 in modulus, we won't get explosive behavior as the process increments over time, with every previous value getting multiplied by the A matrix and the process scaled over time by powers of A. So that is required: all eigenvalues of A must have modulus less than 1. Equivalently, all roots of the determinant equation det(I_m - phi_1 z - ... - phi_p z^p) = 0 must lie outside the unit circle. You remember there was a condition for univariate autoregressive models to be stationary -- that the roots of the characteristic equation are all outside the unit circle -- and the class notes went through the derivation of that. This is the extension to the multivariate case: one solves for the roots of a polynomial in z and determines whether they are outside the unit circle. Who can tell me the order of the polynomial for this determinant equation? AUDIENCE: [INAUDIBLE] mp. PROFESSOR: mp. Yes. It's of degree mp, because in a determinant you take products of the m entries of the matrix, in various combinations.
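A quick numeric check of the stationarity condition, reusing the hypothetical VAR(2) coefficients from the sketch above: build the companion matrix A and inspect the moduli of its eigenvalues.

```python
import numpy as np

Phi = [np.array([[0.5, 0.1], [0.0, 0.3]]),  # hypothetical phi_1, phi_2 as before
       np.array([[0.2, 0.0], [0.1, 0.2]])]
m, p = 2, 2

# Companion matrix A: first block row is (phi_1, ..., phi_p),
# identity blocks on the sub-diagonal, zeros elsewhere.
A = np.zeros((m * p, m * p))
A[:m, :] = np.hstack(Phi)
A[m:, :-m] = np.eye(m * (p - 1))

eigvals = np.linalg.eigvals(A)
print(np.abs(eigvals).max() < 1)  # True => this VAR(2) is covariance stationary
```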
So that's an mp-degree polynomial. All right. The mean of the stationary VAR process can be computed rather easily by taking expectations on both sides. If we take expectations of X_t across both sides, we get that mu equals the C vector plus the sum of the phi_k's times mu, plus 0. So mu, the unconditional mean of the process, has the formula mu = (I - phi_1 - ... - phi_p)^(-1) C, just solving for mu from the second line to the third line. Here we can see that the inverse of (I - phi_1 - ... - phi_p) has to exist. And if we plug the value of C in terms of the unconditional mean back in, we get this expression for the original process: if we demean the process, there's basically no constant term -- it's 0 -- and the mean-adjusted process follows the vector autoregression with no constant, which is often the form used when the model is specified. Now, this vector autoregression model can be expressed as a system of regression equations. With multivariate data we'll have n sample observations x_t, the m-vector of the multivariate process observed at n time points. For the computations here we're going to assume we have p pre-sample observations available, so we're essentially considering models where we condition on the first p time points in order to facilitate the estimation methodology. Then we can set up m regression models corresponding to each component of the m-variate series. Our collection of data values is x_1', x_2', down to x_n', an n by m matrix -- the first row corresponds to the first time point, the n-th row to the n-th. We can set up m regression models where we model the j-th column of this matrix, just picking out the univariate time series corresponding to the j-th component. That's y^j, and we model it as Z beta_j plus epsilon_j, where each row of Z is given by the lagged values of the multivariate process: for the t-th observation we have the t-minus-first, t-minus-second, up to t-minus-p-th values -- p m-vectors -- plus a constant. So the j-th time series follows a linear regression model on the lags of the entire multivariate series up to p lags, with regression parameter beta_j, and the beta_j regression parameters correspond one-to-one to the elements of the phi matrices. I'm using a notation where superscript j denotes the j-th component of the multivariate stochastic process. So we have an (mp + 1)-vector of regression parameters for each series j, and an n-vector epsilon^j of innovation errors for each series. If the j-th column is y^j, we're modeling it as the matrix Z times beta_j plus epsilon_j, where y^j is n by 1, Z is n by (mp + 1), and beta_j is the (mp + 1)-dimensional regression parameter. OK. One might think one could consider these regressions for each component series separately.
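Continuing the same toy example, the unconditional mean formula is one line of linear algebra:

```python
import numpy as np

Phi = [np.array([[0.5, 0.1], [0.0, 0.3]]),  # same hypothetical coefficients
       np.array([[0.2, 0.0], [0.1, 0.2]])]
C = np.array([0.1, -0.2])
m = 2

# mu = (I - phi_1 - ... - phi_p)^{-1} C; the inverse exists for a stationary VAR.
mu = np.linalg.solve(np.eye(m) - sum(Phi), C)
print(mu)
```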
But to consider them all together, we can define the multivariate regression model, which has the following form. We have the n-vectors for the first component, the second component, up to the m-th component -- an n by m matrix of dependent variables, where each column corresponds to a different component series -- following a linear regression model with the same Z matrix but different regression coefficient vectors, beta_1 through beta_m, corresponding to the different components of the multivariate series, and with errors epsilon_1, epsilon_2, up to epsilon_m. So y^1, y^2, up to y^m is essentially the original matrix of our multivariate time series: the first component in the first column, the m-th component in the m-th column. And the explanatory variables matrix -- Z in this case -- corresponds to lags of the whole m-variate process up to p lags, so that's mp columns, plus 1 for the constant. So this is the setup for a multivariate regression model. In terms of how one specifies this, in economics it's also related to seemingly unrelated regressions, which you'll find in econometrics. If we want to specify this multivariate model, what we could do is specify each of the component models separately, because we can think of a univariate regression model for each component series. This slide indicates the formulas for that. If we don't know anything about multivariate regression, we can say: let's start by doing the univariate regression of each component series on the lags. So we get the least squares estimates given by the usual formula, beta-hat_j = (Z'Z)^(-1) Z' y^j, with residuals epsilon-hat_j = y^j - Z beta-hat_j. These are familiar formulas. If we do this for each component series j, we get sample estimates of the innovation process -- basically the whole eta series -- and from these estimates of the innovations we can define our covariance matrix for the innovations as the sample covariance matrix of these etas. So all of these formulas apply very straightforward estimation methods for the parameters of a linear regression, followed by estimating the variances/covariances of the innovation terms. From this we have estimates of the process in terms of sigma and the beta-hats -- but it's done assuming we can treat each of the component regressions separately. A rather remarkable result is that these component-wise regressions are actually the optimal estimates for the multivariate regression as well. As mathematicians, I think this kind of result is rather neat and elegant. Maybe some of you will think it's obvious, but it isn't quite obvious that this component-wise estimation should be optimal. The next section of the lecture notes goes through the argument. In the interest of time, I'm going to just highlight what the results are; the details are in the notes, and I'll be happy to go into more detail during office hours.
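The component-wise, equation-by-equation OLS procedure just described fits in a few lines. This is a sketch conditioning on the first p observations; the function name fit_var_ols is invented here, and X could be, for example, the simulated series from the earlier sketch.

```python
import numpy as np

def fit_var_ols(X, p):
    """Equation-by-equation OLS for a VAR(p); a minimal sketch.

    Conditions on the first p observations. Returns the (mp+1) x m coefficient
    matrix B (constant in the first row) and the innovation covariance estimate."""
    n, m = X.shape
    # Each row of Z is (1, X_{t-1}', ..., X_{t-p}').
    Z = np.column_stack([np.ones(n - p)] +
                        [X[p - k: n - k] for k in range(1, p + 1)])
    Y = X[p:]
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # solves each of the m equations
    resid = Y - Z @ B
    Sigma = resid.T @ resid / (n - p)          # sample covariance of the innovations
    return B, Sigma

# Usage: B, Sigma = fit_var_ols(X, 2)
```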
But if we're fitting a vector autoregression model where there are no constraints on the coefficient matrices phi_1 through phi_p, then these component-wise estimates, even accounting for an arbitrary covariance matrix sigma for the innovations, are equal to the generalized least squares estimates of the underlying parameters. You'll recall we talked about the Gauss-Markov theorem, where we extended the assumption of equal variances across observations to unequal variances and covariances. It turns out these component-wise OLS estimates are, in fact, the generalized least squares estimates, and under the assumption of Gaussian distributions for the innovations they are maximum likelihood estimates. This theory applies Kronecker products. We're not going to have any homework with Kronecker products -- these notes are for those with a more extensive background in linear algebra -- but it's a very nice use of the Kronecker product operator. The notation, an x in a circle, which I'll call Kronecker, takes a matrix A and a matrix B and forms the matrix consisting of each element of A times the whole matrix B. So if A is m by n and B is p by q, we end up with an mp by nq matrix with this block structure. It's a very simple definition, and using properties of matrix transposition you can prove the listed properties of the Kronecker product. There's also a vec operator, which takes a matrix and simply stacks its columns. In Ivan's talk last Tuesday on modeling the volatility surface -- a surface in three dimensions, with two dimensions explaining it -- the same device came up: you can stack the columns of a matrix and model a vector instead of a matrix of values. So the vec operator lets us manipulate terms into a more convenient form. This multivariate regression model, set up with the n by m matrix Y, can be expressed in linear regression form with y star equal to vec(Y): we line up y^1, y^2, down to y^m, which is nm by 1. That's equal to a matrix times the stacked regression coefficients beta_1, beta_2, down to beta_m, plus the stacked errors epsilon_1, epsilon_2, down to epsilon_m. So we vectorize the beta matrix, vectorize epsilon, and vectorize Y. In order to write this as a single univariate-style regression model, we need the design matrix to be block diagonal: a Z in the first diagonal block, corresponding to beta_1 for y^1, and 0's everywhere else; a Z in the second diagonal block; and so forth. So this is just re-expressing everything in this notation. But the notation is very nice because at the end of the day we have a regression model just like the one from our regression analysis, so all the theory for specifying these models carries through from univariate regression. One can then go through the technical argument to show that the generalized least squares estimate is equivalent to the component-wise values. And that's very, very good. Maximum likelihood estimation with these models.
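The identity that makes the vectorized argument work is vec(AXB) = (B' ⊗ A) vec(X). Here is a two-line numerical confirmation with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
Xm = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

vec = lambda M: M.reshape(-1, order="F")  # stack the columns

lhs = vec(A @ Xm @ B)
rhs = np.kron(B.T, A) @ vec(Xm)  # vec(A X B) = (B' kron A) vec(X)
print(np.allclose(lhs, rhs))     # True
```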
Well, we actually use this vectorized notation to define the likelihood function. Under these assumptions about the linear regression model, we have an (n times m)-vector of dependent variable values which is multivariate normal with mean X star beta star and covariance matrix sigma star, where sigma star is the Kronecker product structure built from the identity I_n and sigma. If you go through the math, everything matches up with the assumptions. The conditional probability density function of the data is the usual density of a normal sample: the unknown parameters are beta star and sigma, and the likelihood is the joint density of this normal linear regression model. So this corresponds to what we had before in regression analysis; we just have a more complicated definition of the independent variables matrix X star and of the variance/covariance matrix sigma star. The log-likelihood function ends up being a term proportional to the log of the determinant of the sigma matrix, minus one half Q(beta, sigma), where Q(beta, sigma) is the least squares criterion for the component models summed up. So the component-wise maximum likelihood estimates of the underlying parameters are the same as for the full multivariate model. In terms of estimating the covariance matrix, there's a notion called the concentrated log-likelihood, which comes into play in models with many parameters. In this model we have unknown regression parameters beta and the innovation covariance matrix sigma. It turns out that the maximum likelihood estimate of beta does not depend -- not statistically independent, but functionally does not depend -- on the value of the covariance matrix sigma: whatever sigma is, we have the same maximum likelihood estimate for the betas. So we can consider the log-likelihood with beta set equal to its maximum likelihood estimate; then we have a function that depends only on the data and the unknown parameter sigma. That's the concentrated likelihood function to be maximized. The maximization of a log determinant of a matrix together with a trace term involving that matrix and an estimate of it has been solved; it's a bit involved, but if you're interested in the mathematics -- how one takes derivatives of determinants and so forth -- there's a paper by Anderson and Olkin that goes through all the details, which you can Google on the web. Next, there are model selection criteria that can be applied. These have been applied before for regression models and univariate time series models: the Akaike Information Criterion, the Bayes Information Criterion, and the Hannan-Quinn criterion. These definitions are all consistent with the earlier ones: you take the likelihood function and maximize it subject to a penalty for the number of unknown parameters, as given here. OK, then the last section goes through the asymptotic distribution of the least squares estimates, and I'll let you read that on your own. For this lecture I put together an example of fitting vector autoregressions with some macroeconomic variables, and I wanted to point that out to you. So let me go to this document here.
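Before turning to the example, here is a sketch of the model selection criteria just mentioned, in one common parametrization; texts differ by constant factors, so treat the exact formulas as an assumption rather than the lecture's definitions.

```python
import numpy as np

def var_ic(Sigma_hat, m, p, n):
    """AIC / BIC / Hannan-Quinn for a fitted VAR(p); a hedged sketch.

    Uses log|Sigma_hat| plus a penalty times the number of freely
    estimated coefficients, k = m * (m*p + 1)."""
    k = m * (m * p + 1)
    ll_term = np.log(np.linalg.det(Sigma_hat))
    return {"AIC": ll_term + 2 * k / n,
            "BIC": ll_term + k * np.log(n) / n,
            "HQ":  ll_term + 2 * k * np.log(np.log(n)) / n}

# Fit VAR(p) for p = 1, 2, 3, ... and pick the order minimizing the criterion.
```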
Modeling macroeconomic time series is an important topic -- it's what central bankers do. They want to understand what factors are affecting the economy in terms of growth, inflation, and unemployment, and what the impact of interest rate policies is. There are some really important papers by Robert Litterman and Christopher Sims dealing with fitting vector autoregression models to macroeconomic time series. The framework within which they specified these models was actually Bayesian, which extends the maximum likelihood method by incorporating reasonable prior assumptions about what the parameters ought to be. But in this note, I go through collecting various macroeconomic variables directly off the web using the package R -- these are all data you can get your hands on. Here's the unemployment rate from January 1946 up through this past month; anyone can see how it has varied between well under 4% and over 10%, as it was recently. There's also the Fed funds rate, which is one of the key variables the Federal Reserve Open Market Committee controls -- or I should say controlled in the past -- to try to affect the economy. Now that rate is set at almost zero, and other means are applied to have an impact on economic growth and the economic situation of the economy. There's also a bunch of other variables, such as the CPI, which is a measure of inflation. What this note goes through is the specification of vector autoregression models for these series. I use just a small set of cases: the unemployment rate, Fed funds, and the CPI. There are multivariate versions of the autocorrelation function, as given on the top right panel here, between these variables, and one can also compute the partial autocorrelation function. You'll recall that autocorrelation functions and partial autocorrelation functions help us understand what order of ARMA process might be appropriate for a univariate series. For multivariate series, there are also cross-lags between variables that are important, and these can all be captured with vector autoregression models. So this goes through and shows how these things are correlated with themselves and with each other. At the end of the note, some impulse response functions are graphed, which look at the impact of an innovation in one of the components of the multivariate time series. For example, if Fed funds were increased by a certain amount, what would the likely impact be on the unemployment rate, or on GNP -- basically the production level of the economy? You can look at the impulse response of an innovation in any one component on all the others. In this case, the left panel shows what happens when unemployment has a unit impulse up, and the second panel shows what's likely to happen to the Fed funds rate: it turns out it's likely to go down. That reflects what, historically, was the policy of the Fed -- to reduce interest rates if unemployment was rising. So these impulse response functions correspond essentially to the innovation terms in the Wold decomposition.
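The impulse responses described here can be computed from the fitted phi matrices via the moving average (Wold) coefficients. A minimal sketch follows; it is non-orthogonalized (an orthogonalized version would multiply by a Cholesky factor of the innovation covariance), so it only approximates what a full VAR package reports.

```python
import numpy as np

def irf(Phi, horizon):
    """Impulse responses of a VAR from its moving average (Wold) coefficients.

    Psi_0 = I and Psi_s = sum_{k=1..min(s,p)} phi_k Psi_{s-k}; entry [i, j] of
    Psi_s is the response of component i, s steps after a unit innovation in
    component j."""
    m = Phi[0].shape[0]
    p = len(Phi)
    Psi = [np.eye(m)]
    for s in range(1, horizon + 1):
        Psi.append(sum(Phi[k] @ Psi[s - 1 - k] for k in range(min(s, p))))
    return Psi
```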
And why are these important? Well, they make explicit the connection between the moving average representation and these time series models. The way these graphs are generated is essentially by finding the Wold decomposition and reading the responses off those coefficients. OK, we'll finish there for today.
Lecture 24: HJM Model for Interest Rates and Credit
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. DENIS GOROKHOV: So I work at Morgan Stanley. I run corporate treasury strategies at Morgan Stanley -- corporate treasury is the business unit responsible for issuing and risk managing Morgan Stanley debt. I also run desk strategies on the New York inflation desk; that's part of the global interest rate business, responsible for trading derivatives linked to inflation. Today I'm going to talk about the HJM model. The abbreviation stands for Heath-Jarrow-Morton, the three researchers who developed this framework in the early 1990s. It's a very general framework for pricing derivatives on interest rates and on credit. On Wall Street, big banks make a substantial amount of money by trading all kinds of exotic products, exotic derivatives. Big banks like Morgan Stanley, Goldman, and JP Morgan trade thousands and thousands of different types of exotic derivatives. A typical problem the business faces is that new types of derivatives arrive all the time, so you need to be able to respond quickly to demand from clients. And you need to be able not just to quote the price of a derivative -- you need to be able to risk manage it. Because if you sold an option and got some premium, and something goes against you, you need to pay in the end. So you need to be able to hedge. You can think about the HJM model -- this kind of framework -- as something similar to theoretical physics, in a way. There you have beautiful, exactly solvable models -- for example, the hydrogen atom in quantum mechanics. It's relatively straightforward to solve: we have an equation which can be solved exactly, and we can find the energy levels and understand it fairly quickly. But if you go to more complex problems -- for example, you add one more electron and you have a helium atom -- it's already much more complicated. And for complicated atoms or even molecules, it's unclear what to do, so people came up with approximate methods which nevertheless let you solve everything very accurately numerically. HJM is a similar framework: it allows one to price all kinds of [INAUDIBLE] derivatives. It's very general and very flexible -- it can incorporate new payoffs, all kinds of correlations between products, and so on. And the HJM model sits naturally within a more general framework like Monte Carlo simulation. Before going into the details of pricing exotic interest rate and credit derivatives, let me first explain how this framework appears in the most common type of derivatives -- equity-linked products. A very, very simple example: say we have a derivatives desk at some firm, and they sell all kinds of products. Ideally, if there's a client who wants to buy something from you, the easiest approach would be to find another client and do the opposite transaction, so that you're market neutral -- at least in theory, if you don't take counterparties and so on into account.
However, that's rather difficult in general -- the portfolios are very complicated, and there's always some residual risk. This is where dynamic hedging comes in. In this very simple example, a dealer has just sold a call option on a stock. If you do this, then in principle the amount of money you can lose is unlimited, so you need to hedge dynamically -- by trading the underlying, for example, in this case. Just as a brief illustration of the stock market, you can see how random it has been over the last 20 years or so. From the beginning of the 1990s to around 2000 we see a really sharp increase, then the dot-com bubble, and then the banking crisis of 2008. If you trade derivatives whose payoff depends, for example, on the FTSE 100 index, you should be very careful: the market can drop, and you need to be hedged. So you need to come up with good models which can recalibrate to the market and which can truly risk manage your position. The general idea of pricing derivatives is that one starts from some stochastic process. The example here is probably the simplest possible -- nevertheless very instructive -- model, essentially the standard Black-Scholes formalism, where the stock follows log-normal dynamics. I have a question -- do you have a pointer somewhere, or not? It's just easier -- OK. PROFESSOR: Let's see. There's also a pen here that you can use. DENIS GOROKHOV: Oh, I see. PROFESSOR: Have you used this before? You press the color you want to use, and then you can draw -- you press on the screen. DENIS GOROKHOV: Oh, I see. Excellent. That's even better. OK, so the market is very random, and we need to come up with some kind of dynamics. It turns out that log-normal dynamics is a very reasonable first approximation to the actual dynamics. So in this example we have a stochastic differential equation for the stock price, and it is the sum of two terms. This is the drift -- the deterministic part of the stock price dynamics -- and here we have the diffusion. dB is the Brownian motion driving the stock, S is the price of the stock, mu is the drift, and sigma is the volatility of the stock: it measures the randomness and its impact on the stock price. Using this model, one can derive the Black-Scholes formula, which shows how to price derivatives whose payoff depends on the price of the stock. If you look at this differential equation, you can answer the following question. Say you start from some initial value of the stock at time t, then start the clock and arrive at time capital T, where the stock price is S_T. What's the probability distribution of the stock at time T? This kind of equation can be solved very easily, and one can obtain analytically the probability distribution function at any future moment of time. I'll write a few equations, because it's very important to understand this. I'm sure you've probably seen something like this already, but let me show you the main ideas behind this formula. So suppose A is some stochastic process which is normal: it follows some drift.
Plus some volatility term. The difference from the stock equation is that I don't multiply by A in the drift and in the diffusion, so it's much simpler to solve: dA = alpha dt + sigma dB. The solution is very straightforward. Starting at moment 0, at any time T the solution is A_T = A_0 + alpha T + sigma B_T -- I'm simply integrating the drift -- and I assume B_t is a standard Brownian motion, so at time 0 it's 0. It's easy to see that the random part is just the Brownian motion, and B_T is nothing else than a normally distributed random number times the square root of time: B_T = sqrt(T) epsilon, with epsilon standard normal. So A_T is normally distributed, and its probability distribution is known exactly -- it's a standard Gaussian -- so if you substitute back in, you obtain the probability distribution for the actual quantity. I'll write it out for completeness: we obtain the probability distribution of a normal variable. So this is straightforward. The only difference in the case I'm doing here is that the dynamics is assumed to be log-normal. The interpretation is simple: if it were normal, the price of the stock could become negative, which is just financial nonsense. So log-normal dynamics is a good first approximation. In this case, what we use is a result known as Ito's lemma. I'll first write it, and then explain how to obtain it. If you look at this equation -- let me write it once again, drift plus diffusion -- then intuitively it's clear that the dynamics of the logarithm of S is normal. So we obtain something like this, and if you substitute log S in, you get a very simple formula. Here I used the result known as Ito's lemma, which I'm going to explain right now. Basically, it tells us that when we differentiate a function of a stochastic variable, then besides the trivial term -- the first derivative times dS -- there's an additional term proportional to the second derivative, and it's non-stochastic; I'll explain why. If you do this and look at the resulting equation, it's very similar to the normal case: the only difference is that alpha is mu minus one half of sigma squared. So if you take this solution and simply substitute log S for A, you come to this equation. This is a very important effect. Yes? AUDIENCE: The fact that it can't be negative -- does that exclude certain possibilities? A normal Gaussian can go negative or positive. DENIS GOROKHOV: Yes, but from a financial point of view, a stock cannot be a liability. You buy a stock -- this means you pay some money, and you have, basically, a sort of option on the profits of the company. If the company defaults, you can't be charged more; the stock just goes to zero. So the stock price can't go negative. In principle there might be derivatives whose payoff can be both positive and negative, but not the stock. It's a fundamental financial restriction. So, a very important thing.
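Written out, the two solutions narrated at the board are the following (my reconstruction of the blackboard work):

```latex
% Arithmetic Brownian motion  dA_t = \alpha\,dt + \sigma\,dB_t :
A_T = A_0 + \alpha T + \sigma B_T,\qquad
B_T = \sqrt{T}\,\varepsilon,\quad \varepsilon \sim N(0,1)

% Log-normal stock  dS_t = \mu S_t\,dt + \sigma S_t\,dB_t :
% Ito's lemma applied to \ln S_t gives
d(\ln S_t) = \Big(\mu - \tfrac12\sigma^2\Big)dt + \sigma\,dB_t
\quad\Longrightarrow\quad
S_T = S_0\,\exp\!\Big[\Big(\mu - \tfrac12\sigma^2\Big)T
      + \sigma\sqrt{T}\,\varepsilon\Big]
```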
So if you talk about stock dynamics in the Black-Scholes formalism, it's very important that the probability distribution of the stock can be found exactly. I'll go very briefly through the Black-Scholes formalism again -- it's important for understanding, and I believe there are a couple of points which, at least when I was studying this, were not very clear to me, so I want to go into a bit more detail. The derivation here is in almost every textbook. The idea is that there's a very fundamental result in stochastic calculus: if you have a function of the stochastic variable S and of time, then its differential can be written in the following form. Most of this is standard calculus -- straightforward -- but there is an additional term that looks a bit suspicious, and I'll explain what it actually means on the next slide. The very important thing is that when you calculate dC, you obtain a deterministic term proportional to the second derivative. Because it comes with dt, it looks like an additional contribution to the drift -- and there's no stochasticity in it anymore. This is crucial: it's the key fact behind the Black-Scholes formalism and the Monte Carlo method in finance. The rest you can read, for example, in Hull's book; it's a standard proof. If we hold an option, we hedge it by taking a certain position in the underlying. The idea is this. Say I bought a call option on the stock. When the stock goes up, the option makes money; at the same time I am short the stock, so I lose money on my hedge. Wherever the market goes, I neither make nor lose money -- that's the idea behind hedging. If I calculate the change in my portfolio, then since there is no risk involved -- I assume I am perfectly hedged -- I simply earn the risk-free return, where r is the risk-free interest rate. If you look at this equation and substitute the Ito's lemma result in, you obtain a very simple equation, which is the Black-Scholes differential equation for the price of the option. This equation is very fundamental, and very elegant. Although originally you started from a process with some arbitrary drift mu -- it could be anything -- the drift mu drops out of the equation, and only the interest rate enters. This is a very interesting fact, and it has to do with hedging: you have a position in an option and an opposite position in the underlying, and if you look at the movement of both positions together, the drift disappears. It's a very important and striking fact. The second thing, which is truly a miracle, is that risk is eliminated completely. This equation has absolutely no stochasticity, so you can just solve it: if you specify the option payoff, and you know the volatility -- a measure of how much the stock fluctuates -- and the risk-free interest rate, you can price the option. This is the true miracle that occurs.
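For reference, the two displayed equations being discussed -- Ito's lemma for C(S, t) and the resulting Black-Scholes equation -- are, in standard form:

```latex
% Ito's lemma for C(S, t):
dC = \frac{\partial C}{\partial t}\,dt
   + \frac{\partial C}{\partial S}\,dS
   + \frac{1}{2}\,\sigma^2 S^2\,\frac{\partial^2 C}{\partial S^2}\,dt

% Holding the hedged portfolio \Pi = C - \frac{\partial C}{\partial S} S and
% requiring d\Pi = r\,\Pi\,dt (riskless, so it earns the risk-free rate) gives
\frac{\partial C}{\partial t}
   + r S\,\frac{\partial C}{\partial S}
   + \frac{1}{2}\,\sigma^2 S^2\,\frac{\partial^2 C}{\partial S^2} = r\,C
```

Note that mu appears nowhere in the second equation, which is exactly the point made above.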
And when I was studying this, I couldn't really understand it -- maybe because I was coming from theoretical physics, and this result called Ito's lemma is buried somewhere in stochastic calculus. I just tried to understand what it all means, so let me explain here how one can understand Ito's lemma in very simple terms. Let me remind you: if C is a function of the stochastic variable S, then its differential is not just the standard result from calculus -- we also get a very exotic, very nontrivial term. Let me explain how it appears. I recommend everybody look at this derivation after the lecture, because it really explains what Ito's lemma means. The idea is very simple. Let's start from first principles. Say we have an interval of time of length dt, and we divide it into N intervals, each of length dt prime, and I assume the ratio dt over dt prime is sufficiently large. First, our stock follows log-normal dynamics. This means that going from time i to time i plus 1 (on the slide here, you need to exchange i and i minus 1), we can write S at time i plus 1 minus S at time i as the drift term -- a discrete version of the stochastic differential equation -- plus the randomness: mu S_i dt prime, plus sigma S_i times the square root of dt prime times epsilon. Here again sigma is the volatility, the measure of how the stock fluctuates; the square root of dt prime appears because a Brownian motion fluctuation is proportional to the square root of time; and epsilon is a standard normal variable. So we have this, and it's pretty straightforward: I just wrote the stochastic differential equation on the lattice, going from point i to point i plus 1. Now, let's see what it means for the price of the option. Again, C is the price of the option when the stock price is S_(i+1). The change in the option price equals the first term -- standard calculus, the first derivative times the difference in the stock price -- plus the second-order term: one half the second derivative times (S_(i+1) minus S_i) squared. This is approximate, because I'm keeping only the leading terms; all the other terms can be neglected given that both dt and dt prime are very small. You can check it carefully at home if you want, but I guarantee there's no miracle here -- everything we need is here. Now let's do the following. We have this equation; look at the quadratic term. That term is the cornerstone of Ito's lemma. Take the equation for the price difference and substitute it in here. What's important, again, are the time scales: dt prime is very small, therefore the random term dominates, because it goes like the square root of dt prime, and for small times the square root is much bigger than the linear term. So we simply neglect the drift contribution compared to the random one, and with linear accuracy in dt prime we can approximate the squared increment by the random term alone.
Now, what to do next. Again, we wrote this equation for the difference of the option price at two neighboring points, and now I simply sum these equations from 0 to N minus 1 to obtain the full equation. What is very interesting is this term. It looks essentially stochastic, because remember epsilon is a standard normal variable and all of them are independent, so in principle we have the sum of N independent squared normal variables. And it turns out -- it's really a very beautiful result, and I recommend everybody do this at home; I'll try to show you right now on the blackboard -- that if you sum up all these epsilon squared, then in the limit when N goes to infinity, this term becomes deterministic. Let me show you what I mean by deterministic. Of course, epsilon squared has some probability distribution; it's distributed between 0 and infinity. But my claim is that once I start adding more and more of these numbers, the distribution of the sum, relative to its mean, becomes more and more narrow, so it behaves like a completely deterministic variable in the large N limit. To show this, let me define the width of the distribution. For a random variable, define the dispersion as the expected value of its square minus the square of its expected value; now compute this dispersion for the random variable equal to the sum of the epsilon squared. It turns out that each of the two terms in this dispersion is proportional to N squared -- which is natural -- but their difference, in the large N limit, is proportional only to N. Therefore, as you sum up more and more terms, the distribution moves to the right, of course, but it becomes more and more narrow relative to its mean; in the limit as N goes to infinity, it becomes deterministic. I recommend everybody do this very simple exercise at home, and you'll see that the sum behaves as a deterministic quantity. To do it, you need just two simple properties of the standard normal distribution: the expected value of epsilon squared is equal to 1, and the fourth moment of a standard normal variable is equal to 3. With these, the calculation is trivial, and you arrive at the property that the probability distribution in the large N limit behaves deterministically -- it essentially becomes like a delta function. This is a very interesting result, because it explains why in the Black-Scholes equation we have this very weird but deterministic term. And that's why option pricing is possible.
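This concentration is easy to see numerically. A small sketch drawing repeated samples of the normalized sum (the values of N are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
for N in (10, 100, 10_000):
    # 2000 independent draws of (1/N) * sum of N squared standard normals
    s = (rng.standard_normal((2000, N)) ** 2).mean(axis=1)
    # The mean tends to 1 (E[eps^2] = 1), and the spread shrinks like
    # sqrt(Var(eps^2) / N) = sqrt(2 / N), using E[eps^4] = 3.
    print(N, round(s.mean(), 3), round(s.std(), 4))
```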
Because if you start pricing options knowing nothing about Black-Scholes, it might be that there is no single price for the option: even though you hedge, you still might not be able to eliminate your randomness completely. Maybe hedging only narrows the distribution of your outcome -- that alone is not guaranteed to pin down a price at all. So Ito's lemma, which in every book on derivatives is probably the first equation written, is usually given without any proof -- but in reality it involves a very interesting limit. It can be realized only if you have two different time scales. The small time scale, dt prime, corresponds in business terms to your hedging frequency -- when you rebalance your hedging portfolio. And dt is a time scale much bigger than dt prime; it's the scale at which you look at your portfolio. Only in this limit, when dt over dt prime goes to infinity, do you strictly have Ito's lemma. Actually, if you look at even the most famous book on derivatives, in this edition you'll see that the proof actually isn't correct -- just look at it and find what's wrong there. AUDIENCE: [INAUDIBLE] normal? DENIS GOROKHOV: Sorry? AUDIENCE: That's what it is? DENIS GOROKHOV: Yeah, this is what [INAUDIBLE] means. If you use these two results, you'll see that the dispersion is proportional only to N, not to N squared. That's why the distribution becomes more and more narrow: when you sum up more and more random normal variables, the average goes like N, but the standard deviation goes like the square root of N, and square root of N over N is small. So by increasing N, you become more and more deterministic. That's the main fact behind Ito's lemma and how it's obtained. I recommend everybody look at this in detail, because it's the cornerstone of derivatives pricing theory, but in many books it's not really well explained -- when I was studying it, I couldn't understand it for a while; it took me time. What else? A very interesting thing now: remember that we used Ito's lemma and were able to obtain this equation. This equation is very well known in the literature -- it's very similar to the heat equation, and the heat equation can be solved using standard methods. I don't want to write the derivation here; it's relatively straightforward, maybe a bit cumbersome. If the payoff of your option at maturity is given by some function -- the particular function is not important here -- then you can write a very general solution. What appears here is essentially the Green's function of this equation. And this Green's function, if you look at it, is very similar to the probability distribution function we had on the slide at the very beginning: the function is identical, and the only difference is that the real-world drift of the stock disappears and we are left only with the interest rate. So this equation, which is again very important for derivatives pricing, is how the whole idea of Monte Carlo simulation comes up: the Green's function tells us how the stock evolves in risk-neutral space.
Risk-neutral space is essentially an imaginary world -- a [INAUDIBLE] world -- where all assets drift at the interest rate rather than at their actual drift. It's very fundamental, and a very important fact, that the real-world drift drops out of all the equations. The only market parameter that actually matters for option pricing is the volatility. The interest rate is relatively easy to understand: it's how much money a riskless, deterministic investment makes. Naively, you might expect to need both mu -- let me remind you, mu is the real-world drift -- and sigma as two independent parameters. But it turns out mu completely drops out of the picture, and this is because of dynamic hedging: we hedge the position. And now this equation -- since this is basically a Green's function, and the Green's function tells us the probability density of the stock at some future time given its starting point -- means that we can simulate the stock dynamics and price derivatives using a very simple framework. What do we do? We write the equation for the stock in the risk-neutral world. Remember, the difference is that instead of the actual drift mu, we substitute the interest rate, which is roughly how much money the bank account makes. We start from some stock value at time 0 and simulate the stock along different paths. Three paths are shown here; there could be thousands. We know the option payoff at maturity, so the price of the derivative is very simple: take the average of the payoff over the distribution -- which you know, because you just simulated the stock price -- and discount it at the interest rate. It's extremely simple. In principle, if you have a package like MATLAB, it probably takes an hour at most to implement, say, pricing of the Black-Scholes formula via Monte Carlo simulation. If you have time, try it and see how your Monte Carlo solution converges to the exact result first obtained by Black, Scholes, and Merton. So this is a super powerful framework, and it's not applicable just to stock prices: it also applies to interest rate derivatives, credit derivatives, foreign exchange derivatives, and so on. The idea is this. The payoff of your derivative depends on various financial variables; you simulate all of them in the risk-neutral world, calculate the average of the payoff, and discount it. That's how you price derivatives. If you have a flexible IT infrastructure at a financial institution, you can implement this, and then you can price pretty much everything. That's basically how exotic derivatives are priced -- the ones whose prices are not easy to obtain using analytical methods, which is the case for a large fraction of derivatives. So this is the whole idea: Monte Carlo simulation is a very fundamental concept.
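Here is a minimal sketch of exactly that exercise: Monte Carlo pricing of a European call in the risk-neutral world, checked against the closed-form Black-Scholes price. It's in Python rather than MATLAB, and all parameter values are made up.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0  # illustrative parameters

# Monte Carlo: simulate S_T under the risk-neutral measure (drift = r, not mu),
# average the payoff, and discount at the risk-free rate.
rng = np.random.default_rng(4)
eps = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * eps)
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes price for comparison
N = NormalDist().cdf
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)

print(mc_price, bs_price)  # the two should agree closely, up to sampling error
```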
So we do the simulation in the risk-neutral world -- there are certain rules for writing these equations for different asset classes: stocks again, foreign exchange, credit, rates, whatever -- and then you do the sampling, take the average, and you're basically done. That's how it works for the stock, and now let me explain how to generalize these ideas to interest rate and credit derivatives. Let me start from the very basics of interest rate derivatives. The whole point of these derivatives is to allow financial institutions or individuals to manage their interest rate risk better. Businesses need money to run; big corporations have billions -- [INAUDIBLE] hundreds of billions -- of dollars of debt, and they need to risk manage it efficiently. And it's not even necessarily financial institutions. Of course, if you borrow money, you pay interest, and you can think of interest rate derivatives as options on this stochastic interest: say today you can borrow money at 5%, but tomorrow this rate can change. To control this uncertainty, you need to be able to buy derivatives -- to hedge your exposure, for example. Or you might just speculate: maybe you have a view that rates will go up or down. It depends on the type of investor or speculator. Then there's the very simple concept of the present value of money. A dollar today is definitely better than a dollar one year from now. Say I will receive a dollar, but only one year from now -- how much is it worth today? If the interest rate is 5%, it's worth roughly $0.95, because I could take $0.95, put it in a bank account earning 5%, and have about $1 one year from now. So there's a very important concept of present value, or the time value of money: depending on how far in the future a payment is, it has a certain value today. And a fundamental notion in fixed income derivatives is the discount factor. It tells you that a dollar today costs a dollar, but a dollar in the future costs something else. I'll say a bit more about how [INAUDIBLE] these together, this functional [INAUDIBLE]. Another very important concept in interest rate derivatives is the forward rate. A very important property of the discount factor is that it starts at 1 -- a dollar today is a dollar; there's no uncertainty -- and it's clear the function should be decaying, or at least non-increasing, with time. That's why it's very convenient to parametrize this function with forward rates, some positive rates, which is very convenient. And remember, in the example on this page: if all maturities earn 5%, the forward rate is simply flat at 5% a year. OK, so that's the example, and when you talk about interest rate derivatives, it's very convenient to model the dynamics of these forward rates.
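As a tiny illustration of the forward rate parametrization d(T) = exp(-integral from 0 to T of f(t) dt), assuming a flat 5% forward curve with continuous compounding (the $0.95 example above uses annual compounding, which gives nearly the same number):

```python
import numpy as np

f = 0.05                      # flat forward rate, as in the example
T = np.arange(1, 11)          # maturities in years
d = np.exp(-f * T)            # discount factors, decreasing from just below 1

print(d[0])                   # ~0.9512: a dollar in one year is worth ~$0.95
print(d[9] * 1000)            # ~606.5: PV of $1,000 paid in 10 years
```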
So again, this is very different from the stock, because it has an additional dimension. If you model stock dynamics, it's just a point process: it's $100 today, and then you start modeling -- next it could go to $95, it could go to $105, and so on. But interest rates are about a curve; there's an extra dimension, a one-dimensional object. The reason is simple. If you borrow money for one year, say you pay 1%; but if you borrow for two years, it might be 2%, and so on. So there's the concept of the yield curve, which tells us how much different maturities earn. In a typical situation -- if you don't have some [INAUDIBLE] of recession, which sometimes happens -- the yield curve is upward sloping. This means that if you borrow money for a longer term, you pay higher interest. You can see it very easily: for those who have mortgages, the 15-year mortgage rate is always lower than the 30-year mortgage rate. Here, just to give you a sense of where we are right now in terms of interest rates, I show the yield of the 10-year US Treasury note. What is a 10-year Treasury note? The US government borrows money to finance its activities, and it works like this. As an investor, I give the US government $100, and then for the next 10 years they pay me some coupon -- more exactly, twice a year. If the interest rate is 5% per year, then for my $100 the government pays me $2.50 every half year, and at the very end, 10 years from now, it must return the $100 notional. And if you look again at how stochastic rates are, and what kind of environment we are in right now, you see a very interesting picture over the last 50 years or so. From about the '60s to about '80-'82, there's a tremendous increase in interest rates -- something that looks unbelievable right now. Nowadays, if one takes a mortgage, a 30-year mortgage is maybe 4%, 4.5%. But about 30 years ago there was a [INAUDIBLE] interest rate -- very high inflation -- and mortgage rates were in double digits; it was not uncommon to pay 15% on a mortgage around there. So rates were increasing up to then, but since then we've lived in a very different environment, with interest rates going down and down. In 1980 the US government would pay 12% a year to borrow money for 10 years; by the end of 2012 it paid less than 2% -- just 1.7%. So there's a very clear trend downward. In recent years there's some uptick here, but we always get situations like this -- where are we going? Nobody knows. But we're really in a situation where interest rates are extremely low; there's been nothing like this for the last 50 years. It's very unusual. These very low interest rates mean the economy is very weak, because there isn't much demand for borrowing: corporations and individuals don't want to borrow a lot, and once again it's supply and demand -- if you really want to borrow, you're willing to pay a higher rate.
Of course, another reason is that we live in a very unusual environment where the government intervenes a lot in the market, trying to make rates as low as possible -- to make the interest rate burden for corporations and private individuals as small as possible -- so that hopefully we come out of this recession. But as I said, this is a very singular, very unusual environment. And there's a whole world of interest rate -- yes? AUDIENCE: But doesn't it pay to invest in non-productive assets, like real estate, which are expected to rise with time, without, for example, [INAUDIBLE]? Doesn't it skew investment toward assets which are expected to rise with time, even if they're not productive assets? DENIS GOROKHOV: Yes, but right now, I think, lots of people are just scared to buy real estate. You never know what's going on -- prices are still pretty high, so who knows what will happen? You're right, there is some psychology [INAUDIBLE]. But many people who bought in 2006, or before, basically lost tons of money. You never know. When you buy an asset, you get some financing -- say fixed-rate financing -- so you know how much you're going to pay, but where's the guarantee? Long term it goes up, of course, but long term means tens of years. If you look at real estate prices over the last seven years or so, we're going up right now, but we still haven't recovered the [INAUDIBLE] maximum we had before. You never know. So, there's a whole world of interest rate derivatives, and I'll just briefly explain what it all means. Here I've been talking about Treasuries -- the yield implied by government bonds -- but usually derivatives are linked to another very famous rate called LIBOR. LIBOR, roughly speaking, is a short-term rate at which financial institutions in London borrow money from each other on an unsecured basis. There are a lot of caveats to this definition, but that's roughly what it means. The fundamental derivative in the interest rate world is the LIBOR swap. The standard USD LIBOR swap works like this: once every three months it pays the three-month LIBOR rate. This is stochastic -- every day there's a certain procedure that determines this short-term borrowing rate. In exchange for paying this LIBOR rate, you receive the fixed rate, which is deterministic. So this is the fundamental interest rate instrument. Essentially, if you believe rates will go up and you want to speculate, you try to be long LIBOR and short the fixed rate, and vice versa. It's a very important instrument for pricing, and there are all kinds of derivatives linked to this LIBOR rate. For example, you can talk about a swaption. What is a swaption? It's the right to enter an interest rate swap in the future. Remember, in the equity option world, a call option on a stock is the right to buy the stock at a price fixed today, at some time in the future. Here it's basically the same idea.
And there are all kinds of other derivatives. You can buy or sell options on a particular LIBOR rate -- these are caps and floors. There are also cancellable swaps: you enter a swap, and if you don't want to keep paying a high rate anymore, you can cancel it -- which of course affects the price, and so on. A very important idea in all of this is that when you price these derivatives, their prices all depend on discount factors. And the discount factors depend on forward rates, which is basically a trivial reparametrization -- but it's very important and very convenient to work with forward rates. When we price interest rate derivatives using Monte Carlo simulation, because no analytical models are available, we model the dynamics of the forward rates. And you can ask: how do we obtain this curve in practice? It turns out the swap market tells us how. Here I show some real market quotes for interest rate swaps of different maturities -- two years, three years, four years, and so on. If you add this number and this number on the slide, you obtain the swap rate. And if you take these swap rates, you can show quite easily that knowing all of them lets you recover the discount curve essentially uniquely. Once again: adding these two numbers tells you, for example, that for this five-year instrument I will pay roughly 0.75% a year for the next five years -- these two payments correspond to 0.75% fixed in exchange for the LIBOR payment. If I enter the swap, I know I will be paying fixed, and I will receive floating, which is random, because we don't know what it will be. The details can get pretty complicated, but the idea is very simple: the swap market allows you to obtain the discount factor -- the function which tells you how much a dollar in the future is worth today. And if you know what one dollar is worth, you know what C dollars are worth: you simply multiply by the discount factor, and that's the present value of your fixed-rate payment. Remember, this is one of the most important things in finance. In the derivatives world, we typically compute the PV, or present value, of all future payments. We have some future liability -- something very complicated, a payoff 10 years from now -- and we try to understand how much it's worth today. Because the business works like this: a client comes to the bank and says, I want this derivative. You sell the derivative, you charge the money right now, and you spend that money on hedging. Of course you try to charge a little bit more, because you still need to make a living, but essentially you spend most of the money on hedging. Either way, you have to come up with a number today.
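To make the bootstrapping step concrete: for a par swap with annual fixed payments on unit notional, the quoted rate $S_n$ satisfies $S_n \sum_{i \le n} d_i = 1 - d_n$ (fixed leg equals floating leg), so the discount factors can be peeled off one maturity at a time. Here is a minimal sketch, where the annual-payment convention and the sample quotes are my simplifying assumptions, not the slide's exact data (real curves also need day counts, interpolation, and post-crisis OIS discounting):

```python
# A hedged sketch of bootstrapping discount factors from par swap rates.

def bootstrap_discounts(par_rates):
    """par_rates[n-1] = par swap rate for an n-year swap (as a decimal).
    Uses S_n * sum_{i<=n} d_i = 1 - d_n, i.e.
        d_n = (1 - S_n * A) / (1 + S_n),
    where A is the running annuity sum of earlier discount factors."""
    discounts, annuity = [], 0.0
    for s in par_rates:
        d = (1.0 - s * annuity) / (1.0 + s)
        discounts.append(d)
        annuity += d
    return discounts

# Example quotes loosely shaped like the slide (one- to four-year swaps):
for t, d in enumerate(bootstrap_discounts([0.004, 0.005, 0.006, 0.0075]), 1):
    print(f"d({t}y) = {d:.4f}")
```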
Here's a very simple example. If you know, once again, how much a dollar in the future is worth, you can present-value -- PV -- every payment. Say that 10 years from now, d is equal to 0.5. Then if your payoff is $1,000, the present value is $500. The argument is very simple: you take $500 today, invest it for 10 years, and you get $1,000 in the future. That's the replication argument. Another very important thing: consider an interest rate swap paying LIBOR on some notional. LIBOR is a rate measured in percent -- say 1% a year. If the notional of the swap is $1 million, then the floating rate payment is $1 million times 1%, which is $10,000. And a very interesting fact: if you pay the LIBOR rate and pay back the notional at the very end, the present value of all of it equals the notional. That's the beauty of a floating-rate security: if you always pay the current market rate, the price of your security is always equal to the notional. It's a very nice fact, and it's also fundamental here. And a very important thing happened after the crisis: all derivatives became what's called collateralized, so you need to post money all the time. There's another concept, OIS discounting, which I won't talk about here. The main idea you need to understand is that we have this discount function, which shows us how much a future dollar is worth, and using this function we can price all kinds of swaps -- we can PV the value of the swap today. So the idea of interest rate derivatives is all about the dynamics of the yield curve: how your discount function, or your future yields, evolve. The whole idea is similar to the stock. At time 0 you start from some curve -- for example, the one shown here -- and then it starts evolving, and you want to model that mathematically and price all kinds of derivatives. And there's a very interesting difference between stock options and interest rate options. For stock options, we know the price today: if it's a liquid stock, we know exactly where it's trading. But for the yield curve it's different. First we need to take the swap market quotes and do what's called bootstrapping to get the function d(t). Next, we need to specify the volatility of the different forward rates and come up with dynamics describing how the forward rates evolve in the future. Once we have that, we can use the Monte Carlo framework to price all kinds of derivatives. Before I start talking about the HJM framework, I just want to mention that there are simpler models which historically appeared before the HJM model and which describe the dynamics of the short rate. The most famous are the Ho-Lee model, the Hull-White model, and the so-called CIR model. The idea is that instead of modeling the whole curve of forward rates -- the function I wrote here -- you model only the instantaneous dynamics of the short rate. Some of these models are particular cases of the HJM model; some are not. But just to mention them.
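For intuition, here is a minimal Euler-discretization sketch of those three short-rate models -- my own illustration with made-up parameters, not the lecture's code:

```python
import numpy as np

# A hedged sketch of short-rate dynamics: Vasicek (Hull-White with constant
# theta), CIR, and Ho-Lee, all stepped forward with a simple Euler scheme.

def simulate_short_rate(model="vasicek", r0=0.02, a=0.1, b=0.04,
                        sigma=0.01, T=10.0, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.empty(steps + 1)
    r[0] = r0
    for i in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        if model == "vasicek":   # mean reversion toward level b
            dr = a * (b - r[i]) * dt + sigma * dw
        elif model == "cir":     # vol scales with sqrt(r), keeping r near >= 0
            dr = a * (b - r[i]) * dt + sigma * np.sqrt(max(r[i], 0.0)) * dw
        else:                    # Ho-Lee: constant drift, no mean reversion
            dr = b * dt + sigma * dw
        r[i + 1] = r[i] + dr
    return r

path = simulate_short_rate("cir")
print(path[:5], path[-1])
```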
So the idea with interest rate derivatives is this. Say I want to price an option that, five years from now, lets me enter a particular interest rate swap paying 5% on the fixed leg and receiving LIBOR. I need to model the dynamics of future yields. And remember -- this is very important -- because we have a curve, we now have two different times. For stock derivatives we just write the dynamics dS_t equals something, where t is instantaneous calendar time. Here, lowercase t still stands for calendar time, and capital T stands for the future maturity the forward rate refers to. So standing at time t, you look at the forward rate out at maturity T, and you describe its dynamics. I don't want to go into details, but using Ito's lemma -- a very fundamental result in pricing theory -- you can derive the equation for the drift. Recall what always happens in Monte Carlo simulation: you write down a stochastic differential equation with a drift and a volatility, and because you hedge, the real-world drift drops out of the equation. For interest rates there's a complication: in the risk-neutral world, the drift doesn't vanish -- it's pinned down by an equation that depends on sigma. If you do the calculation, you see that in the risk-neutral world the drift takes the following form, which is a non-local equation. But it is what it is; the derivation is fairly straightforward, and I encourage you, if you have time, to go through it and really understand how it works. Once we have this, the modeling of interest rate derivatives is very simple, and it mirrors the stock world -- let me go back to that equation. There, we started from a stochastic differential equation, simulated different paths, and then averaged the payoff at the maturity of the derivative, when the payment actually happens. Here the situation is very similar: we have some initial curve obtained from today's market, its dynamics are described by this equation, we get the distribution of the curve in the future, and then we can price all kinds of derivatives. It's a very fundamental, very general framework: once the curve and the volatility are known, you simply run the simulation and compute your payoff. Now, another example of the HJM framework is credit derivatives. I don't have much time, so I'll go through this very briefly. If you lend money to someone -- say to a corporation -- there is a probability that you won't get your money back. Corporations issue bonds, financial instruments, to raise capital. It's very similar to US Treasuries: you give them $100 and they pay you 5% every year, and then in 10 years, if it's a 10-year bond, they're supposed to give your money back. But that might not happen. Corporations default -- something goes wrong with the business, or with the economy, and so on. It happens. So there is a risk here, which we call default risk. Corporations, and private individuals, have the right to default, and they do default. And this is reflected in the coupons they pay.
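Here is a hedged sketch of that forward-curve simulation loop for a single factor. With a constant forward-rate volatility, the non-local risk-neutral drift mentioned above reduces to $\mu(t,T) = \sigma \int_t^T \sigma\,du = \sigma^2 (T - t)$; the one-factor choice, the grid, and the parameters are my simplifications, not the lecture's model:

```python
import numpy as np

# A minimal one-factor HJM sketch: evolve the whole forward curve under
#   df(t, T) = sigma^2 * (T - t) dt + sigma dW,
# the special case of the HJM drift condition with constant sigma.

def simulate_hjm(f0, sigma=0.01, dt=1/252, n_steps=252, seed=0):
    """f0[j] = today's forward rate f(0, T_j) with T_j = j+1 years.
    Returns the simulated curve after n_steps steps (here, one year)."""
    rng = np.random.default_rng(seed)
    T = np.arange(1, len(f0) + 1, dtype=float)     # maturity grid in years
    f = np.array(f0, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # one factor: same shock
        f += sigma**2 * (T - t) * dt + sigma * dw  # drift + shock move the curve
        t += dt
    return f

curve0 = np.linspace(0.01, 0.03, 30)               # upward-sloping start
print(simulate_hjm(curve0)[:5])
```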
For the US government at the end of 2012, a 10-year bond would pay just 1.7% a year. Again, we're in an extremely low-rate environment, and that looks like almost nothing. And remember, if you're an investor and you buy this bond, you get your 1.7% interest but then you pay taxes on the profit, so the return is really very small. Still, US government securities are assumed to be risk-free -- you won't lose money -- so they're a very important benchmark. Then you can buy bonds of corporations, but to compensate for possible default, they pay a higher coupon. For example, at the end of 2012, Morgan Stanley bonds would pay around 5% a year -- significantly higher. And some governments right now are very close to default. Some time ago, for example, when Morgan Stanley bonds paid about 5% a year, Greek bonds paid 25%, 30% a year, because nobody knows what's going to happen there. The economy is clearly not in good shape and everything depends on the bailouts -- and the bailouts come with conditions, such as whether the right government will be in power, and the [INAUDIBLE] is unclear. There is so much uncertainty that investors require a very high yield. In credit derivatives, the fundamental instrument is the credit default swap. If you hold a risky bond, then to protect yourself from default you can go to a bank and buy a credit default swap. It means that if you hold the bond and default happens, the seller of the protection compensates you for the loss. For example, say you bought a bond at $100, and in one year the corporation defaults. What happens then? There's a court process: a judge decides how much money is recovered, and that money is distributed to the bond investors, who are first in the queue. If, say, $0.70 on the dollar is recovered, then the default swap pays you the $30 you lost. And a very fundamental concept in the world of credit derivatives is the market-implied survival probability. Credit default swaps are available for many different entities -- Morgan Stanley, Verizon, AT&T, and so on -- and they require different payments. For a credit default swap on Morgan Stanley at, say, 5-year maturity, you pay around 100 basis points a year; for something like Greece, you might pay 500, maybe 1,000 basis points. The market differentiates. And based on this you can do a very simple calculation and arrive at the concept of survival probability. Roughly speaking, if default protection on some reference entity costs 1% a year, you can read it like this: each year, with probability 99% you get your money, and with probability 1% you get nothing. So you can say the probability of default is roughly 1% a year in this case. Then you can talk about survival probabilities -- basically one minus the default probability -- and you can parametrize them with forward rates, which here are called hazard rates. So credit derivatives, in a sense, are similar to interest rate derivatives.
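A back-of-the-envelope sketch of that survival-probability calculation. The "1% spread means roughly 1% annual default probability" reading above corresponds to assuming zero recovery; the standard rough rule $\lambda \approx \text{spread}/(1 - \text{recovery})$ generalizes it, and the recovery inputs below are my assumptions:

```python
import math

# A sketch of market-implied survival probabilities from a CDS spread.
# With a flat hazard rate lambda, survival is S(t) = exp(-lambda * t);
# lambda = spread / (1 - recovery) reduces to lambda = spread when
# recovery is zero, matching the lecture's 1%-a-year reading.

def survival_curve(spread_bps, recovery=0.0, horizons=(1, 3, 5, 10)):
    lam = (spread_bps / 10_000) / (1.0 - recovery)   # flat hazard rate
    return {t: math.exp(-lam * t) for t in horizons}

print(survival_curve(100))                  # ~100 bps: the Morgan Stanley example
print(survival_curve(1000, recovery=0.4))   # a distressed name, 40% recovery assumed
```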
Remember, in the case of interest rate derivatives we talked about discount factors -- the present value of money. In the world of credit derivatives we talk, besides this (interest rates are of course still very important for credit derivatives), about the survival probability. Today it equals 1, and then it decays. For the US government it basically stays at one; for, say, Morgan Stanley it declines like this; for some distressed European sovereign it drops like this. So it's the market-implied probability of default, based on the credit default swap market. And the idea of the HJM model for credit derivatives is that, similarly to the dynamics of forward rates in the interest rate case, you describe the dynamics of the hazard rates which parametrize your survival probabilities. Now let me show an example of a very important type of derivative priced using credit models: corporate callable bonds. It's a very simple instrument. Again, I'm a corporation. I borrow $100 from you and pay you, say, 5% every year -- but I have the right, at any time (or, say, once every three months), to return your $100 and close out the deal. Why is that so valuable for corporations? Suppose I issued the debt when rates were high. In this example I'm paying 5% a year on a 10-year bond with $100 million notional, so every year I pay investors 5% -- $5 million. I need this money to run my business; it's some burden, but all corporations carry a significant amount of debt, and debt is good if you know how to manage it. Now say that three years from now the situation has changed. I have seven years remaining on the bond, but it turns out I can now issue seven-year debt at just 3%. If I exercise my call option and refinance, I save 5 minus 3 -- 2% -- times $100 million, times seven years: $14 million. That's why it's good to issue callable debt: you can save money. It's very similar to what's been happening to private individuals. In the last year or two there was a lot of refinancing activity in the US -- remember, rates are at historical lows. Rates kept going down, down, down, so if you took out a mortgage here at 6%, you could later refinance the same mortgage at, say, 3.5%. In the US, all mortgages are callable: by law, everybody has the right to refinance. It's not like issuing a non-callable 30-year bond, where you keep paying a huge coupon even when you could refinance at a lower rate -- which, by the way, can be the situation for a corporation. So that's the idea. And if you price this kind of instrument, a callable bond, you need to take into account the interest rate risk, because you need to understand at what level of rates the issuer could refinance -- and you also need to take into account the credit quality of the issuer.
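The $14 million figure above is easy to check; here is a two-line sketch, plus a present-valued version of the same savings -- the discounting at the new 3% rate is my added refinement, not the lecture's number:

```python
# Undiscounted savings from calling the 5% bond and reissuing at 3%.
old_c, new_c, notional, yrs = 0.05, 0.03, 100e6, 7
print((old_c - new_c) * notional * yrs)    # 14,000,000 undiscounted

# The same annual savings discounted at the new 3% rate (an assumption).
pv = sum((old_c - new_c) * notional / (1 + new_c) ** t for t in range(1, yrs + 1))
print(round(pv))                           # ~12.5 million in present value
```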
Compare, say, Greece and Morgan Stanley issuing debt right now [INAUDIBLE]: Morgan Stanley would pay significantly less in interest, because in the case of Greece there is much higher default risk. And as I mentioned, in the world of credit derivatives there is the concept of hazard rates -- again, a curve which shows how risky the issuer is at each point in the future. Here I show the dynamics of the forward rates, and here the dynamics of the hazard rates, which show how risky the issuer is. Using a similar approach -- I'll leave this as an exercise -- you can show that if you know the volatility of the hazard rates, then you know how to simulate their dynamics. So essentially it's the joint dynamics of all of this. Again -- let me go back to the stock case -- the idea is very simple. You have all your dynamic variables, like rates and hazard rates in this case. You simulate them in the risk-neutral world, you get different paths, and you simply average over the payoff. This is the beauty of risk-neutral pricing. It's a universal framework, implemented at all the major banks, and it's really the right approach for pricing very exotic derivatives for which it's very hard to find exact analytical formulas. And let me show you one example of securities issued by big banks where this HJM model and Monte Carlo simulation are used all the time, because the payoffs are very complicated. The product is called a structured note. What's a structured note? Again, corporations need to raise money to run their business, and of course they can't get the money for free -- they pay interest. And look at what happened last year: at the end of last year, a 10-year US Treasury would pay about 1.7%, and after taxes you'd keep something like 1.1%, which might even be below inflation. So investors, especially long-term investors, aren't interested in US Treasuries: they're risk-free, but there's no return. You want to generate some money, so what can you do? You can look for corporate bonds. Corporates are risky compared to the United States government, so a typical coupon on a corporate bond is higher -- say 5% for a typical non-distressed US corporation. But again: 5%, then you pay, say, 30% tax on top of it, so you're left with 3.5%, then there's inflation, and so on. It still looks like a low return. Of course, you can go lower in quality and buy distressed bonds -- from Greece, or from some distressed corporations -- which pay much more. But that becomes more like gambling: there's so much uncertainty that you can get a very high return, but you can also lose everything, because you're bearing very high credit risk. So what do you do in this situation? It turns out banks issue very special securities called structured notes, which are very attractive to some investors. Say I'm Morgan Stanley. Instead of issuing a vanilla 10-year bond at 5%, I issue a bond which pays 10% a year.
A much higher coupon -- but I pay you the 10% only if certain market conditions are satisfied. Say the condition is this: the 30-year swap rate is higher than the two-year swap rate. Go back to the picture I drew: this means the short-term borrowing rate is below the long-term borrowing rate, which is usually the case. So assume I pay you 10% if two conditions are satisfied. One is that the 30-year borrowing rate in the economy right now is higher than the two-year borrowing rate -- that's this condition. The second condition: the S&P 500 index is above 880. If both conditions are satisfied, the investor gets 10%; if either one breaks down, the investor gets nothing. And there are many investors who would like to bear this kind of risk, because they have a certain view on how the economy will develop. Right now, for example, the S&P 500 is pretty close to 2,000, so it's very unlikely to fall by roughly a factor of two, down to 880 -- a very low probability. And the investor also believes the other condition will hold: that we'll stay in an economy where it's more expensive to borrow long term than short term. In that case, the coupon can be enhanced. That's the whole idea of the structured note: instead of paying a plain 5% coupon, I'm effectively selling the investor a derivative. Investors like it -- it's a kind of gambling, but in an educated way, because the conditions have a clear economic meaning -- and they can get a high return. And it's a very popular way of financing, because investors buy these instruments even though they're very bespoke and not very liquid. Therefore, when a bank issues this kind of instrument, even pricing it correctly using all the models, it can make some extra money -- effectively it's cheaper to issue these instruments than to issue vanilla bonds. And all the big banks have the machinery to risk-manage these exotic derivatives; they know what they're doing. They sell the product, hedge their exposure, and realize some profit, because it's hard for anyone else to identify exactly what such an instrument is worth. So it's good for the banks, and it's also good for the investors, who are looking for this kind of yield enhancement: they want a higher yield and they're willing to take the risk. But again, it's an educated risk, because the conditions have a very clear economic meaning, and if the investor understands what's going on, it's a reasonable risk. And what do you do if you want to model something like this? It's very complicated to find any kind of analytic approximation here in the real world. So what do we do? We simulate the stock market index, we simulate the 30-year and two-year swap rates, and we simulate Morgan Stanley's credit spread -- all simultaneously, at the same time. Then in the Monte Carlo simulation we check, at every coupon date, whether the conditions are satisfied: if they are, we pay 10%; if something is broken, we pay 0. We simulate many, many paths like this, calculate the average discounted value, and that gives the price we quote to the investor.
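Here is a stripped-down sketch of the simulation just described for this double range accrual note. Everything concrete in it -- the GBM index dynamics, the mean-reverting spread process, the flat 2% discounting, and all parameter values -- is my own illustrative assumption; a desk model would also simulate the issuer's credit spread and the correlations between the drivers, which are ignored here for brevity:

```python
import numpy as np

# Pay a 10% annual coupon only if (30y - 2y swap spread) > 0 AND S&P 500 > 880
# at the coupon date; return the notional at maturity. Price = average PV.

rng = np.random.default_rng(42)
n_paths, years, r = 100_000, 10, 0.02        # flat 2% rate for discounting
coupon, notional = 0.10, 100.0

s = np.full(n_paths, 1800.0)                 # index level, risk-neutral GBM
spread = np.full(n_paths, 0.015)             # 30y - 2y spread, mean-reverting

pv = np.zeros(n_paths)
for t in range(1, years + 1):                # annual steps for simplicity
    s *= np.exp((r - 0.5 * 0.20**2) + 0.20 * rng.normal(size=n_paths))
    spread += 0.5 * (0.015 - spread) + 0.01 * rng.normal(size=n_paths)
    paid = (s > 880.0) & (spread > 0.0)      # both conditions must hold
    pv += np.where(paid, coupon * notional, 0.0) * np.exp(-r * t)
pv += notional * np.exp(-r * years)          # notional returned at maturity

print(pv.mean())                             # Monte Carlo price of the note
```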
And again, these products are very nonstandard. That's fine: you can make some extra money, and as a firm you save money, because it's cheaper than issuing plain vanilla bonds. Just to give you an idea of the numbers: here is a graph of the difference between the 30-year and two-year borrowing rates over the last decade. You can see this difference is almost always positive -- it went negative only very briefly, around 2005-2006. And here's a very interesting thing. When you price a derivative, there's a notion of market-implied numbers. You can ask: given how different instruments are priced in the market, what is the market-implied probability that this difference is positive? It turns out it's only about 80%. Whereas in reality, over the last 10 years, the difference failed to be positive for only a few days -- so the market-implied probability is significantly lower than the realized one. The investor then reasons like this: the market gives me a discount by pricing this at 80%, but I know it has almost never happened in the past, therefore I believe it won't happen in the future. Maybe it will -- but meanwhile I make some extra money, with the coupon enhanced by a factor of 1 divided by 0.8, which is 1.25. The second condition concerns the S&P 500. If you look at the history of this index -- the main US market index -- it was historically above the 880 level for 94 days out of every 100. Very, very high frequency. But the market implies this will be the case only 75% of the time. And the investor says: the S&P 500 is around 1,800 now, so what's the probability it drops below 880? There's some probability, of course, but that would mean a very serious recession, and it looks like the economy is improving. The market might drop, but maybe to 1,500 or 1,400 -- not that low. Therefore the investor believes that by taking this risk, he again gets a higher coupon. So these are very popular instruments, priced solely by Monte Carlo simulation, and there are big businesses -- at Morgan Stanley, for example -- whose goal is to raise capital by selling these exotic products and hedging them using the Monte Carlo framework. And when interest rates are crucial to the dynamics, we use the HJM model to simulate them. That's everything I wanted to tell you about today, so thank you very much.

[APPLAUSE]

DENIS GOROKHOV: Yeah?

AUDIENCE: [INAUDIBLE] simulation. Is there some choice-- you might make certain choices based on historical precedent?

DENIS GOROKHOV: It's a very good question. In reality, here's what happens. Let's go to the very simple case of stock prices. The r here is just the borrowing rate -- say, whatever the bank account pays -- which is known. So the only parameter which isn't known is the volatility. For liquid stocks -- IBM, Apple -- there are a lot of very liquid derivatives traded, which means you can imply this sigma from the prices of liquid derivatives. You know, for example, that Apple trades today at 600, and that the at-the-money option -- the option with strike 600, expiring one year from now -- is worth, whatever, $50, for example.
By knowing this price, you can imply the sigma. The whole idea is this: you take very liquid derivatives, like standard call options, and imply sigma from them. Then you use the model to price the truly exotic derivatives, whose prices are not readily available. That's how big banks make money -- because we know how to price them. Clients come in; we see the prices of very liquid instruments and buy those to hedge. Very often we do some very complicated deal, but we have the ability to off-load it into simpler contracts which we know how to price. That's the idea. And the same is true for all the other derivatives, credit derivatives or [INAUDIBLE] derivatives: you try to imply the sigma from the market. If there's no way to do this -- which is often the case for credit derivatives, because credit vol is not liquidly traded -- then the best you can do is take historical estimates. We do that too; there's nothing else. Yeah?

AUDIENCE: On your last slide, where you talked about the implied frequency of the S&P 500 being lower than 880?

DENIS GOROKHOV: Yeah.

AUDIENCE: Was that from historical quotes or current quotes?

DENIS GOROKHOV: OK. This number: if you stand at the end of 2012 and go back to 2002 -- 10 years into the past -- I think it was above 880 in 94% of cases. We can go back; remember the slide I showed at the very beginning. Here it was: 880 is somewhere here, 2012 is here, and you go back to 2002. It was below 880 around 2000, the internet bubble, and around 2008-2009, when we had the major banking crisis. [INAUDIBLE] just now. So based on historicals, the probability of being below 880 is not very high. And these investors believe that even if it happens in the future, the market will come back, because the government will intervene and so on. That's the way of thinking of the investors who buy structured notes like this.

AUDIENCE: So for the implied frequency, that's from the current--

DENIS GOROKHOV: Exactly.

AUDIENCE: --option prices--

DENIS GOROKHOV: Exactly, exactly. So that's how the historical number was obtained. And the implied one -- let me see. Yes, it's like this: you stand here today with your Monte Carlo model, you simulate forward for 10 years, and you see what the probability is of being below 880. And it's actually much higher, because the market is usually extremely risk-averse. If you're buying a deep out-of-the-money option, everybody requires a premium -- because if the event happens and you didn't charge enough money, you're out of business. That's how I obtained that number, the 75%. OK. Yeah?

AUDIENCE: So is the pricing of these more exotic products totally reliant on Monte Carlo, or are there other techniques?

DENIS GOROKHOV: Usually it's Monte Carlo. There are some derivatives where analytical approximations are available. For example, among interest rate derivatives, swaps are a very simple linear product: to price them you just need the discount function -- it's just arithmetic, and of course it's all done that way. For standard swaptions there is a model called the SABR model, which gives semi-analytical solutions -- approximate, but of high quality -- so you can use that. But there are different schools of thought.
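Concretely, "implying sigma" means inverting the Black-Scholes formula at the quoted price. Here is a sketch using the lecture's Apple-style numbers (spot 600, strike 600, one year, option at $50); the bisection method and the zero-rate assumption are my choices:

```python
from math import log, sqrt, exp, erf

# Invert Black-Scholes for the implied volatility by bisection.

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def implied_vol(price, s, k, t, r, lo=1e-4, hi=3.0, tol=1e-8):
    while hi - lo > tol:            # call price is increasing in sigma
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, t, r, mid) > price:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# At-the-money: spot 600, strike 600, one year, option quoted at $50.
print(implied_vol(50.0, s=600.0, k=600.0, t=1.0, r=0.0))   # ~0.209
```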
Some of these approximations can fail -- if the maturity is very long, or if the option is very deep out of the money. So very often, even when traders' official numbers come from more simplified models with some formula, they still run the Monte Carlo simulation for the whole portfolio, to see what the most sophisticated model says about the present value of the portfolio and about the risk. And of course, for things like these double range accruals, which are just [INAUDIBLE], it's impossible to build any meaningful analytical model. You can do something, but you won't be competitive -- it's all Monte Carlo simulation.

AUDIENCE: So you said usually this whole simulation process takes an hour in a MATLAB program?

DENIS GOROKHOV: No, no -- it takes probably one hour just to write the whole program, because it's very simple. MATLAB generates the Brownian motion for you. Then you write that the change in your price equals your drift, which you know, plus a random term, and you simulate different paths. If you're pricing a call option, you then know the distribution of stock prices, say, one year from now at maturity, and you just average the payoff. It might take someone 15 minutes to write this kind of program, and with it you can verify the Black-Scholes formula numerically, for example. The idea is very simple. Of course, for complicated term-structure models like HJM, it's much more involved, because the state is already a one-dimensional object -- a curve. And besides pricing, you need the idea of calibration, as you mentioned, because these volatilities are usually not historical; they're implied from other instruments. In practice it works like this: you have liquid instruments, liquid options, and a model with unknown parameters. First you do the calibration: you make sure your model correctly reprices all the simple instruments. Then you take the derivative whose price is unknown, because it's something very complicated, and you price it with the model calibrated to the simple derivatives. And the model, after pricing it and running sensitivities with respect to the market parameters, tells you how to hedge it. That's the idea.

AUDIENCE: You ought to do post-hoc analyses to see how the models did in the past, so you can adjust them.

DENIS GOROKHOV: Yeah. Yeah.

AUDIENCE: Is that a big part of what you have to do?

DENIS GOROKHOV: I would say that in general we are moving in that direction. For a complicated Monte Carlo model it's technically very difficult to do, but you can do it for the simpler models -- for swaps and so on.

AUDIENCE: --historical experience with the projection that you made.

DENIS GOROKHOV: No -- the situation is very different. Remember, we don't make any predictions here. It's risk-neutral pricing; there is no prediction. What we do is this: we're a bank, and we want to trade all kinds of very exotic derivatives which nobody knows how to price, but we have clients who want to buy them for different reasons -- they might want to speculate, or they want to manage their risk exposure, and so on.
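That 15-minute exercise, sketched here in Python rather than MATLAB (my translation, with made-up parameters): simulate risk-neutral terminal stock prices, average the discounted call payoff, and check it against the closed-form Black-Scholes price.

```python
import numpy as np
from math import log, sqrt, exp, erf

s0, k, t, r, sigma, n = 100.0, 100.0, 1.0, 0.02, 0.30, 1_000_000
rng = np.random.default_rng(7)

# Risk-neutral GBM terminal prices: S_T = S_0 exp((r - sigma^2/2)T + sigma sqrt(T) Z).
z = rng.normal(size=n)
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
mc_price = exp(-r * t) * np.maximum(st - k, 0.0).mean()   # discounted payoff

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
bs_price = s0 * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d1 - sigma * sqrt(t))

print(mc_price, bs_price)   # the two numbers should agree to a few cents
```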
So nobody knows how to price these except maybe 10 or 20 banks, because you need the infrastructure, you need to know how to do it, and you need business channels to off-load the risk. These are very exotic products. Now, the idea of dynamic hedging is this. Remember the Black-Scholes case: you sell an option and hedge it by holding a certain amount of the underlying. You don't make money from the market moving -- you want to be sure that whatever happens to the market, you're fully hedged. The market moves up, you make nothing; the market moves down, you make nothing. The way you make money is that the Black-Scholes price you charge for the option is the cost of executing the hedging strategy -- so if you charge a little bit more, that extra is what you earn. So it's very different from what you just described, which is more like a proprietary business. Big banks are not supposed to do that; that's more the hedge fund world, with very different models. What we do is manage big portfolios of all kinds of derivatives, price them, and charge a little bit extra so we can make a living -- but on the other hand, we don't take directional risk. That's the idea; it's all models. From the point of view of historical testing, you can still ask a question like this: suppose I go back 10 years and sell, say, this stock option. Then over the next 10 years, using historical data, my model tells me my Greeks -- my sensitivity with respect to the underlying. And you can ask: how would that delta hedge have performed historically? That's a reasonable question, because the model assumes the dynamics are pretty much continuous -- if the actual dynamics are very jerky, you could lose money, simply because you don't take those effects into account. That's an example of historical analysis we might run, but it has nothing to do with prediction. It's a whole different world: risk-neutral pricing. We don't take risk -- that's the whole idea. Although, because derivatives are very complex, banks do still bear some residual risk, because, remember, we cannot off-load the risk exactly. We rely on the assumption that we can rebalance our positions dynamically, move forward, and not lose money. That's the idea of it.

AUDIENCE: I have a question about the Monte Carlo pricing. You can set up the Monte Carlo using implied parameters from current prices of various derivatives in the market, which gives you a good baseline price. I'm wondering what other Monte Carlos you do to have a robust estimate of pricing and hedging cost. I would think there would be, I don't know, maybe some stress scenarios in the market, or alternatives. You probably don't just do one Monte Carlo study with current parameters; you probably have different sets. I'm wondering how extensive that is.

DENIS GOROKHOV: Absolutely, you're right. If you just run the Monte Carlo, you only know the price. But the price is nothing, because dynamic hedging -- this whole business of derivatives -- is not just about what something is worth right now, but about what to do when the market moves. So of course you calculate all your Greeks. That's very important.
But the Greeks -- say your delta -- are about linear terms. It's also very important to ask what happens to the portfolio under big moves: what happens if there's a very sharp jump in interest rates -- say rates jump up by 1%, or jump down? What happens if volatility in a particular time region blows up? You run all these kinds of analyses. There are big departments at the banks that look at all these risks, and it all rolls up to one business unit that looks at all the risks of the firm. It's a very big thing for a bank -- this notion of the stress test. Right now all the banks are very heavily regulated by the government, and the government can ask: for the whole bank -- not just for a particular desk that trades, say, swaptions -- what happens to all the cash flows you might have if interest rates jump by 100 basis points? We have a huge group of people -- quants, IT, risk managers -- looking at all these numbers and trying to understand them. For a big bank it's actually a very non-trivial problem. So it's a very good point. We do it as well as we can. Yeah.

AUDIENCE: Well, thanks again. And for a little time afterwards for--

[APPLAUSE]

DENIS GOROKHOV: Thank you.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
16_Portfolio_Management.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: All right, let's start. First of all, I hope you've been enjoying the class so far. And thank you for filling out the survey -- we got some very useful and interesting feedback. One piece of feedback -- this is my impression; I haven't had a chance to talk to my co-lecturers and colleagues yet, but I read some comments -- is that some of the problem sets are quite hard: the math part may be more difficult than the lectures. So I've been thinking about that. This is really the applications lecture, and after three more lectures by Choongbum, essentially all of the remainder will be applications. The original point of having this class is to show you how the math is applied -- to show you cases from different markets and different strategies in the real industry. So I've been trying to think how to give today's lecture with the right balance. This is, after all, a math class: should I give you more math, or -- judging from the survey, you've probably had enough math. So I'll focus a bit more on the application side, and from the survey it seems most of you enjoyed, or wanted to hear more about, the applications anyway. Now, as you've already learned from Peter's lecture, there is the so-called Modern Portfolio Theory. It's actually not that modern anymore, but we still call it that. You probably wonder how we actually use it in the real world. Do we follow those steps? Do we do those calculations? Today I'd like to share my experience on that -- both from my past in a different area, and today probably focused more on the buy side. Oh, come on in. Yeah -- actually, these are my colleagues from Harvard Management, so-- [CHUCKLES] --they will be able to ask me really tough questions. Anyway, here's how I'm going to start this class. You wondered why I handed each of you a page. Does everyone have a blank page by now? Yeah -- could you also pass these along? I want every one of you to use that blank page to construct a portfolio, OK? You'll say, well, I haven't done this before. That's fine -- do it totally from your intuition, from your knowledge base as of now. What I want you to do is write down how you would break down 100% of your portfolio. You'll say, give me choices. No, I'm not going to give you choices: think about whatever you'd like to put down. Wide open, OK? And don't even ask me the goal or the criteria -- base it on what you want to do. Totally free thinking, but I want you to do it in five minutes. So don't overthink it, and hand it back to me, OK? That's really the first part: I want you to show, intuitively, how you would construct a portfolio. So what does a portfolio mean? That I do have to explain. Say you're an undergraduate here: your parents give you some allowance, and you've managed to save $1,000 on the side. You decide to put it into investments -- buying stocks or whatever, or gambling, buying lottery tickets, whatever you can do. Just break down your percentages. It could be $1,000, or you could be a portfolio manager with hundreds of billions of dollars, or whatever.
Or say some of you raise some money and start a hedge fund -- you may have $10,000 just to start with. How do you want to use that money on day one? Just think about it. And while you're filling out those pages, please hand them back to me. It's your choice whether to put your name down. I'll start to assemble the ideas and put them on the blackboard, and sometimes I may come back and ask you a question -- why did you put this? That's OK; don't feel embarrassed, we're not going to put you on the spot. The idea is to use these examples to show you how we actually connect theory with practice. I remember that when I was a college student I learned a lot of different things, but I remember one lecture especially well -- one teacher told me one thing that I still remember vividly, so I want to pass it on to you. How do we learn something useful? You always start with observation. That's the physics side: you collect the data, you ask a lot of questions, you try to find the patterns. Then you build models -- you have a theory, and you try to explain what is working, what's repeatable, what's not repeatable. That's where the math comes in: you solve the equations. In economics, unlike physics, the repeatable patterns are a lot of times not so obvious. And what do you do after this? You come back to observations again: you confirm your theory, verify your predictions, find your errors -- and that feeds back into the loop. A lot of times, the verification process is really about understanding special cases. That's why today I really want to illustrate portfolio theory using a lot of special cases. So, can you start handing back your portfolio constructions now? Just hand back whatever you have. If you have one thing on the paper, that's fine -- or many things; answer as a portfolio manager, as a trader, or simply as a student, as yourself. All right, I'm getting these back. I'll start writing on the blackboard, and you can finish what you started. By the way, that's the only slide I'm going to use today. If I showed you a lot of slides, you probably couldn't keep up with me, so I'm going to write everything down and just take my time -- and hopefully you get a chance to think about questions as well. OK -- is everyone finished? Any more? OK, great. You guys are awesome. Let me have a quick look to see if I missed any. Wow, very interesting. I have to say, some people have high conviction: 100% in one thing. I'm not going to read your names, so don't worry -- I'll just read the answers people put down. Small-cap equities, bonds, real estate, commodities. Qualitative strategies, selection strategies, deep value models. Food/drug sector models, energy, consumer, S&P index, ETF funds, government bonds, top hedge funds. Natural resources, timberland, farmland, checking account, stocks, cash, corporate bonds, rare coins, lotteries, collectibles -- that's very unique. Apple stock, Google stock, gold, long-term savings annuities. Yahoo, Morgan Stanley stock -- I like that. [LAUGHTER] OK. A family trust. OK, I think that pretty much covers it; I would say the list is more or less here. So after you've done this -- while you were doing it -- what kind of questions came to your mind? Anyone want to-- yeah, please.
AUDIENCE: [INAUDIBLE] how do I know what's the right balance to strike in my portfolio? Whether it should be cash, bills, or things like that?

PROFESSOR: How do you actually do it -- what's the criterion? Before we answer how you group assets, or exposures, or strategies, or even people -- traders -- together, we have to ask ourselves another question: what is the goal? What is the objective? So let's be clear what portfolio management is. In this class we're not talking about how to come up with a specific winning strategy in trading or investing; we're talking about how to put strategies together. That's what portfolio management is about. So before we answer how, let's ask why. Why do we want a portfolio? That's a very, very good point. Let's understand the goals of portfolio management -- and before that, let's understand your situation, everyone's situation. Look at this chart. I'm going to plot your spending as a function of your age, from age 0 to age 100. Everyone's spending pattern is different, so I'm not claiming this is the pattern. When kids are young, they probably don't have a lot of hobbies or tuition, but they have some basic needs, so they spend. Then spending really goes up: your parents pay your tuition, or you borrow -- loans, scholarships. Then you finish college; now you're married, you have kids, you need to buy a house, buy a car, pay back student loans -- a lot more spending. Then you go on vacations, you buy investments; more spending keeps coming. But at a certain point it tapers down -- it doesn't keep going up forever. So that's your spending curve. And the other curve -- think about it -- is your income, your earnings curve. You earn nothing when you're just born. So this is spending, and let's call this age 50: your earnings typically peak around 50 -- it really depends -- then they come down. So that's your earnings curve. And do the two always match well? They don't. How do you make up the difference? You hope to have a fund, investments on the side, which can generate the cash flows to balance your earnings against your spending. That's just one simple way to put it. You have to ask about your own situation: what do your cash flows look like? Maybe my objective is to retire at age 50, and after 50 I'll live free and travel around the world -- then I calculate how much money I need. That's one situation. Another: I want to graduate and pay back all my student loans in one year. Typically people have to plan these things out. And if I'm managing a university endowment, I have to think about the university's operating budget -- how much money the university needs to draw from the fund every year -- while protecting the total fund for an essentially perpetual purpose: keeping it going and growing, asking for more contributions, and at the same time generating more return. If you have a pension fund, you have to think about the time frame over which the workers will retire and actually draw from the pension. Every situation is very different. Let me expand it even further.
You might think this is all about investment. No, no -- it's not just about investment. I was a trader for a long time at Morgan Stanley, and later a trading manager. When I had many traders working for me, the question I faced was how much money to allocate to each trader, how much risk each of them takes. A trader says: I have this winning strategy, I can make lots of money, why don't you give me bigger limits? No -- you're not going to get all the limits, all the capital we could give you. I'll explain: you have to diversify, and at the same time you have to compare the strategies along several parameters -- liquidity, volatility, and many others. And even if you're not managing people -- say Dan, [INAUDIBLE], Martin, and Andrew here start a hedge fund together. Each of them has great strategies -- Dan has five, Andrew has four -- so altogether they have 30 strategies. They raise some money, or they just pool their savings. How do they decide which strategy gets more money on day one? These questions are very practical. So: understand your goals first; then you're really clear on how much risk you can take. Which brings us back to: what is risk? As Peter explained in his lecture, risk is actually not very well defined. In Modern Portfolio Theory we typically talk about the variance, or standard deviation, of returns. Today I'm going to start with that concept and then try to expand beyond it -- so stay with it for now: risk means standard deviation. So what are we trying to do? You're familiar with this chart: return versus standard deviation. The standard deviation can't go negative, so we stop at zero, but the return can go below zero. And let me review one formula before I go into it -- I think it's useful to review what you learned previously, and I'll clarify the notation as well so you don't get confused. Peter mentioned Harry Markowitz's Modern Portfolio Theory, which won him the Nobel Prize in 1990, along with Sharpe and a few others. It's a very elegant piece of work, but today I'll give you some special cases to help you understand it. Say you have a portfolio. The expected return of the portfolio, call it R_P, equals the weighted sum of the expected returns of each asset -- you basically allocate linearly. And the variance -- let's just look at the variance, sigma_P squared: here the weights are vectors, and the sigma in the middle is the covariance matrix. That's all you need to know about the math at this point. Now I want us to go through an exercise: take the choices from the pages I just collected and place them on this chart. Let's start with one. What is cash? Cash has no standard deviation, so it's going to sit on this axis, with a positive return. That's here -- call this cash. And let me think of another example: where does the lottery fall? Say you buy Powerball, and suppose you put everything into the lottery. You're going to lose, so your expected return is very close to minus 100%, and your standard deviation is probably very close to 0. So you'd be here.
Some of you will say: no, no, it's not exactly zero. OK, fine -- maybe it's somewhere here: not exactly minus 100%, but still a pretty small deviation around losing all your money. What about coin flipping? Say you put all your money on a fair coin flip. The expected return is zero. What is the standard deviation?

AUDIENCE: 100%?

PROFESSOR: Good -- 100%. So we've covered the three extreme cases. Now, where is the US government bond -- call it the five-year note or ten-year bond? The return is better than cash, with some volatility -- let's put it here. What is investing in a start-up, a venture capital fund, like? Pretty far up there: you might get a very high return, but you can lose all your money -- probably somewhere here. Buying stocks -- call it somewhere here. In our last application lecture you heard about investing in commodities -- trading gold, oil. That has higher volatility, sometimes higher returns -- let's put commodities here. An ETF typically has lower volatility than a single stock, because it's like an index fund -- so, here. Are there any other choices you'd like to put on this map? OK, let me look at what you came up with. Real estate -- I'd say probably somewhere around here. Private equity, probably here. Investing in hedge funds, somewhere here. I think that's enough examples to cover. Now let me turn the table around and ask you a question: given this map, how would you pick your investments? You've learned the portfolio theory: as a so-called rational investor, you try to maximize your return and at the same time minimize your standard deviation. I hesitate to use the term "risk," because, as I said, we need to define it better -- but you try to minimize the horizontal axis while maximizing the vertical one. In other words, you look for the highest possible return at the lowest possible standard deviation. So would you pick this one? This one? OK, eliminate those two. But everything over here is still possible, right? And that's where we come to the efficient frontier. What is the efficient frontier? It's the set of possible combinations of those investments pushed out to the boundary, where you can no longer find another combination with a higher return at the same standard deviation -- and equally, for the same return, you can no longer find a combination with a lower standard deviation. You've reached the boundary. That's the efficient frontier. How do you find it? That's essentially the work that won the Nobel Prize -- it's more than that, obviously, but you got the flavor from the previous lectures. What I'm going to do today is reduce all of this to the special case of two assets, and we can derive a lot of intuition from that. So we have sigma and R. We ignore what's below zero return now -- we don't want to be there -- and stay in the upper right. Let's write things out for the two assets. What is R_P? It's w_1 R_1 plus (1 minus w_1) R_2 -- very simple math. And what is sigma_P, the standard deviation of the portfolio -- or rather its square, the variance? We know what it is in the two-asset special case.
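Written out, the two-asset formulas just referenced are (standard MPT algebra, with $w_2 = 1 - w_1$):

$$R_P = w_1 R_1 + (1 - w_1) R_2, \qquad \sigma_P^2 = w_1^2 \sigma_1^2 + 2\rho\, w_1 (1 - w_1)\, \sigma_1 \sigma_2 + (1 - w_1)^2 \sigma_2^2,$$

which is the two-asset case of the general matrix form $R_P = w^\top R$ and $\sigma_P^2 = w^\top \Sigma w$ mentioned earlier. All the special cases that follow come from particular choices of $\rho$, $\sigma_1$, and $\sigma_2$ in this expression.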
So let me give you a further restriction: consider R_1 equal to R_2-- again, here meaning expected return; I'm simplifying some of the notation. And sigma_1 equal to 0, and sigma_2 not equal to 0. So what is rho? What is the correlation? AUDIENCE: It's really undefined. PROFESSOR: It's really undefined, yes. AUDIENCE: [INAUDIBLE] no covariance. PROFESSOR: That's right-- there's no covariance, because the first asset has no volatility, so the cross term drops out anyway. OK, so let's look at this. You have sigma_2 here. Sigma_1 is 0. And you have R_1 equal to R_2. What is R_P? It's R, right? Because the weighting doesn't matter. So you know it's going to fall along this line. Here is where weight one equals 0-- you weight everything on the second asset. Here you weight the first asset 100%. So you have the possible combinations along this flat line. Very simple, right? I like to start with a really simple case. Now, what if sigma_1 is also not 0, but sigma_1 equals sigma_2? And further, I impose the correlation to be 0. What does this line look like? So I have sigma_2 equal to sigma_1. And R_1 is still equal to R_2, so R_P is still equal to R_1 or R_2, right? What does this line look like? The volatility is the same, and the return of each asset is the same. You have two strategies or two instruments, and they are uncorrelated. How would you combine them? You take the derivative of the variance with respect to the weight, right? And then you minimize it. At w_1 equal to 0 or w_1 equal to 1 you're at this point: whichever one you choose, the portfolio's return and variance will be right here. But when you try to find the minimum variance-- I'm not going to do the math; you can check it by yourself afterwards-- you will find it at the point where they are equally weighted, half and half. And there you get sigma divided by the square root of two. So you actually get a significant reduction in the variance of the portfolio by choosing a half-and-half, uncorrelated portfolio. So what's that called? What's that benefit? Diversification, right? When you have assets that are less than perfectly positively correlated, you can achieve the same return with a lower standard deviation. You'll say, OK, that's fairly straightforward. So let's look at a few more special cases; I really want you to establish this intuition. In the same example, what if rho equals 1-- perfectly correlated? Then you can't reduce anything, right? You end up at just this one point. You agree? OK. What if it's perfectly negatively correlated? What does this line look like? If you weight everything to one side, you're still going to get this point. But if you weight half and half, you're going to achieve basically zero variance. I think we showed that last time; you learned that last time. OK, so let's look beyond those cases. So what now? Let's look at the case where R_1 does not equal R_2 anymore. Sigma_1 equals 0-- there's no volatility in the first asset. So that's cash: a riskless asset as the first one. And let's say R_1 is less than R_2. So you have the cash asset, and then you have a non-cash asset. And rho equals 0, zero correlation.
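Here is a minimal numerical check of the half-and-half result, under the stated assumptions (equal volatilities, zero correlation; the 20% volatility is an arbitrary choice):

```python
import numpy as np

# Two uncorrelated assets with the same return and the same volatility.
sigma = 0.20
w1 = np.linspace(0, 1, 101)
var_P = w1**2 * sigma**2 + (1 - w1)**2 * sigma**2   # rho = 0, cross term drops
i = np.argmin(var_P)
print(w1[i], np.sqrt(var_P[i]), sigma / np.sqrt(2))
# Minimum is at w1 = 0.5, where sigma_P = sigma / sqrt(2).
```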
So let's look at what this line looks like. R_1, R_2, sigma_2 here. When you weight asset two 100%, you get this point, right? When you weight asset one 100%, you get this point. So what's in the middle-- your return as a function of standard deviation? Can someone guess? AUDIENCE: A parabola? Should it be a parabola? PROFESSOR: Try again. AUDIENCE: A parabola. PROFESSOR: Yeah, I know, I know. Thank you. Are there any other answers? OK, let me just derive it very quickly for you. Sigma_1 equals 0, rho equals 0. What's sigma_P? It's just w_2 sigma_2-- sigma_P is proportional to sigma_2 with the weighting. And what's R? R is a linear combination of R_1 and R_2. So in this case your return is a linear function of the standard deviation-- it's a straight line. And the slope? Let's wait on the slope; we'll come back to it. This actually relates back to the so-called capital market line, or capital allocation line. Last time we talked about the efficient frontier-- that's when we have no riskless asset in the portfolio, right? When you add cash, you can combine the cash with the portfolio and reach a higher boundary, a higher efficient frontier-- essentially a higher return for the same exposure. So let's look at a couple more cases, and then I'll tell you where this goes. Take R_1 less than R_2, volatilities not 0, sigma_1 less than sigma_2, and a correlation of negative 1. So you have asset one, asset two. And as we know, when you pick the right weights-- half and half in the equal-volatility case-- this goes to 0. This is a quadratic function; you can verify and prove it later. And when rho is equal to 0-- and actually, sigma_1 should be here-- the variance can no longer be minimized to 0. So this part is your efficient frontier. I think that's enough two-asset examples for the efficient frontier; you get the idea. So what if we have three assets? Let me just touch on that very quickly. If you have one more asset here, you can essentially solve the same equations. And in a special case-- you can verify afterwards-- if all the volatilities are equal and there's zero correlation among the assets, you can minimize down to sigma_P = sigma_1 divided by the square root of three. OK. So it seems pretty neat, right? The math is not hard, and it's straightforward. But it gives you the idea of how to answer the question of how to select among assets, starting with two. So why are two assets so important? What's the implication in practice? It's actually a very popular combination. A lot of asset managers simply benchmark to bonds versus equity. One famous combination is 60/40: 60% in equity, 40% in bonds. Even nowadays, for any fund manager, people will still ask you to compare your performance with that combination. So the two-asset example seems quite easy and simple, but it's actually a very important one to compare against. And that will lead me into the risk parity discussion. But before I get to risk parity, I want to review the concepts of beta and the Sharpe ratio. Your portfolio return can be written against a benchmark: R_P = R_f + beta (R_m - R_f) + alpha, plus a residual. R_m is the benchmark's expected return, R_f is the risk-free return-- essentially a cash return-- and alpha is what you can generate additionally.
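The same diversification arithmetic extends past three assets: for N uncorrelated assets with equal volatility, the equal-weight portfolio has sigma_P = sigma / sqrt(N). A small sketch (the 20% volatility is an assumed value):

```python
import numpy as np

sigma = 0.20
for n in [1, 2, 3, 10, 100]:
    w = np.full(n, 1.0 / n)              # equal weights
    Sigma = np.eye(n) * sigma**2         # zero correlation, equal volatility
    print(n, np.sqrt(w @ Sigma @ w), sigma / np.sqrt(n))
# The two printed numbers agree: portfolio vol shrinks like 1/sqrt(N).
```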
So for simplicity, let's not worry about the other terms-- they're not necessarily small, but I'll just drop them. So that's your beta. Now what is your Sharpe ratio? It's the excess return over the risk: Sharpe = (R_P - R_f) / sigma_P. Sometimes the Sharpe ratio is also called risk-weighted return, or risk-adjusted return. And how many of you have heard of Kelly's formula? Kelly's formula basically tells you, when you know your winning probability is p-- say in a gambling example with an even-money bet-- how much to size up, how much you want to bet. It's a very simple formula: the fraction to bet is f = 2p - 1. So if your winning probability is 50/50, how much do you bet? Nothing. If p equals 100%, you bet 100% of your position. And when p equals 0, your losing probability is 100%, right? So what do you do? You bet 100% the other way around-- you bet the other side. So I leave that for you to think about. That's the discrete-outcome case. But when you construct a portfolio, this leads to the next question, in addition to the efficient frontier discussion: is that really all there is to asset allocation? Is that how we calculate the weights of each asset or strategy to choose from? The answer is no, right? So let's look at a 60/40 portfolio example. Again, two assets: stocks at, let's say, 60%, and 40% bonds. Typically your stock volatility is higher than the bonds', and the expected return is also higher. So your 60/40 combination likely falls on the higher-return, higher-standard-deviation part of the efficient frontier. That's typically what people did before 2000. For a real asset manager, the easiest or passive way is just to allocate 60/40. But after 2000, the equity market peaked and bonds had a huge rally, after Greenspan first cut interest rates ahead of Y2K in the year 2000. You may think it's funny, but at that time everybody worried about the year 2000-- all the computers were going to stop working because old software wasn't prepared for crossing the millennium. So they cut interest rates for this event. Actually nothing happened; everything was OK. But that left the market with plenty of cash, and then the tech bubble burst. So 60/40 was a good portfolio, but then in 2008, when the equity market crashed, the government bond market had a huge rally. And that made people question it: is this 60/40 allocation, set simply by market value, the optimal way of doing it, even though you are on the efficient frontier? How do you compare different points? Is it a simple choice driven by your objectives and your situation, or is there actually another way to optimize it? That's where the risk parity concept came in. The concept has been around, but the term was really coined in 2005-- quite recently-- by a guy named Edward Qian. He basically said, OK, instead of allocating 60/40 based on market value, why shouldn't we consider allocating risk? Instead of targeting a return or an asset amount, let's think about a case where we have equal weighting of risk between the two assets. So risk parity really means equal risk weighting rather than equal market exposure.
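A hedged sketch of the sizing rule just described. This is the textbook Kelly fraction f = 2p - 1 for a bet that pays even odds, which is how I read the lecture's discrete example; different payoff odds would change the formula:

```python
# Kelly fraction for an even-money bet with win probability p:
# f = 2p - 1; a negative f means bet the other side.
def kelly_even_money(p: float) -> float:
    return 2 * p - 1

for p in [0.5, 0.6, 1.0, 0.0]:
    print(p, kelly_even_money(p))
# p=0.5 -> bet nothing; p=1.0 -> bet everything;
# p=0.0 -> f=-1, i.e. bet everything the other way.
```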
And then the further step he took: he said, OK, so this is equal risk. You have a lower return and a lower risk, a lower standard deviation. But sometimes you really want a higher return, right? How do you satisfy both-- higher return and lower risk? Is there a free lunch? There is, actually. It's not quite free, but it's the closest thing. You've probably heard this phrase many times: the closest thing in investing to a free lunch is diversification. And he's using leverage here as well. Let me talk a bit more about diversification and give you a couple more examples. That phrase about the free lunch and diversification-- was that from Markowitz? Or people attributed it to him. OK, but anyway. Let me give you another simple example. Consider two assets, A and B. In year one, A basically doubles; in year two, it goes down 50%. So where does it end up? It started at 100. It goes up to 200. Then it goes down 50% on the new base, so it returns nothing, right? It comes back. Asset B in year one loses 50%, then doubles, up 100%, in year two. So asset B goes down to 50 and then comes back up to 100. That's when you look at them independently. But what if you had a 50/50 weight of the two assets? Someone who is quick on math can tell me: what does that change? A goes up like that, B goes down like that. Now you have 50/50 A and B. So let's look at the magic. In year one, you have only 50 in A, and it goes up 100%-- that's up 50 on the total basis. You also weight B at 50, but it goes down 50%, so you lose 25. So the combined 50/50 portfolio that started at 100 is up to 125 at the end of year one. Then you rebalance, right? You have to come back to 50/50. What do you do? Your A holding has grown to 100 out of 125, so you no longer have equal weights. You sell A down to 62.5 and use the money to buy B up to 62.5. So you have a new 50/50 weighted portfolio. Again, you can work out the math. But what happens in the following year, when A comes back down 50% and B goes up 100%? You return another 25%, positively, without volatility. So over the two years you have a straight line up, and you can keep going. That's the so-called diversification benefit. And in the 60/40 stock-and-bond context, that's really the idea people think about when combining them. So let me talk a little about risk parity and how you actually achieve it. I'll try to leave plenty of time for questions. So here's the return axis; let's leave cash here. In the previous example I gave you, you have two assets: one is cash, R_1; the other is not, and has a volatility of sigma_2. You have this point, right? And I said, what's in between? It's a straight line. That's your asset allocation, the different combinations. But did it occur to you-- why can't we go beyond this point? This point is where we weight w_2 equal to 1 and w_1 equal to 0-- everything in asset two. What if you go beyond that? What does that mean? Can we have w_1 equal to minus 1 and w_2 equal to plus 2? They still add up to 100%. But what does negative 1 mean? Borrowing, right? You went short cash 100%-- you borrowed money. You borrow 100% in cash, then put it into equities or whatever risky asset is here.
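Here is a minimal simulation of the rebalanced 50/50 example just worked out on the board; the return sequences are exactly the lecture's plus-100%/minus-50% numbers:

```python
# Asset A: +100% then -50%.  Asset B: -50% then +100%.
# Each alone returns 0% over two years; a rebalanced 50/50 mix does not.
rets_A = [1.0, -0.5]
rets_B = [-0.5, 1.0]

wealth = 1.0
for rA, rB in zip(rets_A, rets_B):
    a, b = wealth / 2, wealth / 2        # rebalance to 50/50 each year
    wealth = a * (1 + rA) + b * (1 + rB)
    print(wealth)                        # 1.25, then 1.5625: +25% per year
```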
So you have plus 2, minus 1. What does the return look like when you do this? R_P = w_1 R_1 + w_2 R_2 = minus R_1 plus 2 R_2. That's your return-- it's this point here. And what does your variance, or standard deviation, look like? As we did before: sigma_P is simply w_2 sigma_2, so in this case it's 2 sigma_2. You're two times as risky as asset two. So this introduces the concept of leverage. Whenever you go short, you introduce leverage. On your balance sheet, you have two times asset two, and you're also short one unit of the other instrument-- that's your liability. Your net is still one. So what risk parity says is, OK, we can target equal risk weighting, which for stocks and bonds gives you something like 25% stocks and 75% bonds. So you have a lower return. But if you leverage it up, you actually get a higher expected return for the same amount of standard deviation. You achieve it by leveraging up. Obviously, you are levered-- that's the other implication. We haven't talked about liquidity risk, but that's a different topic. So what does your Sharpe ratio look like for a risk parity portfolio? You essentially maximize the Sharpe ratio, the risk-adjusted return, by achieving the risk parity portfolio. The 60/40 is here; you actually maximize that. And does leverage matter? When you leverage up, does the Sharpe ratio change or not? AUDIENCE: It splits in half. So you've got twice the [? variance ?] [INAUDIBLE]. PROFESSOR: So let's look at that straight line, this example. We said the Sharpe ratio equals (R_P - R_1) / sigma_P. What is sigma_P? It's 2 sigma_2 when you leverage up, and R_P - R_1 is 2(R_2 - R_1), so the ratio equals (R_2 - R_1) divided by sigma_2. That's the same as at this point. It's essentially the slope of the whole line-- it doesn't change. OK, so now you can see the connection between the slope of this line, the Sharpe ratio, and how that links back to beta. Let me ask you another question. When the portfolio has a higher standard deviation, sigma_P, will the beta to a specific asset increase or decrease? What's the relationship, intuitively? Let's take the 60/40 example. Your portfolio has stocks and bonds in it. I'm asking you: what is the beta of this 60/40 portfolio to the equity market? When the portfolio becomes more volatile, is your beta increasing or decreasing? You can derive it. I'm going to tell you the result, but I'm not going to do the math here: in this special case, beta equals sigma_P over sigma_2. All right, so much for all of this. It sounds like everything is nicely solved. So coming back to the real world-- let me bring you back. Are we all set for portfolio management? Can we program a robot to do this? Why do we need all these people working on portfolio management? Why do we need anybody to manage a hedge fund? You can just give the money, right? So why do you need somebody, anybody, to put it together? Before I answer this question, let me show you a video. [VIDEO PLAYBACK] [HORN BLARING] [END VIDEO PLAYBACK] OK. Has anyone heard of the London Millennium Bridge? It was a bridge built around that time, thought to have the latest technology.
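A small sketch of the leverage point, assuming one riskless asset and one risky asset with hypothetical numbers: the Sharpe ratio comes out the same at every leverage level, exactly as derived above.

```python
import numpy as np

R_f, R_2, s_2 = 0.02, 0.08, 0.15     # assumed cash and risky-asset numbers

for w2 in [0.5, 1.0, 2.0]:           # w2 = 2 means borrowing 100% of cash
    w1 = 1.0 - w2
    R_P = w1 * R_f + w2 * R_2
    s_P = abs(w2) * s_2              # sigma_1 = 0, so only asset two counts
    print(w2, R_P, s_P, (R_P - R_f) / s_P)
# The Sharpe ratio (R_P - R_f)/sigma_P equals (R_2 - R_f)/sigma_2 throughout.
```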
And it was supposed to absorb everything perfectly. You've heard about soldiers marching across a bridge and crushing it: when everybody walks in sync, the force gets synchronized, and if a bridge was not designed to take that synchronized force, it collapses-- that has happened in the past. So when they designed this one, they took all of that into account. But here's what they hadn't taken into account: the supports allow horizontal movement to take the tension away, and the problem is that when people see more people walking in sync, the whole bridge starts to sway. Then the only way to keep your balance standing on the bridge is to walk in sync with everyone else. That's a survival instinct. I got this from my friend at Fidelity, Ren Cheng. Dr. Ren Cheng brought this up to me. He said, oh, how do you think about portfolio risk? This is what happened in the financial market in 2008. When you think you have everything figured out, you have the optimal strategy-- but when everybody starts to implement the same optimal strategy for themselves as individuals, the whole system is actually not optimized. It's actually in danger. Let me show you another one. [VIDEO PLAYBACK] [CLACKING] OK. These are metronomes, right? They can start however you like. Are they in sync? Not yet. What is he doing? You only have to listen to it; you don't have to see it. So what's going on here? Metronomes don't have brains, right? They don't really follow the herd. Why are they synchronizing? And if you're expecting them to get out of sync, it's not going to happen. OK, so I'm going to stop right here. [END VIDEO PLAYBACK] You can try it yourself-- there's actually a book written on this as well. The phenomenon here is nothing new. But when he did this-- when he raised the plate and put it on the Coke cans-- what happened? Why is that so significant? AUDIENCE: Because now they're connected. PROFESSOR: They're connected. Right. They are interconnected. Before, they were individuals. Now they're connected. And why did I show you the London bridge and this at the same time? What does this have to do with portfolio management? AUDIENCE: [INAUDIBLE] people who are trading, if they have the same strategy, [INAUDIBLE] affect each other, they become connected in that way-- PROFESSOR: Right. AUDIENCE: If as an individual, you are doing a different strategy, if everybody has been doing something different, you can maximize [? in the space. ?] PROFESSOR: Very well said. If you're looking for the stationary best way of optimizing your portfolio, chances are everybody else is going to figure out the same thing. And eventually you end up in that situation, and you actually get killed. OK, so that's the thing. What I want you to walk away with today is not that all the problems are solved. You might say, oh, the problem's solved, the Nobel Prize was given, so let's just program it. No-- it's a dynamic situation. That's what makes the problem interesting, right? As the younger generation coming into the field, the excitement is that there are still a lot of interesting unsolved problems out there. You can beat the others already in the field. So that's one takeaway.
And what are the takeaways you get from listening to all of this? AUDIENCE: Diversification is a free lunch. [CHUCKLES] PROFESSOR: Diversification is a free lunch, yes. Not so free in the end, right? It's free to a certain extent. But it's better than not being diversified, and it depends on how you do it. There is a way you can optimize. I actually want to finish a few minutes early so that you can ask me questions; it's probably better to have an open discussion. What I want you to walk away with, to really keep in mind, is that the field of finance, and particularly quantitative finance, is not mechanical. It's not like solving physics problems. It's not like you can get everything figured out so it becomes predictable. The level of predictability is very much linked to a lot of other things. In physics, you solve Newton's equations: you have a controlled environment and you know what outcome you're getting. But here, when you participate in the market, you are changing the market. You are adding other factors into it. So think from a broader scope rather than just solving the mathematics. That's why I come back to the original point-- if you walk away from this lecture with one thing, remember what I said at the very beginning: solving problems is about observing, collecting data, building models, then verifying and observing again. OK, so I'll end right here. Questions? AUDIENCE: Yeah, just [INAUDIBLE] question. Does this have anything to do with-- it kind of sounds like game theory, but I'm not exactly too sure. Because you have a huge population and no stable equilibrium. Does it have anything to do with game theory, by any chance? PROFESSOR: It has a lot to do with game theory, but not only game theory. In game theory, you have a pretty well-defined set of rules-- two people playing chess against each other. That's where a computer can actually become smarter. In this market situation, you have so many people participating without clearly defined rules. There are some rules, but not always clearly defined. So it's much more complex than game theory. But it's part of it, yeah. Dan, yeah? AUDIENCE: Can you talk a little bit about why some of the risk parity portfolios did so poorly in May and June when rates started to rise, and what about their portfolios allowed them to do that? PROFESSOR: Good question, right. As you can see here, what the risk parity approach essentially does is weight more on the lower-volatility asset. In this case, the question is: how do you know which asset has low volatility? You look at historical data, from which you conclude bonds have the lower volatility. So you overweight bonds. That's the essence of it, right? Then bonds started to sell off after Fed chairman Bernanke said he was going to taper quantitative easing. From a very low yield level, yields went much higher-- interest rates went up, and bonds got sold off. So this portfolio did poorly. Now the question is: does that prove the risk parity approach wrong, or right? Does the financial crisis of 2008 prove risk parity a superior approach, or does the May/June experience prove it a less-favored approach? What does it tell us? Think about it. It's really inconclusive. You observe, you extrapolate from your historical data.
But what you are really doing is trying to forecast volatility, forecast return, forecast correlation, all based on historical data. A lot of people use this example: it's like driving by looking at the rear-view mirror. That's the only thing you look at, right? You don't know what's happening in front of you. You have another question? AUDIENCE: Given all this new information, do you find that people are still playing a similar [INAUDIBLE] strategy with portfolio management? PROFESSOR: Very much true. Why? You'd say people should be smarter than that. But it's very difficult to discover new asset classes. It's also very difficult to invent new strategies with a better winning probability. The other very interesting phenomenon is that most traders, portfolio managers, and investors are career investors-- meaning, just like if I'm a baseball coach hired to coach a baseball team, my performance is really measured against the other teams, whether I win or lose. A portfolio manager or investor is also measured against their peers. So the safest thing for them to do is to benchmark to an index, to the herd. There's very little incentive for them to get out of the crowd, because if they are wrong, they get killed first-- they lose their jobs. So the tendency is to stay with the crowd. It's a survival instinct, again, like the other examples. The optimal strategy for an individual portfolio manager is really to do the same thing everyone else is doing, because you stay with the force. AUDIENCE: So you said, given that we have all these groups, in the end it's not that we could just leave it to the computers. We need managers. So what are the managers doing differently, other than [INAUDIBLE]? PROFESSOR: Can you try to answer that question yourself? What's the difference between a human and a computer? What value can a human add to what a computer can do? AUDIENCE: Considering the factors, the market factors, and news, and what's going on. PROFESSOR: So: taking in more information, processing information, making a judgment with a more holistic approach. It's an interesting question. I have to say that computers are beating humans in many different ways. Can a computer ever get to the point of actually beating a human in investment? I can't confidently tell you that it's not going to happen. It may happen. I don't know. Any other questions? Yeah? AUDIENCE: Just to add to that. I think there is more to management than just investing. Managers also have key roles in HR, in managing people and ensuring that they're maximizing their talents-- not just, oh, how much money did you make, but are you moving forward in your career while you're there? So I think management has a role to play in that as well, not just investment. PROFESSOR: Yeah, I think that's a good point. All right, so-- oh, sure. Jesse? AUDIENCE: What is your portfolio breakdown? PROFESSOR: My personal portfolio? Well, I am actually very conservative at this point, because if you look at my spending and earning curve, I'm basically trying to protect principal rather than maximize return at this point. So I would be sliding down toward this part rather than trying to go to this corner. Now, I haven't really talked much about risk. What is risk, right? So far I've talked about volatility, or standard deviation.
But as we all know-- and as Peter mentioned last time as well-- there are many other ways to look at risk: value at risk, a half or truncated distribution, or simply the maximum loss you can afford to take. Still, looking at standard deviation, at volatility, is an elegant way: I can show you in very simple math how the concept plays out. In the end, though, volatility is really not the best measure of risk, in my view. Why? Let me give you another simple example before we leave. Say this axis is time, and this is your cumulative return, or your dollar amount. You start from here, and the curve keeps going up-- it never goes down. Does anyone not like this kind of performance? Of course you like it; this is very nice. But what's the volatility of that? The volatility is probably not low, right? What I'm trying to say is that matching on expected return and volatility can still fail to select the best combination, because what you really care about is not just your volatility. And again, bear in mind that all this discussion of Modern Portfolio Theory rests on one key assumption: the Gaussian distribution, the normal distribution, where two parameters, mean and standard deviation, characterize the whole distribution. But in reality, you have many other distributions. So it's a concept still up for a lot of discussion and debate. I want to leave that with you as well. Yeah? AUDIENCE: Just going back to the same question about management and how it adds value-- I think there were some people who added a tremendous amount of value in the financial crisis. And they were doing the same mathematics. But the difference was that their expected returns for various assets were different from the broad market's. So if you can just know what the expected return is, probably that is the only answer to the whole portfolio management debate. PROFESSOR: Yes. If you can forecast expected return, then-- yeah, now you know the game. You've solved a big part of the puzzle. Yeah? AUDIENCE: What management does is how good it can do [INAUDIBLE] expected return, full stop. Nothing more. PROFESSOR: I disagree that that's the only thing. Because given two managers with the same expected return, you can still further differentiate them, right? And that's what all this discussion is about. But yes, expected return will drive a lot of these decisions. If you knew a manager's expected return for sure-- say three years later he's going to make 150%-- you wouldn't really care what happens in between; you'd just ride it through. But the problem is you don't know for sure. You will never be sure. AUDIENCE: I'd like to comment on that. PROFESSOR: Sure. AUDIENCE: What [INAUDIBLE] looked at, in simplified settings, was estimating returns and volatilities. And the conclusion of the problem was basically that you cannot estimate returns very well, even with more data over a historical period. But you can estimate volatility much better with more data. So there's really an issue of perhaps luck in getting the return estimates right with different managers-- it's hard to prove that there was really expertise behind that-- although with volatility, you can have improved estimates.
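To make the always-up example concrete, here is a sketch of an equity curve that only rises (the period returns are made up): its return volatility is sizable, even though the maximum drawdown-- arguably closer to what an investor actually fears-- is exactly zero.

```python
import numpy as np

# A path that only goes up: every period return is positive, but lumpy.
rets = np.array([0.01, 0.20, 0.02, 0.15, 0.01, 0.25])
curve = np.cumprod(1 + rets)

vol = rets.std(ddof=1)                        # volatility is sizable
peak = np.maximum.accumulate(curve)
max_drawdown = ((curve - peak) / peak).min()  # but you never lose money
print(vol, max_drawdown)                      # drawdown is exactly 0
```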
And I think possibly with a risk parity portfolio, those portfolios are focusing not on return expectations, but saying: if we're going to consider different choices based just on how much risk they have, and equalize that risk, then the expected returns should be comparable across them, perhaps. PROFESSOR: Yeah. So that highlights the difficulty of forecasting return, forecasting volatility, forecasting correlation. Risk parity appears to be another elegant way of proposing the optimal strategy, but it has the same problems. Yeah? AUDIENCE: Actually, I also wanted to highlight: you mentioned the Kelly criterion, which we haven't covered the theory for previously. But I encourage people to look into that. It deals with issues of multi-period investment, as opposed to single-period investment. All this classical theory we've been discussing covers just a single-period analysis, which is an oversimplification of an investment. When you are investing over multiple periods, the Kelly criterion tells you how to optimally bet with your bankroll. And there's an excellent book-- at least I like it-- called Fortune's Formula, which talks about, as we already said, the origins of options theory in finance, and it does get into the Kelly criterion. There was a rather major discussion between Shannon, a mathematician at MIT, who advocated applying the Kelly criterion, and Paul Samuelson, one of the major economists-- PROFESSOR: Also from MIT. AUDIENCE: Also from MIT. And there was a great dispute about how you should do portfolio optimization. PROFESSOR: That's a great book. And a lot of the characters in that book are actually from MIT-- Ed Thorp, for example. It's really about people trying to find the Holy Grail, the magic formula-- not quite to that extent, but finding something other people haven't figured out. It's a very interesting history, with big names like Shannon, who was very successful in other fields. In the later part of his career and life, he really devoted most of his time to studying this problem. You know Shannon, right? Claude Shannon? He's the father of information theory, which has a lot to do with the later information age and the invention of computers-- very successful, yeah. So anyway, we'll end the class right here. No homework for today, OK? All right, thank you.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
2_Linear_Algebra.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So let's begin. Today, I'm going to review linear algebra. I'm assuming that you already took some linear algebra course, and I'm going to review just the relevant content that will appear again and again throughout the course. But do interrupt me if some concepts are not clear, if you don't remember some concept from linear algebra. I hope you do, but please let me know. You have very different background knowledge, so it's hard to tune to one special group. I tailored these lecture notes so that they're a review for those who took the most basic linear algebra course. So if you have that background and still don't understand something, please feel free to interrupt me. I'm going to start by talking about matrices. A matrix, in its simplest form, is just a collection of numbers-- for example, [1, 2, 3; 2, 3, 4; 4, 5, 10]. You can pick any number of rows, any number of columns; you just write down numbers in a rectangular format, and that's a matrix. What's special about it? What kind of data can you arrange in a matrix? I'll take an example which looks relevant to us. We can index the rows by stocks, by companies-- Apple; Morgan Stanley should be there; and then Google. And we can index the columns by dates-- say July 1st, September 1st, October 1st. For the numbers, you can pick whatever data you want, but probably the sensible data will be the stock price on that day. I don't know-- for example, 400, 500, and 5,000. That would be great. So that kind of data is just a matrix. Defining a matrix is really simple-- so why is it so powerful? That's the application point of view: a matrix as a collection of data. But from a theoretical point of view, an m by n matrix A is an operator. It defines a linear transformation from the n-dimensional vector space to the m-dimensional vector space. That sounds a lot more abstract than this, so let's take a very small example. If I use the 2 by 2 matrix [2, 0; 0, 3], then [2, 0; 0, 3] times, let's say, [1, 1] is just [2, 3]. Does that make sense? It's just matrix multiplication. Now try to combine the two points of view. What does it mean to have a linear transformation defined by a data set? Things start to get confusing. Why does a data set define a linear transformation, and does that have any sensible meaning? That's a good question to have in mind today, and try to remember it. Because today I'll develop the theory of eigenvalues and eigenvectors in a purely theoretical language-- but it can still be applied to these data sets and give very important properties, very important quantities. You can get useful information out of it. Try to make sense of why that happens. So that will be the goal today: to treat linear algebra as a theoretical thing, but to remember that there's a real data set underlying it. This board doesn't go up-- that was a bad choice for my first board. Sorry.
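For concreteness, here is the professor's 2 by 2 example in a couple of lines of Python-- using numpy is an implementation choice, not something the lecture specifies:

```python
import numpy as np

A = np.array([[2, 0],
              [0, 3]])
v = np.array([1, 1])
print(A @ v)   # [2, 3]: this matrix scales x by 2 and y by 3
```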
So the most important concepts for us are the eigenvalues and eigenvectors of a matrix. A number lambda and a vector v are an eigenvalue and eigenvector of a matrix A if A times v is equal to lambda times v. We also say that v is an eigenvector corresponding to lambda. So remember, eigenvalues and eigenvectors always come in pairs, and they are defined by the property that A v = lambda v. First question: does every matrix have eigenvalues and eigenvectors? It looks like a very strange equation to satisfy. But if you rewrite it in the form (A - lambda I) v = 0, that still looks strange, but at least you can see that this can happen only if A - lambda I does not have full rank-- so the determinant of (A - lambda I) is equal to 0. If and only if, in fact. And now comes a very interesting observation: det(A - lambda I) is a polynomial of degree n in lambda. I made a mistake-- I should have said this is only for n by n matrices, only for square matrices. Sorry. It's a polynomial of degree n, and a degree-n polynomial always has a root-- it might be a complex number. I'm really sorry, I'm nervous in front of the video. I understand why you were saying that it doesn't necessarily exist. Let me repeat; I made a few mistakes here. For an n by n matrix A, a complex number lambda and a vector v are an eigenvalue and eigenvector if they satisfy this condition. Lambda doesn't have to be real-- sorry about that. And rephrased this way, because the determinant is a polynomial, it always has at least one solution. That was just a side point, very theoretical. So we see that there always exists at least one eigenvalue and eigenvector. Now that we've seen existence, what is the geometric meaning? Let's go back to the linear transformation point of view. Suppose A is a 3 by 3 matrix. Then A takes a vector in R^3 and transforms it into another vector in R^3. But if you have this relation, what happens is that A, when applied to v, just scales the vector v: if this was the original v, A of v will just be lambda times this vector. That will be our A v, which is equal to lambda v. So eigenvectors are those special vectors which, under the linear transformation, just get scaled by some amount-- and that amount is exactly lambda. What we've established so far is that every n by n matrix has at least one such direction: there is some vector that the linear transformation defined by A just scales. Which is quite interesting, if you've ever thought about it before-- there's no reason such a vector should exist. Of course I'm lying a little bit, because these might be complex vectors. But at least in the complex world it's true. If you think about this, it's very helpful: from these vectors' point of view, the linear transformation is really easy to understand. That's why eigenvalues and eigenvectors are so good-- they break the linear transformation down into really simple operations. Let me formalize that a little more. In an extreme case, we call an n by n matrix A diagonalizable if there exists an orthonormal matrix U-- I'll say what that is-- such that A is equal to U times D times U inverse for a diagonal matrix D. Let me parse through this a little. What is an orthonormal matrix?
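A quick sketch of the defining relation A v = lambda v, using numpy's eigensolver on the earlier diagonal example:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam, V = np.linalg.eig(A)    # eigenvalues, and eigenvectors as columns of V
print(lam)                   # [2., 3.]
# Check the defining relation A v = lambda v for each pair:
for i in range(len(lam)):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))   # True, True
```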
It's a matrix defined by the relation U times U transpose is equal to the identity. What is a diagonal matrix? It's a matrix whose nonzero entries are all on the diagonal; all the rest are zero. Why is it so good to have this decomposition? What does it mean to have an orthonormal matrix like this? Let me just explain what's happening. If a matrix A is diagonalizable like this, there will be three directions, v_1, v_2, v_3, such that when you apply A, v_1 scales by some lambda_1, v_2 scales by some lambda_2, and v_3 scales by some lambda_3. So we can completely understand the transformation A just in terms of these three vectors. The material here will be the most important linear algebra you'll use throughout this course, so let me repeat it really slowly. An eigenvalue and eigenvector are defined by this relation. We know that every matrix has at least one eigenvalue, and an eigenvector corresponding to it. And eigenvectors have this geometric meaning: a vector is an eigenvector if the linear transformation defined by A just scales that vector. For our setting, the really good matrices are the ones that can be broken down into these directions. The directions are given by U, and D tells you how much each is scaled. So in this case U will be our v_1, v_2, v_3, and D will be our lambda_1, lambda_2, lambda_3 on the diagonal, all else 0. Any questions so far? So that was abstract. Now remember the question I posed in the beginning-- that matrix where we had stocks and dates and stock prices in the entries. What would an eigenvector of that matrix mean? What would an eigenvalue mean? Try to think about that question. It's not that it will have some physical counterpart, but there's something really interesting going on there. The bad news is that not all matrices are diagonalizable. If a matrix is diagonalizable, it's really easy to understand what it does, because it breaks down into these three directions-- n directions if it's n by n. Unfortunately, not all matrices are diagonalizable. But there is a very special class of matrices which are always diagonalizable, and fortunately we will see those matrices throughout the course. Most of the n by n matrices we will study fall into this category. An n by n matrix A is symmetric if A is equal to A transpose. Before proceeding, please raise your hand if you're familiar with all the concepts so far. OK. Good feeling. So a matrix is symmetric if it's equal to its transpose, which is obtained by taking the mirror image across the diagonal. And then Theorem 1: it is known that all symmetric matrices are diagonalizable. Ah, I've made another mistake-- orthonormally. With U orthonormal, the theorem says symmetric matrices are orthonormally diagonalizable. A matrix is plain diagonalizable if we drop the orthonormal condition and replace it with invertible. So symmetric matrices are really good, and fortunately most of the n by n matrices we will study are symmetric, just by the nature of things. The one I gave as an example is not symmetric, but I will address that issue in a minute. And another important thing, Theorem 2: symmetric matrices have real eigenvalues. So for symmetric matrices, this geometric picture is really the picture you should have in mind. Proof of Theorem 2: suppose lambda is an eigenvalue with eigenvector v. Then by definition we have A v = lambda v.
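A minimal check of these two theorems on a small symmetric example-- the matrix entries are arbitrary. numpy's eigh routine for symmetric matrices returns real eigenvalues and an orthonormal U with S = U D U^T:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])                       # real symmetric
lam, U = np.linalg.eigh(S)                       # real eigenvalues, orthonormal U
print(np.allclose(U @ np.diag(lam) @ U.T, S))    # True: S = U D U^T
print(np.allclose(U @ U.T, np.eye(2)))           # True: U is orthonormal
```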
Now multiply by the conjugate transpose of v on both sides: v-bar^T A v = lambda v-bar^T v = lambda times the norm of v squared. Now take the complex conjugate transpose of the whole equation. The left side becomes v-bar^T A-bar^T v-- but because A is real symmetric, the complex conjugate of A is A itself and A transpose is A, so this expression and the original expression are the same. The right side becomes lambda-bar times the norm of v squared. Since the left sides agree, the right sides should also be the same: lambda is equal to the conjugate of lambda. So lambda has to be real. Theorem 1 is a little more complicated, and it involves more advanced concepts like bases and linear subspaces, and so on. Those concepts are not really important for this class, so I'll skip the proof. But it's really important to remember these two theorems. Whenever you see a symmetric matrix, you should feel like you have control over it: you can diagonalize it, and moreover, all eigenvalues are real. So you have really good control over symmetric matrices. That's good-- that was when everything went well. So far we've seen that a symmetric matrix can be diagonalized, so it's really easy to understand. But what about general matrices? In general, not all matrices are diagonalizable, first of all. But sometimes we still want a decomposition like this. Diagonalization was A equals U times D times U inverse. We want something similar. Our goal is still to understand a given matrix A through simple operations, such as scaling. When the matrix was diagonalizable, this was possible. Unfortunately, it's not always diagonalizable, so we have to do something else. That's what I want to talk about. And luckily, the good news is there is a nice tool we can use for all matrices-- slightly weaker, in fact, than this diagonalization, but it still distills some very important information about the matrix. It's called the singular value decomposition. This will be our second tool for understanding matrices. It's very similar to diagonalization-- in other words, what I'd call the eigenvalue decomposition-- but it has a slightly different form. So what is its form? Theorem: let A be an m by n matrix. Then there always exist orthonormal matrices U and V such that A is equal to U times Sigma times V transpose, for some diagonal matrix Sigma. Let me parse through the theorem a little more. Whenever you're given a matrix-- it doesn't even have to be a square matrix anymore; it can be non-symmetric-- whenever you're given an m by n matrix, there always exist two orthonormal matrices U and V such that A can be decomposed as U times Sigma times V transpose, where Sigma is a diagonal matrix. But now the sizes of the matrices are important: U is an m by m matrix, Sigma is an m by n matrix, and V is an n by n matrix. Those are the dimensions. So what does it mean for an m by n matrix to be diagonal? It means the same thing: only the (i, i) entries are allowed to be nonzero. So that was just a bunch of words; let me rephrase. Let me compare the eigenvalue decomposition with the singular value decomposition. This is EVD, what we just saw before; this is SVD. EVD only works for n by n matrices which are diagonalizable. SVD works for all general m by n matrices. However, EVD is powerful, because it gives you one frame.
So it gives one frame v_1, v_2, v_3 on which A acts as a scaling operator-- that's what A does to each of them-- and that's because the U on both sides is the same. However, for the singular value decomposition-- I just erased it-- what you have instead is, first of all, that the spaces are different. You take a vector in R^n and bring it to R^m when you apply this operator A. What happens is there will be one frame here and one frame there: vectors v_1, v_2, v_3, v_4 in the domain, and vectors u_1, u_2, u_3 in the target. When you take v_1, A will take v_1 to u_1 and scale it a little, according to that diagonal. A will take v_2 to u_2 and scale it; it will take v_3 to u_3 and scale it. Wait a minute-- but for v_4, we don't have a u_4. What happens is that v_4 is just going to disappear: v_4, when you apply A, will be killed. I know it's a vague explanation, but try to compare these geometric pictures. Diagonalization, the eigenvalue decomposition, works within a single frame, so it's very, very powerful: you have some directions, and you scale those directions. The singular value decomposition is applicable to a more general class of matrices, but it's more restricted: you have two frames, one for the original space and one for the target space, and what the linear transformation does is send one frame's vectors to the other frame's vectors, scaling each a little bit. Now is another good time to go back to that matrix in the very beginning. Remember the example where we had companies and dates, and the entries were stock prices. If it's an n by n matrix, you can try to apply both the eigenvalue decomposition and the singular value decomposition-- but what will be more sensible in this case is the singular value decomposition. I won't explain why and what's happening here; Peter probably will, and you will come to it later. But just try to do some imagining before you hear what's really happening in the real world. Try to use your own imagination, your own language, to express what this decomposition is doing for this matrix. It looks like total nonsense-- why does this even have a geometry? Why does it define a linear transformation, and so on? But it's a beautiful theory which gives a lot of useful information. I can't emphasize it enough, because this is just universal-- the eigenvalue decomposition and the singular value decomposition are used across all of science. Not just in this course: it's safe to say that in pretty much every branch of engineering, you'll encounter one of these forms. So let me talk about the proof of the singular value decomposition, and then I will show you what it does for an example matrix, a matrix that I chose. The proof is interesting: it relies on the eigenvalue decomposition. Given a matrix A, consider A transpose A. First observation: that's a symmetric matrix. So, if you remember, it has real eigenvalues and is diagonalizable. So A^T A has eigenvalues lambda_1, lambda_2, up to lambda_n-- it's an n by n matrix-- and corresponding eigenvectors v_1, v_2, up to v_n. For convenience, I will cut it off at lambda_r and assume all the rest are 0. There might be none which are 0; in that case we use all the eigenvalues.
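Before the proof, a sanity check of the theorem's statement and the matrix sizes, using numpy's built-in SVD on a random m by n matrix-- the 3 by 5 shape is an arbitrary choice:

```python
import numpy as np

A = np.random.randn(3, 5)              # any m x n matrix works
U, s, Vt = np.linalg.svd(A)            # full SVD: U (3x3), s, V^T (5x5)
Sigma = np.zeros((3, 5))               # diagonal m x n Sigma
Sigma[:3, :3] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))  # True: A = U Sigma V^T
```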
But I'm only interested in the nonzero eigenvalues. So I'll say they're nonzero up to lambda_r, and afterwards it's 0. It's just a notational choice. And now I'm going to claim that they're all nonnegative. You can actually see it quickly: lambda_i times the norm of v_i squared equals v_i^T A^T A v_i, which is the norm of A v_i squared, and that can't be negative. Then if that's the case, we can rewrite the eigenvalues as sigma_1^2, sigma_2^2, up to sigma_r^2, and 0. That was my first step. Step two is to define u_1 = A v_1 / sigma_1, u_2 = A v_2 / sigma_2, and so on up to u_r = A v_r / sigma_r. And then u_(r+1) up to u_m: complete the above into an orthonormal basis. For those who don't follow, just think of it as: we pick u_1 up to u_r first, and then pick the rest to fill out a basis. And you see why I only care about the nonzero eigenvalues-- I have to divide by the sigma values, and if one is zero, I can't do the division. That's why I identified the ones which are not zero. And then we're done. It doesn't look at all like we're done, but let my U be the matrix with columns u_1, u_2, up to u_m, and my V be the matrix with columns v_1, v_2, up to v_r, and then v_(r+1) up to v_n-- again, completed into a basis. Now let's see what happens. (I mixed up the order and the transposes on the board at first-- thank you for the correction. What we want to compute is U transpose times A times V, which is the same statement as A = U Sigma V^T once you move the orthonormal matrices to the other side.) So U^T A V has rows u_1^T up to u_m^T on the left, and A V on the right has columns A v_1, A v_2, up to A v_n. By the definition of the u's, A v_1 = sigma_1 u_1, A v_2 = sigma_2 u_2, up to A v_r = sigma_r u_r, and the rest are zero. Now let's do a few computations. The (1,1) entry is u_1^T times A v_1, which is u_1^T times sigma_1 u_1-- that will be sigma_1. If you look at the next entry, u_1^T times A v_2, you get sigma_2 times u_1^T u_2, and I claim this is equal to 0. Why is that the case? u_1^T is equal to v_1^T A^T over sigma_1, and sigma_2 u_2 is equal to A v_2, so substituting, the entry becomes v_1^T A^T A v_2 over sigma_1. But v_1 and v_2 are two different eigenvectors of the matrix A^T A-- at the beginning we took an orthonormal decomposition of A^T A. So this is v_1^T times lambda_2 v_2 over sigma_1, which is lambda_2 over sigma_1 times v_1^T v_2. These two are orthogonal, so it gives 0. So if you do the whole computation, what you get is sigma_1, sigma_2, up to sigma_r on the diagonal, and then 0 for all the rest. Sorry for the confusion-- actually the process is quite simple; I was just lost in the computation in the middle.
So the process is: first look at A transpose A; find its eigenvalues and eigenvectors; those define the matrix V; and you define the matrix U by applying A to the columns of V and dividing by the sigmas. Each of those defines a column of U. The reason I wanted to go through this proof is because it gives you a procedure for finding a singular value decomposition. It was a little painful for me, but if you have a matrix, these are simple steps you can follow to find the singular value decomposition. Look at this matrix, find its eigenvalues and eigenvectors, and arrange them in the right way-- of course, the right way takes some practice to get correct. But once you do that, you obtain a singular value decomposition. And really, I can't explain how powerful it is. Only later in the course will you see how powerful this decomposition is, and only then will you appreciate how good it is to have it and to be able to compute it so simply. So let's try to do it by hand. Yes? STUDENT: So when you compute the [INAUDIBLE]. PROFESSOR: Yes. STUDENT: [INAUDIBLE] PROFESSOR: It would have to be orthonormal, yeah. These should be orthonormal, and these also. And that's a good point, because that can be annoying when you want to do this decomposition by hand-- you have to do some Gram-Schmidt process or something like that. By hand, I don't really mean by hand, other than when you're doing homework, because you can use a computer to do it. In fact, if you use a computer, there are much better algorithms than this, which can do it a lot more quickly and efficiently. So let's try to do it by hand. Let A be this matrix: [3, 2, 2; 2, 3, -2]. We want to find the singular value decomposition of this. A transpose A-- we have to compute that-- is A^T times [3, 2, 2; 2, 3, -2], and you get [13, 12, 2; 12, 13, -2; 2, -2, 8]. And let me just say that the eigenvalues are 0, 9, and 25. So in this algorithm, sigma_1^2 will be 25, sigma_2^2 will be 9, and sigma_3^2 will be 0. We can take sigma_1 to be 5, sigma_2 to be 3, sigma_3 to be 0. Now we have to find the corresponding eigenvectors to find the singular value decomposition. I'll just do one, to remind you how to find an eigenvector. A^T A minus 25 I: subtract 25 from the diagonal entries, and you get [-12, 12, 2; 12, -12, -2; 2, -2, -13]. Then you have to find the vector which annihilates this matrix. After normalizing, I can take that vector to be [1 over square root of 2, 1 over square root of 2, 0]. Then just do the same for the other eigenvalue: you find v_2 to be [1 over square root of 18, negative 1 over square root of 18, 4 over square root of 18]. Then find v_3, the one that annihilates for eigenvalue 0-- but I'll just call it (x, y, z). This will not be important, and I'll explain why. Then our V-- actually on the board it was transposed, so I will transpose it-- has columns v_1 = [1/sqrt(2), 1/sqrt(2), 0], v_2 = [1/sqrt(18), -1/sqrt(18), 4/sqrt(18)], and (x, y, z). And U will be defined by u_1 and u_2, where u_1 is A v_1 over sigma_1 and u_2 is A v_2 over sigma_2. So multiply A by these vectors and divide by the sigmas to get U. I already did the computation for you: u_1 comes out to [1/sqrt(2), 1/sqrt(2)], and u_2 comes out to [1/sqrt(2), -1/sqrt(2)]. Yes? STUDENT: How did you get v_1? PROFESSOR: v_1?
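A quick machine check of this worked example. Note that numpy orders eigenvalues ascending and may flip the signs of eigenvectors, but the singular values 5 and 3 come out the same as the hand computation:

```python
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])
lam, V = np.linalg.eigh(A.T @ A)   # eigenvalues of A^T A
print(np.round(lam))               # [0., 9., 25.], as on the board
U, s, Vt = np.linalg.svd(A)
print(s)                           # [5., 3.]: the nonzero singular values
```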
So if you did the computation right in the beginning to get the eigenvalues, then A transpose A minus 25I has to not have full rank. So there has to be a vector v which, when multiplied by this matrix, gives the [0, 0, 0] vector. So you write it as [a, b, c], set the product equal to [0, 0, 0], and just solve the system of linear equations. There will be several solutions — for example, we can take [1, 1, 0] as well. But I just normalized it to have norm one. So there's a lot of work involved if you want to do it by hand, even though you can do it. You have to find eigenvalues, find eigenvectors — in this case, three of them — and then you have to do more work, and more work. But it can be done. And we are done now. So this decomposes A into U Sigma V transpose. U is given as [1 over square root of 2, 1 over square root of 2; 1 over square root of 2, minus 1 over square root of 2]. Sigma has the singular values 5 and 3 on its diagonal, and then a 0. And V is this, so V transpose is just the transpose of that. I'll just write it like that, where V is that. So we have this decomposition. And let me actually write it out, because I want to show you why (x, y, z) is not important. V transpose is [1 over square root of 2, 1 over square root of 2, 0; 1 over square root of 18, minus 1 over square root of 18, 4 over square root of 18; x, y, z]. The reason I'm saying this is not important is because I can just drop-- oh, what did I do here? It has to be 2 by 3. I can drop this column of Sigma and drop this row of V transpose altogether. So the message here is that the eigenvectors corresponding to the eigenvalue zero are not important. The only relevant ones are the nonzero eigenvalues. So drop this, and drop this. That will save you some computation. So let me state a different form of singular value decomposition — this works in general. There's a corollary: we get a simplified form of SVD, where A becomes equal to U times Sigma times V transpose. A was an m by n matrix. U is still an m by m matrix. But now Sigma is also an m by m matrix — this only works when m is less than or equal to n — and V transpose is an m by n matrix, so V is n by m. The proof is exactly the same, and the last step is just to drop the irrelevant information. So I will not write down why it works, but if you go through it, you'll see that dropping this part corresponds exactly to that information. So that's the reduced form. So let's see. In the beginning we had A. I erased A; A was the 2 by 3 matrix in the beginning. And we obtained the decomposition into a 2 by 2, a 2 by 2, and a 2 by 3 matrix. If we hadn't deleted the third column of Sigma and the third row of V transpose, we would have obtained a 2 by 2, times 2 by 3, times 3 by 3 matrix. But now we can simplify it by removing those. And it might not look that much different on this board, because I just erased one row. But many matrices that you'll see in real applications have much lower rank than the number of columns and rows. So if the rank r is a lot smaller than both m and n — it's not obvious here, but if m and n have a big gap — the amount you save can be enormous. So to illustrate with an example, look at the stock prices, where you have companies and dates. Previously I gave an example of a 3 by 3 matrix, but it's more sensible to have a lot more dates than companies. So let's say you recorded 365 days of a year, even though the market is not open all days, and just five companies. If you did a decomposition like this, you'd have a 5 by 5, a 5 by 365, and a 365 by 365 matrix here. You can check all of this on a computer — see the short sketch below.
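Here is a minimal numpy sketch of the construction just described, using the worked 2 by 3 example. It is an illustration of the eigendecomposition route, not the production algorithm — numpy's built-in svd routine is what one would actually use — and the tolerance 1e-10 for deciding which singular values count as zero is an arbitrary choice for this example.

import numpy as np

A = np.array([[3., 2., 2.],
              [2., 3., -2.]])

# Step 1: eigendecomposition of A^T A (symmetric, so eigh applies).
lam, V = np.linalg.eigh(A.T @ A)            # eigenvalues come out ascending
order = np.argsort(lam)[::-1]               # reorder descending: 25, 9, 0
lam, V = lam[order], V[:, order]
sigma = np.sqrt(np.clip(lam, 0.0, None))    # singular values 5, 3, 0

# Step 2: u_i = A v_i / sigma_i, for the nonzero singular values only.
r = int(np.sum(sigma > 1e-10))              # rank, here 2
U = A @ V[:, :r] / sigma[:r]

# Reassemble the reduced form and compare with A.
print(np.allclose((U * sigma[:r]) @ V[:, :r].T, A))   # True
print(np.round(sigma, 6))                             # [5. 3. 0.]

# The library routine does the same job, more efficiently; with
# full_matrices=False it returns the reduced shapes directly.
U2, s2, Vt2 = np.linalg.svd(A, full_matrices=False)
print(U2.shape, s2.shape, Vt2.shape)        # (2, 2) (2,) (2, 3)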
But now, in the reduced form, you're saving a lot of space. So if you just look at the board, it doesn't look so powerful, but in fact it is. So that's the reduced form, and that will be the form you'll see most of the time. So I made a lot of mistakes today. I have one more topic — a totally unrelated topic. So any questions before I move on to the next topic? Yes? STUDENT: [INAUDIBLE] PROFESSOR: Can you press the button? STUDENT: [INAUDIBLE] PROFESSOR: Oh, so in this data, what it means — you're asking what the eigenvectors will mean over this data? It will give you some stocks; it will give you, roughly, the correlation. Each eigenvector will give you a group of companies that are correlated somehow. It measures their correlation with each other. I don't have a very good explanation of what its physical meaning is. Maybe you can add a little bit more. GUEST SPEAKER: Possibly. We will get into this in later lectures. But in the singular value decomposition, what you want to think is that these orthonormal matrices are really defining a new basis — an orthogonal basis. So you're taking the original coordinate system and rotating it, without stretching or squeezing the data. You're just rotating the axes. An orthonormal matrix gives you the cosines of the new coordinate system with respect to the old one. And so the singular value decomposition is simply rotating the data into a different orientation. The orthonormal basis that you're transforming to gives you the coordinates of the original data in the transformed system. So as Choongbum was commenting, you're essentially looking at a representation of the original data points in a linearly transformed space, and the correlations between different stocks, say, are represented by how those points are oriented in the transformed space. PROFESSOR: So you'll have to see real data to really make sense out of it. But another way to think of it is where it comes from. All of this singular value decomposition, if you remember the proof, comes from the eigenvectors and eigenvalues of A transpose A. Now look at A transpose A — or I'll just say A times A transpose; it's pretty much the same. If you look at A times A transpose, you're going to get an m by m matrix, and it'll be indexed on both sides by these companies. The numbers in it will represent how much the companies are related to each other, how much correlation they have between each other. So by looking at the eigenvectors of this matrix, you're looking at the correlation between these companies' stock prices. And that information is represented inside the singular value decomposition. But again, it's a lot better to understand once you have real numbers and real data, which you will have later. So please be excited and wait — you're going to see some cool stuff. So that was all for eigenvalue decomposition and singular value decomposition. And the last thing I want to mention today is something called the Perron-Frobenius theorem. This one looks even more theoretical than the ones I showed you. But surprisingly, a few years ago Steve Ross — he's a faculty member in the business school here — found a very interesting result, called the Ross recovery theorem, that makes use of the Perron-Frobenius theorem that I will tell you about today. Unfortunately, you will only see a lecture on the Ross recovery theorem towards the end of the semester.
So I will try to recall what it is later. But since we're talking about linear algebra today, let me introduce the theorem. This is called Perron-Frobenius. And you really won't believe that it has any applications in finance, because it just looks so theoretical. I'm only stating a really weak form. Weak form: let A be an n by n symmetric matrix whose entries are all positive. Then there are a few properties it has. First, there exists a largest eigenvalue lambda_0 such that the absolute value of lambda is strictly less than lambda_0 for all other eigenvalues lambda. This statement is really easy for a symmetric matrix — you can actually drop "symmetric," but I stated it this way because I'm only going to prove the weak case. Just think about the statement when it's not symmetric: if you have an n by n matrix whose entries are all positive, then there exists a real eigenvalue lambda_0 such that the absolute values of all other eigenvalues are strictly smaller than it. And remember that if the matrix is not symmetric, the eigenvalues can be complex. So this is saying that there is a unique eigenvalue of largest absolute value, and moreover, it's a real number. Second part: there exists an eigenvector with positive entries corresponding to lambda_0. And the third part: lambda_0 is an eigenvalue of multiplicity 1, for those who know what that means. So this really is a unique eigenvalue, with a unique eigenvector which has positive entries, and it's really larger than the other eigenvalues. From the mathematician's point of view, this has many applications — in probability theory, for example. My main research area is combinatorics, discrete mathematics, and it's also used there. So from the theoretical point of view, this has been used in many contexts. It's not a standard theorem taught in linear algebra, so I think most of you probably haven't seen it before. But it's a well-known result with many theoretical uses. And you'll also see one use later, as I mentioned, in finance, which is quite surprising. So let me just give you some feeling for why it happens. I won't give the full details of the proof, just a very brief description. Sketch, when A is symmetric — just the simple case. In this case, first of all, A has real eigenvalues; I'll say they are lambda_1 down to lambda_n. Up to some lambda_i they are greater than zero; past that, they are smaller than zero. There are some positive eigenvalues and some negative eigenvalues. That's observation one — things are easier to control, because they are all real. The first statement of the theorem says that — maybe I should have indexed the largest one as lambda_0; I'll just call it lambda_0 — this lambda_0 is in fact larger in absolute value than lambda_n. That's the content of the first bullet: if all the entries are positive, then the largest positive eigenvalue dominates the most negative eigenvalue. So why is that the case? To see that, you have to go through several steps. So observation two: lambda_0 has an eigenvector with positive entries. Why is that the case? That's because, if you look at A times v equals lambda times v — if v — let me state it this way.
Lambda_0 here is the maximum of all the eigenvalues — the largest one. So look at its eigenvector v: if v has a negative entry, flip it. Flip the sign of that entry, and in this way obtain a new vector v prime. Since A has positive entries, what we conclude is that v prime transpose A v prime will be larger than v transpose A v — think about it: because A has positive entries, a negative entry somewhere decreases the magnitude, so if you flip the sign, the magnitude should increase. And this cannot happen — this should not happen — because the eigenvector of the largest eigenvalue already maximizes that quantity over unit vectors. That's where the positive-entries assumption is used: if A has positive entries, then the top eigenvector should have positive entries as well. I will not work through the details of the rest; I will post them in the lecture notes. But really, this theorem can be stated in a lot more generality than this — I'm stating only a very weak form. The matrix doesn't have to have all positive entries; it only has to be something called irreducible, which is a concept from probability theory, from Markov chains. But here we will only use it in this setting. So I will review it later, before it's really being used. But just remember how these positive entries kick into this kind of statement — why there is a largest eigenvalue, and why there has to be an eigenvector with all positive entries. Those will all come into play later. You can also check the statement numerically — see the short sketch below. So I think that's it for today. If you have any last-minute questions? If not, I will see you on Thursday.
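A quick numerical check of the weak form stated above — a minimal numpy sketch, not a proof; the matrix size and random seed are arbitrary illustration choices, and the matrix is deliberately not symmetric:

import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 6))      # n by n, all entries positive

lam, vecs = np.linalg.eig(A)
i = int(np.argmax(np.abs(lam)))             # index of the dominant eigenvalue

# 1. The dominant eigenvalue is real...
print(abs(lam[i].imag) < 1e-12)             # True
# ...and strictly larger in absolute value than all the others.
others = np.delete(lam, i)
print(np.all(np.abs(others) < abs(lam[i]))) # True

# 2. Its eigenvector can be taken to have all positive entries
#    (eigenvectors are only defined up to sign and scale).
v = vecs[:, i].real
v = v if v[0] > 0 else -v
print(np.all(v > 0))                        # True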
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
17_Stochastic_Processes_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: And today it's me, back again. And we'll study continuous-time stochastic processes. So far we were discussing discrete-time processes. We studied the basics like variance, expectation, all this stuff — moments, moment generating functions — and some important concepts: Markov chains and martingales. I'm sure a lot of you will have forgotten what martingales and Markov chains were, but try to review them before the next few lectures. Because starting next week, when we start discussing continuous-time stochastic processes — not from me; you're not going to hear about martingales from me that much — people, say outside speakers, are going to use this martingale concept to do pricing. So I will give you some easy exercises; you will have some problems on martingales. Just refer back to the notes I gave, like a month ago, and review. They won't be difficult problems, but try to get comfortable with the concept. OK. And then Peter taught some time series analysis. A time series is just the same as a discrete-time process. And regression analysis — this was all done in discrete time. That means the underlying space was x_1, x_2, x_3, dot dot dot, x_t. But now we're going to talk about continuous-time processes. What are they? They're just a collection of random variables indexed by time — but now time is a real variable. Before, time took integer values; here, we have a real variable. So a stochastic process develops over time, and the time variable is continuous now. It doesn't necessarily mean that the process itself is continuous — it may very well have a lot of jumps. It just means that the underlying time variable is continuous, whereas in discrete time you were only looking at observations at specific times. I'll draw it here; discrete time looks more like that. OK. So the first difficulty when you try to understand continuous-time stochastic processes is: how do you describe the probability distribution? So let's go back to discrete-time processes. The universal example was a simple random walk. And if you remember, the way we described it was: X_t minus X_(t-1) is either 1 or minus 1, with probability one half each. This was how we described it. And if you think about it, this is a slightly indirect way of describing the process. You're not describing the probability of this process following this path — it's like a path. Instead, what you're doing is describing the probability of this event happening: from time t to t plus 1, what is the probability that it will go down? You describe the probability at each step, and altogether, when you combine them, you get the probability distribution over the process. But you can't do that for continuous time, right? The time variable is continuous, so you can't just take times t and t prime and describe the difference. If you wanted to do that, you would have to do it infinitely many times — for all possible values. That's the first difficulty. Actually, that's the main difficulty. And how can we handle this? It's not an easy question. And you'll see a very indirect way to handle it.
It's somewhat in the spirit of this. But it's not like you draw some path omega and describe the probability density of that path — what is the probability density at omega? Of course, it's not a discrete variable, so you would have a probability density function, not a probability mass function. In fact, can we even write it down? You'll see later that we won't even be able to write this down. So just keep this in mind and you'll see what I was trying to say. So finally, I get to talk about Brownian processes — Brownian motion. Some outside speakers already started talking about it; I wish I had been able to cover it before they talked about it, but you'll see a lot more from now on. And let's see what it actually is. It's described as follows — it actually comes out of a theorem. There exists a probability distribution over the set of continuous functions from the non-negative reals to the reals such that, first, B(0) is always 0: the probability that B(0) equals 0 is 1. Number two — we call this stationary increments: for all s less than t, B(t) minus B(s) has normal distribution with mean 0 and variance t minus s. And the third — independent increments: if the intervals [s_i, t_i] are not overlapping, then the B(t_i) minus B(s_i) are independent. So it's actually a theorem saying that there is some strange probability distribution over the continuous functions from the non-negative reals to the reals. If you look at some continuous function, this theorem gives you a probability distribution; it describes the probability of this path happening. Well, it doesn't really describe it. It just says that there exists some distribution such that the path always starts at 0 and is continuous. Second, for all fixed s less than t, the distribution of this difference is normally distributed with mean 0 and variance t minus s, which scales with the time. And then third, independent increments means what happened on this interval, [s1, t1], and on [s2, t2] — this part and this part — are independent, as long as the intervals do not overlap. It sounds very similar to the simple random walk. But the reason we have to do this very complicated thing is that time is continuous. You can't describe what's happening at each time. Instead, what you're describing is what happens over all possible intervals. For a fixed interval, it describes the probability distribution; and when you have several intervals, as long as they don't overlap, they're independent. OK? And then, by this theorem, we call this probability distribution a Brownian motion. So the definition: the distribution given by this theorem is called the Brownian motion. That's why I'm saying it's indirect. I'm not saying Brownian motion is some explicitly given probability distribution — it satisfies these conditions, and we are reversing it. We have these properties in mind, and we're not even sure a priori whether such a probability distribution exists. And actually this theorem is very, very difficult. I don't know how to prove it right now; I would have to go through a book. Even graduate probability courses usually don't cover it, because it's really technical. That just shows how continuous-time stochastic processes can be so much more complicated than discrete time. Then why are we studying continuous-time processes when they're so complicated? Well, you'll see in the next few lectures. Any questions? OK. So let's go through this a little bit more. AUDIENCE: Excuse me.
PROFESSOR: Yes. AUDIENCE: So when you talk about the probability distribution, what's the underlying space? Is it the space of-- PROFESSOR: Yes, that's a very good question. The space is the space of all continuous functions — the space of all possible paths, if you want to think about it that way. Just think about all possible ways your variable can evolve over time. And for some fixed drawing of a path, there's some probability that this path will happen. It's not the kind of probability space you've been looking at — a point is now a path, and your probability distribution is given over paths, not over fixed points. That's also a reason it's so complicated. Other questions? So the main thing to remember — intuitively you will just know it — is this property: over an interval, the increment is like a normal variable. So this is a collection of a bunch of normal variables. The mean is always 0, but the variance is determined by the length of the interval: exactly that is the variance. So try to remember this property. A few more things. It has a lot of different names — it's also called the Wiener process. And let's see, I thought I had one more name in mind, but maybe not. AUDIENCE: Norbert Wiener was an MIT professor. PROFESSOR: Oh, yeah. That's important. AUDIENCE: Of course. PROFESSOR: Yeah, a professor at MIT. But apparently he wasn't the first person who discovered this process. It was someone else, in 1900 — Louis Bachelier. And actually, in that first paper — of course, they didn't know about each other's results — the reason he studied this was to evaluate stock prices and option prices. And here's another, maybe more intuitive, description of the Brownian motion. Here is the philosophy: Brownian motion is the limit of simple random walks. "The limit" is a very vague concept; you'll see what I mean by this. So fix a time interval from 0 up to 1 and slice it into very small pieces — into n pieces: 1 over n, 2 over n, 3 over n, dot dot dot, up to n minus 1 over n. And consider an n-step simple random walk: from time 0 you go up or down, up or down, and you get something like that. OK? So let me be a little more precise. Let Y_0, Y_1, up to Y_n be a simple random walk, and let Z be the function such that at time t over n, Z equals Y_t divided by the square root of n — the division by the square root of n is the scaling that makes the variance at time 1 come out to 1. So this process Z takes a simple random walk and rescales it so that it runs from time 0 to time 1. And for the intermediate values — values that are not of this form — just linearly extend. It's a complicated way of saying: just connect the dots. And take n to infinity. Then the resulting distribution is a Brownian motion. So mathematically, that's just saying the limit of simple random walks is a Brownian motion. But it's more than that. It means that if you have some suspicion that some physical quantity follows a Brownian motion, and you observe the variable at discrete times at very, very fine scales — so you observe it really, really often, like a million times in one second — and what you see, taken to the limit, looks like a simple random walk, then you can conclude that it's a Brownian motion. You can see this limiting picture in a quick simulation — there's a sketch below.
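Here is a minimal sketch of that limiting picture, assuming numpy: simulate many n-step simple random walks, rescale time to [0, 1] and space by 1 over the square root of n, and check that the result behaves like the Brownian motion properties above. The grid size, trial count, and seed are arbitrary illustration choices.

import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 5000

# +1/-1 steps with probability one half each; scale space by 1/sqrt(n).
steps = rng.choice([-1.0, 1.0], size=(trials, n))
Z = np.cumsum(steps, axis=1) / np.sqrt(n)    # Z[:, k] is the value at time (k+1)/n

# Endpoint Z(1) = Y_n / sqrt(n) should look like B(1) ~ N(0, 1):
print(Z[:, -1].mean(), Z[:, -1].var())       # about 0 and 1

# Increments over disjoint intervals: variances about 0.25 and 0.5
# (the interval lengths), and nearly zero correlation.
inc1 = Z[:, 499] - Z[:, 249]                 # time 0.25 to 0.5
inc2 = Z[:, 999] - Z[:, 499]                 # time 0.5 to 1.0
print(inc1.var(), inc2.var(), np.corrcoef(inc1, inc2)[0, 1])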
What I'm trying to say is this continuous time process, whatever the strange thing is, it follows from something from a discrete world. It's not something new. It's the limit of these objects that you already now. So this tells you that it might be a reasonable model for stock prices because for stock prices, no matter how-- there's only a finite amount of time scale that you can observe the prices. But still, if you observe it infinitely as much as you can, and the distribution looks like a Brownian motion, then you can use a Brownian motion to model it. So it's not only the theoretical observation. It also has implication when you want to use Brownian motion as a physical model for some quantity. It also tells you why Brownian motion might appear in some situations. So here's an example. Here's a completely different context where Brownian motion was discovered, and why it has the name Brownian motion. So a botanist-- I don't know if I'm pronouncing it correctly-- named Brown in the 1800s, what he did was he observed a pollen particle in water. So you have a cup of water and there's some pollen. Of course you have gravity that pulls the pollen down. And pollen is heavier than water so eventually it will go down, eventually. But that only explains the vertical action, it will only go down. But in fact, if you observe what's happening, it just bounces back and forth crazily until it finally reaches down the bottom of your cup. And this motion, if you just look at a two-dimension picture, it's a Brownian motion to the left and right. So it moves as according to Brownian motion. Well, first of all, I should say a little bit more. What Brown did was he observed it. He wasn't able to explain the horizontal actions because he only understood gravity, but then people tried to explain it. They suspected that it was the water molecules that caused this action, but weren't able to really explain it. But the first person to actually rigorously explain it was, surprisingly, Einstein, that relativity guy, that famous guy. So I was really surprised. He's really smart, apparently. And why? So why will this follow a Brownian motion? Why is it a reasonable model? And this gives you a fairly good reason for that. This description, where it's the limit of simple random walks. Because if you think about it, what's happening is there is a big molecule that you can observe, this big particle. But inside there's tiny water molecules, tiny ones that don't really see, but it's filling the space. And they're just moving crazily. Even though the water looks still, what's really happening is these water molecules are just crazily moving inside the cup. And each water molecule, when they collide with the pollen, it will change the action of the pollen a little bit, by a tiny amount. So if you think about each collision as one step, then each step will either push this pollen to the left or to the right by some tiny amount. And it just accumulates over time. So you're looking at a very, very fine time scale. Of course, the times will differ a little bit, but let's just forget about it, assume that it's uniform. And at each time it just pushes to the left or right by a tiny amount. And you look at what accumulates, as we saw, the limit of a simple random walk is a Brownian motion. And that tells you why we should get something like a Brownian motion here. 
So the action of the pollen particle is determined by infinitesimal — I don't know if that's the right word — but just, quote, "infinitesimal" interactions with water molecules. That explains, at least intuitively, why it follows Brownian motion. And the second example — any questions here? — is stock prices. At least to give you some reason that Brownian motion is not so bad a model for stock prices. Because if you look at a stock price S, the price is determined by buying actions and selling actions. Each action pushes the price down or pulls it up a little. And if you look at very, very tiny scales, what's happening is that at each tiny moment the price goes up or down by a tiny amount. Of course, it doesn't go up and down by a uniform amount, but just forget about that technicality. It just bounces back and forth very often, and you're taking these tiny scales to be tinier and tinier. So again, you see this limiting picture: something looking like a discrete random walk, taken to the limit. So if that's the only action moving the price, then Brownian motion will be the right model to use. Of course, there are many other things involved which make prices deviate from Brownian motion, but at least theoretically it's a good starting point. Any questions? OK. So you've seen Brownian motion. You already know that it's used in the financial markets a lot. It's also used in science and other fields. And really big names, like Einstein, are involved. So it's a really, really important theoretical object. Now that you've learned it, it's time to get used to it. So I'll tell you some properties, and actually prove a little bit — just some propositions to show you some properties. Some of them are quite surprising if you've never seen them before. OK. So here are some properties. One: it crosses the x-axis — or I should say the t-axis — infinitely often. Because it starts from 0, it will not run off to plus infinity or minus infinity; it stays balanced between positive and negative and crosses zero infinitely often. Two: it does not deviate too much from the curves y equals plus or minus the square root of t — equivalently, t equals y squared. Now, this is a very vague statement. What I'm trying to say is: draw this curve. If you start at time 0, then at time t_0 the distribution of B(t_0) is a normal random variable with mean 0 and variance t_0, so the standard deviation is the square root of t_0. The typical value will be on the order of the standard deviation — it can be two or three times this, but it won't really be a hundred times that. So most likely it will look something like that: it plays around this curve a lot, but it keeps crossing the axis; it goes back and forth. (You can check these two properties in a small simulation — see the sketch below.) What else? The third one is really interesting. It's of more theoretical interest, but it also has real-life implications: it's nowhere differentiable. So this curve, whatever it is, is a continuous path, but it's not differentiable anywhere — really surprising. It's hard to imagine even one such path. What this is saying is that if you take one path at random according to this probability distribution, then with probability one you obtain a path which is nowhere differentiable. That just sounds nice, but why does it matter? It matters because we can't use calculus anymore — all the theory of calculus is based on differentiation.
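A minimal sketch of the first two properties, assuming numpy: simulate Brownian paths on a grid and look at how large |B(t)| is relative to the square root of t, and how often a path changes sign. The grid size, the factor 3, and the seed are illustration choices.

import numpy as np

rng = np.random.default_rng(2)
n, trials, T = 5000, 1000, 1.0
dt = T / n

# Brownian paths as cumulative sums of independent N(0, dt) increments.
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(trials, n)), axis=1)

# Property two: at the fixed time T, |B(T)| <= 3*sqrt(T) almost always.
print(np.mean(np.abs(B[:, -1]) <= 3 * np.sqrt(T)))   # about 0.997

# Property one, suggestively: a single path keeps changing sign,
# and the count of crossings grows as the grid is refined.
s = np.sign(B[0])
print(int(np.sum(s[1:] != s[:-1])))                  # many zero crossings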
However, our paths have some nice features — this object is universal, and it appears in very different contexts — but if you want to do analysis on it, it's just not differentiable. So the standard tools of calculus can't be used here, which is quite unfortunate if you think about it. You have this nice model, which can describe many things, and you can't really do analysis on it. We'll later see that there is actually a variant, a different calculus, that works. And I'm sure many of you will have heard about it: it's called Ito's calculus. So we have this nice object; unfortunately, it's not differentiable, so standard calculus does not work here. However, there is a modified version of calculus, called Ito's calculus, which extends classical calculus to this setting. And it's really powerful and really cool. But unfortunately, we don't have that much time to cover it. I will only be able to tell you really basic properties and basic computations, and you'll see how this calculus is used in the financial world in the coming lectures. But before going into Ito's calculus, let's talk about the properties of Brownian motion a little more, because we have to get used to it. Suppose I'm using Brownian motion as a model of a stock price — say, the daily stock price. The market opens at 9:30 AM and closes at 4:00 PM. The price starts at some value and then moves according to the Brownian motion. And you want the distribution of the minimum value and the maximum value of the stock over the day. These are very useful statistics. So let's compute it — we can actually compute it. I'll just do the maximum. Define M(t) to be the maximum over s less than or equal to t of B(s). So I define this new process from the Brownian motion, and I want to compute the distribution of this new stochastic process. And here's the theorem: for all t and all positive a, the probability that M(t) is greater than a is equal to 2 times the probability that B(t) is greater than a. It's quite surprising — if you just look at it, there's no reason to expect that such a nice formula should exist at all. And notice that the maximum is always at least 0 (it starts at 0), so we don't have to worry about negative values. How do we prove it? Proof. Define tau_a — it's a stopping time, if you remember what that is — as the minimum t such that the Brownian motion at time t equals a. That's a complicated way of saying: record the first time you hit the line at height a. Draw the line a, some Brownian motion, and record this time; that will be your tau_a. So now here's the key step: the probability that B(t) is greater than B(tau_a), given that tau_a is less than t, equals the probability that B(t) is less than B(tau_a), given the same event — each is one half. What this is saying is: if your Brownian motion hit the line a before time t, then afterwards it has the same probability of ending up above a as of ending up below a. The reason is that you can just reflect the path: whatever path ends up above a, you can reflect the part after tau_a to obtain a path that ends below a. By symmetry, you have this property. It's not obvious yet how we'll use this. And then we're almost done. The probability that the maximum at time t is greater than a is equal to the probability that the stopping time is less than t, just by definition.
And that's equal to 2 times the probability that B(t) minus B(tau_a) is positive and tau_a is less than t — because if you know that tau_a is less than t, there are only two possibilities: the path ends up above a afterwards, or below a, and these two have the same probability. Then, since B(tau_a) is always equal to a whenever tau_a is less than t, I can rewrite this as 2 times the probability that B(t) is greater than a and tau_a is less than t. And then I can just drop the tau_a part: if I already know that B(t) is greater than a, then I know that tau_a is less than t — because, by continuity, if you want to go above a, you have to hit a at some point before time t. So that event is already contained inside the other event, and you can get rid of it. What you obtain is 2 times the probability that B(t) is greater than a. Sorry — all of these should be intersections, not conditioning; something looked weird. OK, that makes more sense: just the intersection of the two events. Any questions here? So again, you want to compute the probability that the maximum is greater than a at time t. By the definition of tau_a, that's equal to the probability that tau_a is less than t. And if tau_a is less than t, then depending on what happens afterwards, the path either increases or decreases — only two possibilities, and these two events have the same probability by the reflection property. Here I wrote a bar (conditioning) and there an intersection, but it doesn't matter: if P(X_1 given Y) equals P(X_2 given Y), then, since P(X_i given Y) is P(X_i intersect Y) over P(Y), the P(Y)'s cancel, so P(X_1 intersect Y) equals P(X_2 intersect Y). So these two events have the same probability, and you can just take one of them — I'll take the one where the path goes above, so after tau_a it accumulates more value. Rewriting, that's the event that B(t) is greater than a and tau_a is less than t. But the tau_a part is now redundant: if you already know that B(t) is greater than a, tau_a has to be less than t. And that's just the conclusion — a nice result about the maximum over a time interval. And actually, I think Peter uses this distribution in his lecture, right? AUDIENCE: Yes. [INAUDIBLE] is the distribution of the max minus the min of the Brownian motion, and using that range of the process as a scaling [INAUDIBLE] to get more precise measures of volatility than just using, say, the close-to-close price [INAUDIBLE]. PROFESSOR: Yeah. That was one property. And another property — one I already told you, but now I'm going to prove it — is that at each time t, the Brownian motion is not differentiable at t, with probability equal to 1. Well, not very rigorously, but I will use the theorem above to prove it. OK? Suppose the Brownian motion has a derivative at time t, and it's equal to A. Then what you would see is that B(t plus epsilon) minus B(t) has to be at most about epsilon times A — not precisely, so I'll just say "almost"; it can be made mathematically rigorous. What I'm trying to say here is — is it the mean value theorem? — that from t to t plus epsilon, you expect to gain about A times epsilon.
OK? And in fact you should have this for all epsilon prime less than epsilon as well. So, in other words, the maximum of B(t plus epsilon prime) minus B(t) over epsilon prime up to epsilon — which, by the stationary increments, has the same distribution as M(epsilon) — has to be at most about epsilon times A. What I'm trying to say is: if the path were differentiable at t, then, depending on the slope, your Brownian motion would have to stay inside this cone from t up to time t plus epsilon. If you draw this slope, it must stay inside the cone. And I'm going to show that this cannot happen — to get from here to there, it would have had to pass this line at some point. So to do that, I look at the distribution of the maximum value over this time interval, and I want to say that it's bigger than that bound; if your maximum is bigger, you definitely can't have this control. So just compute it: the probability that M(epsilon) is greater than epsilon times A is equal to 2 times the probability that the Brownian motion at time epsilon is greater than epsilon times A. B(epsilon) has normal distribution with mean 0 and variance epsilon, so if you normalize to N(0, 1) — divide by the standard deviation — you get 2 times the probability that a standard normal is greater than the square root of epsilon times A. As epsilon goes to 0, the square root of epsilon times A goes to 0, so that probability goes to one half, and the whole thing goes to 1. (I flipped an inequality on the board at first — it should be "greater.") Now combine the two: if the path were differentiable at t, the maximum should have been at most about epsilon times A; but what we just saw is that, with probability tending to 1 as epsilon goes to 0, the maximum exceeds epsilon times A. Any questions? OK. So those are some interesting properties of Brownian motion. I have one final thing, and this one is really important theoretically — it will also be the main lemma for Ito's calculus. The theorem is called quadratic variation, and it says something that doesn't happen that often. Let me write it down clearly: with t_i equal to i times T over n, the sum over i from 0 to n minus 1 of (B(t_(i+1)) minus B(t_i)) squared converges to T as n goes to infinity. Now that's something strange. Let me first parse it before proving it. Think of it as being about just some function f. What is this quantity? It means that from 0 up to time T, you chop the interval into n pieces — T over n, 2T over n, 3T over n, and so on — you look at the function, record the differences between consecutive points, square them, and sum. And you let n go to infinity, taking smaller and smaller scales. What the theorem says is that for Brownian motion, this converges to T. Why is this strange? Assume f is a much better function — continuously differentiable, meaning it's differentiable and its derivative is continuous. Then let's compute the exact same quantity. Between time t_i and time t_(i+1), by the mean value theorem there exists a point s_i in that interval such that f(t_(i+1)) minus f(t_i) equals f prime of s_i times (t_(i+1) minus t_i). So the sum over i of (f(t_(i+1)) minus f(t_i)) squared equals the sum over i of f prime of s_i squared times (t_(i+1) minus t_i) squared, which is at most the maximum over 0 up to T of f prime of s squared, times the sum over i of (t_(i+1) minus t_i) squared.
Each consecutive difference is T over n. If you square it, that's T squared over n squared, and you have n of them, so you get T squared over n. So you get whatever that maximum is, times T squared over n, and if you take n to infinity, that goes to 0. So if you have a reasonable function, one which is differentiable, this variation — this is called the quadratic variation — is 0. All the classical functions you've been studying won't even have this quadratic variation. But for Brownian motion, what's happening is that it bounces back and forth so much that, even as you scale smaller and smaller, the variation is big enough to accumulate; it won't disappear as it would for a differentiable function. And that's pretty much a slightly stronger version of the statement that it's not differentiable. We saw that it's not differentiable, and this is a different way of saying it. It has very important implications. Another way to write it: dB squared is equal to dt. If you take the differential — whatever that means — the infinitesimal difference of each side: this part is dB squared, the Brownian increment squared; that part is dt. And we'll see that again. But before that, let's just prove the theorem. So we're looking at the sum of (B(t_(i+1)) minus B(t_i)) squared, where t_i is i over n times T, and i runs from 0 to n minus 1. OK. What's the distribution of each difference? AUDIENCE: Normal. PROFESSOR: Normal, with mean 0 and variance t_(i+1) minus t_i — which is just T over n. So I'll write it like this: the sum from i equals 0 to n minus 1 of X_i squared, where the X_i are independent normal variables with that distribution. OK? And what's the expectation of X_i squared? It's T squared over n squared. AUDIENCE: [INAUDIBLE]. PROFESSOR: Did I make a mistake somewhere? AUDIENCE: The expected value of X_i squared is the variance. PROFESSOR: It's T over n. Oh, yeah, you're right. Thank you. OK. So divide by n and multiply by n. What will this go to? AUDIENCE: [INAUDIBLE]. PROFESSOR: Remember the strong law of large numbers. You have a bunch of random variables X_i squared, independent, identically distributed, with mean T over n. You sum n of them and divide by n, and that average converges to the mean. Equivalently, write X_i as the square root of T over n times Z_i, where the Z_i are standard normals; then the sum of the X_i squared is T times (1 over n) times the sum of the Z_i squared, and by the law of large numbers that average converges to the expectation of Z squared, which is 1. So the whole thing converges to T — these random variables are accumulating the squared terms; that's what's happening. Just a nice application of the law of large numbers. To be precise, you have to use the strong law of large numbers. OK. So I think that's enough for Brownian motion. Any final question? OK. Now, let's move on-- AUDIENCE: I have a question. PROFESSOR: Yes. AUDIENCE: So this [INAUDIBLE], is it for all Brownian motions B? PROFESSOR: Oh, yeah. That's a good question. This is what happens with probability one. So always — I'll just say always; it's not in a very strict sense. If you take one path according to the Brownian motion, on that path you'll have this. No matter what path you get, it happens. AUDIENCE: With probability one. PROFESSOR: With probability one. So there's a hidden statement: with probability one.
And you'll see why you need this "with probability one" — it's because we're using a probability statement here. But for all practical purposes, "with probability one" just means always. Now, I want to motivate Ito's calculus. First of all, this. I was saying that Brownian motion, at least, is not so bad a model for stock prices. But if you remember what I said before, and what people actually do, a better way to describe it is: instead of the differences being normally distributed, what we want is the percentage change to be normally distributed. So for stock prices we want the percentage change to be normally distributed. In other words, you want to find the distribution of S_t such that the difference of S_t divided by S_t behaves like the increment of a Brownian motion: dS_t over S_t equals dB_t. That's the differential equation for it — the percentage change follows Brownian motion. That's what it's saying. Question: is S_t equal to e to the B_t? Because in classical calculus this is not an absurd thing to say: if you differentiate each side, what you get is dS_t equals e to the B_t times dB_t, and that's S_t times dB_t. It doesn't look wrong — actually, it looks right — but it's wrong, for reasons that you don't know yet, OK? So this is wrong, and you'll see why. First of all, Brownian motion is not differentiable, so what does that manipulation even mean? That means if you want to solve this equation — in other words, if you want to model this thing — you need something else. And that's where Ito's calculus comes in. OK, I'll try not to rush too much. So — now we're talking about Ito's calculus — here is the motivation. You have a function f; I'll call it a very smooth function. Think of the best function you can imagine, like an exponential function. Then you have a Brownian motion, and you apply this function: as the input, you put the Brownian motion, and you want to estimate the output. More precisely, you want to estimate infinitesimal differences of the output. Why would we want to do that? For example, f can be the payoff of an option. More precisely, let f be this: there is some level s_0; up to s_0 the value of f is 0, and after s_0 it's a line with slope 1 — so f of x is the maximum of x minus s_0 and 0. Then f of the Brownian motion is the value of the option at expiration, where T is the expiration time and s_0 is the strike: a call option. If your stock at time T goes over s_0, you make that difference; if it's below s_0, the option pays nothing — it looks like that. So that's like a financial derivative: you have an underlying stock, and some function applies to it. The financial asset you actually hold can be described as this function of the underlying — that's what's called a financial derivative. In the mathematical world, it's just a function applied to the underlying financial asset. And then, of course, what you want to do is understand the difference in the value in terms of the difference in the underlying asset. If B_t were a very nice function — if B_t were differentiable — then classical calculus would tell us that df equals f prime of B_t, times dB_t over dt, times dt: the chain rule, differentiating over a small time scale. Unfortunately, we can't do that. We cannot do this.
Because we don't even have this derivative of B. OK — take one failed; take two. Second try: B is not differentiable, but I can still make sense of the minuscule difference dB_t. So what about this: df — maybe I didn't write something, the f prime — equals f prime of B_t times dB_t? What is this? We can't differentiate Brownian motion with respect to time, but we do understand the minuscule, infinitesimal difference of the Brownian motion. So I've just given up trying to compute the time derivative. Instead, I'm going to take how much the Brownian motion changed over this small time scale — this difference — and describe the change of our function in terms of the derivative of f. f is a very good function, so it's differentiable; we know f prime. And dB_t is computable — it's the difference of the Brownian motion over a very small time scale. So this at least makes sense; we can expect it might be true. The first attempt didn't make sense at all; this one at least makes sense — but it's wrong. And why is it wrong? Precisely because of the fact that dB squared equals dt. Let's see how this factor comes into play — I think this will be the last thing we cover today. OK. So if you remember where this formula comes from — you probably won't remember — but from calculus, it follows from Taylor expansion: f(t plus x) equals f(t), plus f prime of t times x, plus f double prime of t over 2 times x squared, plus f triple prime of t over 3 factorial times x cubed, plus and so on. df is just this difference: over a very small increase, we want to understand the change of the function, and that's about f prime of t times x. In classical calculus we were able to ignore all the later terms — in the classical world, f(t plus x) minus f(t) was about f prime of t times x, and that's precisely this formula. But now use Brownian motion here. Write down the Taylor formula for f(B(t plus x)) minus f(B(t)): we get f prime at B_t, times the difference B(t plus x) minus B(t) — the x in the expansion is this difference, which is like dB_t. Up to here, we see the same formula. But the next term is the second derivative of this function over 2, times this difference squared — so what we get is dB_t squared. OK? But as you saw, this is no longer ignorable: dB_t squared is like dt, as we deduced. And that comes into play. So by Taylor expansion, the right way to do it is: df equals the first-derivative term, f prime of B_t dB_t, plus the second-derivative term, f double prime of B_t over 2, dt. This is called Ito's lemma. And if you want to remember one thing from the math part, try to make it this one. This had great impact. If you follow the logic, it makes sense; it's really amazing how somebody came up with it for the first time, because it all fits together if you think about it for a long time. I once saw that Ito's lemma is one of the most cited lemmas — the paper containing it is one of the most cited papers — because people consider it nontrivial. Of course, there are facts that are used more than this — classical facts like trigonometric functions, exponential functions — but people consider those trivial, so they don't cite them in their research papers. This one, people respect. It's a highly nontrivial result. You can check it numerically — see the sketch below.
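A minimal numerical check of Ito's lemma, assuming numpy, for f(x) = x squared (so f prime is 2x and f double prime is 2): Ito says d(B_t squared) = 2 B_t dB_t + dt, so B_T squared should match the sum of 2 B dB plus T — not the sum of 2 B dB alone, which is what the classical chain rule would predict. Grid size and seed are illustration choices.

import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 200000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)     # Brownian increments
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_sum = np.sum(2 * B[:-1] * dB) + n * dt   # 2*integral of B dB, plus T
naive_sum = np.sum(2 * B[:-1] * dB)          # classical chain rule, no dt term

print(B[-1] ** 2)    # the actual value of f(B_T)
print(ito_sum)       # matches, up to discretization error
print(naive_sum)     # off by about T = 1 -- the quadratic variation term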
And it's really amazing how, just by adding this term, the whole theory of calculus fits together again. Without this — maybe that's too strong a statement — but really, Brownian motion becomes much richer because of this fact. Now we can do calculus with it. So there are two things to remember. If you want to remember one thing, it's Ito's lemma. If you want to remember two things, the second is quadratic variation: dB_t squared is equal to dt. And I remember that exactly because B_t is like a normal random variable N(0, t): dB_t squared is like its variance, which is t, and if you differentiate that, you get dt. That was exactly how we computed it. So, yeah, I'll quickly go over it again next time to try to make it stick in your head. But please think about it — this is really cool stuff. Of course, because of that computation, calculus using Brownian motion becomes a lot more complicated. (There is also a short numerical check of today's maximum-distribution theorem below.) Anyway, I'll see you on Thursday. Any last-minute questions? Great.
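A minimal Monte Carlo check, assuming numpy, of the maximum-distribution theorem from this lecture: P(M(t) > a) should equal 2 times P(B(t) > a). The time grid only samples the path at finitely many points, so the simulated maximum slightly undercounts the true one; expect an approximate match. The values of t, a, the grid size, and the seed are illustration choices.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
t, a, n, trials = 1.0, 0.8, 2000, 5000
dt = t / n

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(trials, n)), axis=1)
M = B.max(axis=1)                      # running maximum M(t) on the grid

mc = np.mean(M > a)                    # Monte Carlo estimate of P(M(t) > a)
exact = 1.0 - erf(a / sqrt(2 * t))     # 2 * P(B(t) > a), since B(t) ~ N(0, t)
print(mc, exact)                       # close, about 0.42 for these parameters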
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
1_Introduction_Financial_Terms_and_Concepts.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JAKE XIA: This is the second time we are having this class. We had it last year in a smaller version. That was for six units of credit, and we had it once a week, with mostly practitioners from the industry, from Morgan Stanley, talking through examples of how math is applied in modern finance. And we got a good response last year. So, with the support of the math department, we decided to expand this class to 12 units of credit and hold it twice a week. So we meet every Tuesday and Thursday afternoon from 2:30 to 4:00, as you know, in this classroom. Last year, Dr. Vasily Strela and I — by the way, I'm Jake Xia and that's Dr. Vasily — were the main instructors. Now we've doubled up to four main instructors: Dr. Peter Kempthorne and Dr. Choongbum Lee. The reason we doubled the main instructors is that we have newly added math lectures — covering mostly linear algebra, probability, statistics, and some stochastic calculus — to give you the foundation to understand the math that will be used in the examples taught by the practitioners from the industry. And the purpose of this course is really to give you a sampling menu, to see how mathematics is applied in modern finance and help you decide if this is a field that you would be-- RECORDED VOICE: Thank you for using WebEx. Please visit our website at www.webex.com. JAKE XIA: OK, you heard that. So hopefully this will give you enough information to decide whether this is a field you would like to pursue in your future career. In fact, when we finished the class last year, we had a few students go to work in the industry — some at Morgan Stanley, some elsewhere. So that's really the goal. And at the same time, obviously, you will further solidify your math knowledge and learn new content. We put the math prerequisites a bit later. So I will use today's first lecture to give you an introduction, really to prepare you with some basic background knowledge about the financial markets. Some terminology will be used which you may not have heard before. So before I get into the introduction: I always like to know who is actually in the classroom, so let me ask you a few questions. Just raise your hands so I know roughly your backgrounds. So how many undergraduate students are here? I would say 80 percent. How many graduate students are here, just to verify? Yep, that's about right, 20 percent. And how many students are finance or business majors? Just one. And how many of you are math majors? Most of you. How many of you are engineering majors? A few. How many of you are actually from other universities? Great, because last year we had quite a few, so I want to specifically tell you that you're very welcome to attend the classes here — it's open door. And last year, I remember, we had a couple of students from Harvard. That's where I actually work right now; I forgot to mention that, but I'm affiliated with both the math department and the Sloan School here. So anyway, thanks for that. We will be doing a bit more polling along the way, mainly to get feedback on how you feel about the class.
Last year we had it online, so if you feel the class is going too fast, or the math part is going too slow, or the finance part is a bit confusing, the easiest way is really just to send us emails, which you will find on the class website. So anyway, today-- VASILY STRELA: And all of us have MIT emails. JAKE XIA: Yes. We all have MIT emails, which are listed on the website. VASILY STRELA: [INAUDIBLE]. JAKE XIA: And obviously, we have offices here. You can easily stop by Peter's and Choongbum's offices. Vasily and I will probably be on campus less often, but we'll be here quite often and would definitely love to be here more. So anyway, I will start today's lecture with a story, and a quiz at the end. Don't worry, it's not a real quiz — I'm just going to ask you some questions; you can raise your hand and give your answer. But let me start with my story. This is actually my personal story; I'll tell you later why I'm telling it. The story was in the mid '90s. I had just left Salomon Brothers — that was my first financial industry job — to go to Morgan Stanley in New York to join the options trading desk. So the first day, I sat down, I opened the trading book, and I found something was missing. So I turned around and asked my desk quant: where is the vega report? So, let me show you. So that's the story. I'm obviously not going to tell you the story of Pi, or "Life of Pi" — that's not a financial story. The rest of the list — alpha, beta, delta, gamma, theta — you will learn about in Peter's, Choongbum's, and Vasily's classes. So I'm going to talk about vega. By the way, before I tell you the story: what's unique about vega on this list? AUDIENCE: It's not a Greek letter. JAKE XIA: It's not a Greek letter. That's right. So I turned around and asked my desk quant, where's the vega report? But how many of you actually know what vega is? OK, a lot of people know. For those who haven't heard of it before: it's a measure of a book's, or portfolio's, or position's sensitivity to volatility. So what is volatility? Again, you will learn in more rigorous terms how it's defined mathematically. But the meaning of it is really a measure, or indication, of how volatile a price is — what the standard deviation of the price change is over time. That's all you need to know right now; I'm not going to quiz you on it later. So my desk quant looked at me — this was supposed to be the options trading desk, so he looked at me, puzzled. Instead of answering my question, he handed me a training manual for new employees and new analysts. So I opened the training manual and looked through it, and I actually found my answer: at Morgan Stanley this is not called vega, it's called kappa. So now I remember to call it kappa. Kappa actually is a Greek letter. Further down the same page there was a footnote, which I copied: the footnote about why it's called kappa at Morgan Stanley. "Kappa is also called vega by some uneducated traders at Salomon Brothers" — that's where I came from; I had just joined — "who have mistaken vega for a Greek letter after gambling at Vegas." So anyway, that was my first day. I learned to call it kappa very quickly, because I came from Salomon Brothers. And I have called it kappa for the last 17 years, but you will hear people calling it vega — probably more people call it vega. But anyway, that was my first day at Morgan Stanley.
But why did I tell you the story? What point am I trying to make? When you think about it, mathematical or quantitative finance is a rather new field. A lot of these terms were newly introduced. The pricing model for options, as you know, was introduced with Black-Scholes in the '70s, though some of the groundwork was done a bit earlier. But it's not as if finance was a quantitative profession to start with. What we witnessed in the last 30 years was really a transformation of the trading profession. It went from mostly under-educated traders -- some of whom typically joined the firms in the mail room and became traders later on; that was a typical career path -- to nowadays, where if you walk on a trading floor and talk to the traders, most of them have advanced degrees, and quite a few have very strong training in mathematics and computer science. So what has changed over the last 20 or 30 years? I myself was probably one data point experiencing this change. I certainly didn't expect I would be doing this when I was at MIT, but I did it for the last 20 years. The point I'm trying to make is: before you dive into any details of the mathematics or any concept in finance in this class, bear in mind that this is a field developed mostly in the last 30 years, or even less. And the question you really need to ask is not "is it right or wrong" in the sense of mathematics or physics, but how the concepts are established, defined, and verified. Because in this field, the participants, products, models, methodology -- everything is changing very rapidly. Even nowadays, they're still changing. So with that, I will give you some background on how the financial markets actually started -- the history of this industry. When we talk about markets: in the early days, people needed to exchange goods. You have something I don't have, I have something you don't have, so there are exchanges. Then it became centralized: there are stock exchanges and futures exchanges all over the world, where products are listed as securities. That's one way of trading, which is centralized. And in the last 10 or 15 years, we now have ECNs -- electronic platforms -- which trade an even larger volume. So exchange trading is really just one form of trading; there are many other ways of trading aside from exchanges. One of them is OTC -- over-the-counter -- meaning two counterparties agree to do a trade without being subject to exchange rules, where the underlying trading agreement does not have to be a securitized or standardized product, however you define it. Different regions have different exchanges and markets as well, and they typically specialize in local products: local company stocks, local bonds, and local currencies. So there are many different forms. Again, what's in common? That's the question you need to ask, even when you don't know the specifics. Currencies -- money itself -- are also traded; those are the different currencies issued by different countries. And when we talk about trading stocks, there are also people who trade baskets of stocks, groups of stocks together -- that's a stock index, or indices. So there are different products. How does a stock get listed on a stock exchange? It goes through an IPO -- the Initial Public Offering process.
So, when a company changes from private to public, it goes through this IPO process. That's called the primary market, the primary listing. Once the stock is listed on the exchange and becomes traded in the market, we call it secondary trading -- that comes after the primary market. Equity, or stock, is one form of financial product. What are the other forms? Loans. Actually, debt products are more generic than equity products. When you start thinking about it, what is finance really about? It's really about someone having money and someone not. Someone has money to lend out; someone needs to borrow money. That's a loan. A loan is really a private agreement between two counterparties, or multiple counterparties. When you securitize loans, they become bonds. And when you look at bonds, every government issues large sovereign debt. The US government has large outstanding US Treasury debt -- bonds, notes, bills. Corporates have issued a lot of debt products as well; they borrow money when they need to build a new factory or expand. Universities borrow money: when MIT needs to build a new building, some of the money will come from endowment support, some from some other form of research budget, and some from debt financing -- just borrowing from the public. Local governments, states, even counties borrow too. So debt products come in various forms. Commodities, as you know -- metals, energy, agricultural products -- are traded, mostly in futures format and some in physical format, meaning you take delivery. If you actually buy and sell physically, you build a warehouse to hold them, or you charter a tanker to store them on the ocean. And real estate -- buying and selling houses. The 2008 financial crisis, if you read about it, had a lot to do with the real estate market, mortgages, and asset-backed securities. I'm not trying to give you all the definitions and dump the information on you, but I'd like you to at least hear it once today; if you have more interest, you can read on the side. An asset-backed security is when you have an asset and you issue debt with the asset backing it -- and you ask how to rate the asset's risk level and what its income stream and cash flow are. Before the 2008 financial crisis, as you've heard, a large amount of CMBS -- basically securities backed by commercial real estate mortgages -- was issued, and residential mortgage-backed securities as well. Beyond all of these, you've probably heard a lot about derivative products. Those started with swaps and options. And structured products became more tailor-made for either investors or borrowers, structured to suit their needs. The complexity of some of those structured products became quite high, and the mathematics involved in pricing them and managing their risk became rather challenging. So, coming back to the players in the market: one large type of player is the bank. Essentially, after the 1933 Glass-Steagall legislation, there were two main types of banks. One is called a commercial bank, the other an investment bank. A commercial bank supposedly takes deposits and lends out the money, doing more commercial services. An investment bank is supposed to focus on the capital markets: raising capital, trading, and asset management. But after 1999, Glass-Steagall was repealed, and that separation no longer held. Some people blame that -- probably for very good reason -- as a cause of the 2008 financial crisis.
But I want to tell you how investment banks are currently organized. Vasily just mentioned he works in fixed income. Banks are typically organized into an institutional business and asset management. Within the institutional client business, there are typically three main parts: fixed income, which trades debt and its derivative products; equity, which trades stocks and their derivative products; and IBD, which stands for Investment Banking Division, and really covers corporate finance -- raising capital, listing a stock, IPOs, mergers and acquisitions, and advisory. So that's how banks are organized. Outside banks, the other players -- basically the asset managers -- are obviously a very big force in the financial markets. So the question a lot of people ask is: is this a zero-sum game? I'm sure you've heard this many times. In the financial markets, some people win, some people lose. A lot of the time it depends on the specific products you trade and the market you're in; a lot of the time it is pretty close to net zero. But why do we need financial markets? This comes back to what I described before. Something exists because there's actually a need for it. It's really the need to bridge between lenders and borrowers -- that's the essential relationship it comes down to. Investors who have money need better yield, better return, better interest. In the current environment, when you have a savings account, you don't really earn much at all. So you have to take more risk to generate more return, or use longer-horizon CDs and other types of products, or trade stocks. When you buy a stock, you're essentially giving the money somewhere -- supposedly it goes to the company, and the company uses the money to generate a better return. And the borrowers, whoever needs money, need access to capital. Obviously, different borrowers have different risks. Some people borrow money and never return it -- never generate any returns, or never even return the principal. So the trade between lenders and borrowers is, again, essentially the main driver of the financial markets. A few more words about the market participants. Banks, and so-called dealers, play the role of market making. What is market making? When you, or some end user, go to the market wanting to buy or sell, typically, if there's no market, you don't really find a match. Some of the products you want to buy or sell may not be liquid. So the dealers step into the middle and make you a price. They say: you want to buy or sell this stock? I'll make you a price -- $0.90, that's my bid; $0.95, that's my offer. That's the price at which I'm willing to buy or sell. And as a result of the trade, the dealer actually takes the other side of your trade. They take principal risk, in this case. That's the difference between dealers and brokers. Brokers don't take principal risk. If you want to buy something or sell something and I'm a broker, I don't make you a price. I go to the market makers; I put two people together -- matchmaking -- and make that trade happen. I earn the commission. That's a broker's role. Then, obviously, there are individual investors -- retail investors, same meaning. And mutual funds, which manage public investors' money, typically in a long-only format. Long means you buy something -- you don't short sell a particular security.
Insurance companies have large assets. They need to generate returns and cash flow to meet their liability needs, so they need to invest. Pension funds, same thing: as inflation goes higher, they need to pay out more to the retirees, so where do they get the return? Sovereign wealth funds and, similarly, endowment funds are all in the same situation: they have capital and need to deploy it to make a better return. Then there's another type of player: hedge funds. How many of you have heard of hedge funds? OK, good -- almost everyone. And Peter mentioned that he used to work at a hedge fund. There are different types of strategies, which I will dive into a bit more, but the role hedge funds play in the market is basically to find opportunities to profit from inefficient market positioning or pricing; they have different strategies for that. Private equity is a different type of fund. They basically look to invest in companies, either taking them private or investing in private-equity form, hoping to improve the company's profitability and then cash out. And governments obviously have a huge impact on the market. We know that in the financial crisis, governments intervened. And not only then -- in normal market conditions, governments always have a very large impact on the market, because they are the policymakers. They decide the interest rate and the interest rate curve, and the different policies they push out obviously generate different outlooks for future markets, and therefore for profitability. Then there are corporates hedging their assets and liabilities. When corporates borrow money, they create some risk, so they need to be sensitive to market changes. So, to summarize the types of trading: the first type is really just hedging. That means you're not proactively adding risk; you already have some exposure. Let me give you an example. Say you borrowed money and bought a house, so you have a mortgage -- a floating-rate mortgage, say. You're worried about interest rates going higher, so you can lock that rate into a fixed-rate format, or find other ways to hedge your exposure. Or your company has large income coming from Europe. You have euros coming in, but you're not sure whether the euro will trade stronger or weaker against the US dollar in the future. If you think it will be stronger, you just leave it. But if you think it will trade weaker, you may want to hedge it, meaning you sell euros and buy US dollars. So that's the hedging type. The second type, as I mentioned, is the market maker. A market maker also takes principal risk, but the main source of profit is really earning the bid-offer spread -- I gave you the example of a $0.90 bid and a $0.95 offer. That's what the market maker is trying to profit from. But obviously, they have residual risks sitting on the book -- not every trade is matched. How to optimize that group of trades is what the market maker is doing. Most bank dealers are market makers. Under the new regulation, obviously, proprietary trading is banned, right? So the third type is the proprietary trader, the risk taker. These are the hedge funds, or some portfolio managers. They need to focus on generating return and controlling risk. That's where the concepts of beta and alpha come in. If you're a portfolio manager, some people say: don't worry, don't go picking stocks, just buy an S&P 500 index fund. It's very cheap -- you pay very little cost to do it. That's true.
But if you want to beat the S&P 500 index -- let's call the S&P 500 index fund asset b, with return R(b), the return of that index. Now you have a portfolio a. With the time series of returns of your asset a, you can do a linear regression -- a lot of you are math majors here -- and find the relationship between those two time series: how the two returns are related, in a simplified form. (On this slide the symbols were supposed to be the Greek letters alpha and beta, but somehow they came out as plain letters.) In a short description: think of beta as the correlated move with the other asset, and alpha as the extra return on top of that. You want to beat the S&P 500, so you want a certain tracking of the index, but you want to return more on top of it. (A small sketch of this regression appears right after this passage.) So let me go into a bit of detail about how each type of trade actually occurs. When we talk about hedging, I mentioned the currency example; let me give you another one. A lot of people issue bonds, or issue debt. The example I'm going to give you: think about an Australian corporate. Because interest rates in Australia are higher than in Japan, people typically like to borrow money in Japan, because you pay less interest, and invest it in Australia, where you earn a higher interest rate. So let me ask you a question: why don't people just do that all day long -- borrow from Japan and invest in Australia? The interest rate difference in my example is about 3.5% for roughly 10-year swap rates. Yeah, go ahead. AUDIENCE: [INAUDIBLE]. JAKE XIA: Right. Because you invest in Aussie -- the Australian dollar -- and the Australian dollar may become weaker against the yen. You may lose all your profit, or even more. And further, if everybody plays the same game, then when you try to exit, you get the adverse impact of your own trade. So let's say you thought it was the right time to do it, but then one day you wake up and say, huh, I think too many people are doing this; I want to hedge myself. What do you do? AUDIENCE: [INAUDIBLE]? JAKE XIA: Yep. You try to lock it in, right? Basically, you sell Australian dollars and buy Japanese yen. Or in interest rate terms, you pay the Australian dollar swap leg and receive yen. This involves a foreign exchange trade, an interest rate swap, and a cross-currency swap. So your answer about a currency forward is roughly right, but the actual execution involves a bit more. That's just to give you an example. Even if you're not a finance person -- you work at a corporate, you just do imports and exports, or you're building a factory -- you have to know what the exposure actually is. Risk management nowadays is a pretty widespread responsibility; it's not just the corporate treasury's job. So that's the hedging side. Obviously, if you are Intel, for example, you sell a lot of chips overseas -- Intel does have a lot of overseas income sitting outside the States. Their exposure is that if the exchange rate fluctuates and the dollar becomes a lot stronger, they actually lose money. So they need to think about how to hedge the revenue produced overseas. For import-exporters, that's even more apparent. And if you're entering into a merger deal, and one company is buying another, you need to hedge your potential currency exposure and your interest rate exposure. And whatever is on the assets, the liabilities, or the balance sheet, you need to hedge your exposure.
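To make the alpha-beta regression mentioned above concrete, here is a minimal sketch in Python. This is my own illustration, not from the lecture; the return series and the "true" values 1.2 and 0.0001 are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
r_b = rng.normal(0.0004, 0.01, 750)                    # hypothetical daily index returns
r_a = 0.0001 + 1.2 * r_b + rng.normal(0, 0.004, 750)   # portfolio with "true" alpha/beta built in

beta = np.cov(r_a, r_b)[0, 1] / np.var(r_b, ddof=1)    # OLS slope = covariance / variance
alpha = r_a.mean() - beta * r_b.mean()                 # OLS intercept
print(f"beta  ~ {beta:.2f}   (correlated move with the index)")
print(f"alpha ~ {alpha * 252:.2%} annualized (extra return on top)")
```

The same regression is what a portfolio manager would run against any benchmark: beta measures how much of the portfolio's move is just the index, alpha is what is left over.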
So we've talked about hedging activity; let's talk about market making. If it's a simple, transparent product, everybody pretty much knows where the price is. If you buy Apple stock, I think a lot of people know pretty much where it is -- you may even have it on your cellphone. But if it's not transparent, what do you do? If I ask you where Apple is, you're probably going to tell me $495 today. AUDIENCE: I don't really know. JAKE XIA: OK. But if I asked you instead, what is a call option on Apple stock in two months' time -- I'll give you a strike, let's say 500 -- that's probably much less transparent to you. So the market maker comes in to provide that liquidity, and then takes the risk. They manage the book by balancing those Greeks I mentioned earlier. Delta, which describes the [INAUDIBLE] relationship of the whole book to the underlying stock, or underlying currency -- that's called delta. Gamma is really the change of the delta: take the derivative of the delta with respect to the underlying spot. That's a second-order derivative -- now you have curvature, or convexity, coming in. And theta is: if nothing changes in the market and nothing changes in your position, how your trading book carries, or bleeds away, money over time. And the volatility exposure, as we talked about, is vega. On top of that, what are the tail risks? What events can actually get you into big trouble? People use value at risk, so you will hear this "VaR" concept in some of the lectures, which is also, obviously, a very important concept -- I think Peter will, or Choongbum will -- probably Peter will teach it. Then capital: how much capital are you using? That has become a very important issue nowadays. And the balance sheet: you have assets, you have liabilities -- how much leverage do you have? Before the crisis, for example, a lot of banks levered up 40 times, meaning when you have $1, you have $40 of exposure. So when the market moves a little, you get wiped out. That's really what amplified the 2008 financial crisis. And how do you measure the assets on the balance sheet when you have derivatives, rather than a straightforward notional?
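For reference, here is a minimal sketch of the Greeks just listed, computed for a European call under the standard Black-Scholes model. The spot, strike, rate, and volatility inputs are hypothetical, loosely echoing the Apple example above.

```python
from math import log, sqrt, exp, pi, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    delta = norm_cdf(d1)                            # dV/dS: first-order move with spot
    gamma = norm_pdf(d1) / (S * sigma * sqrt(T))    # d2V/dS2: the convexity
    vega = S * norm_pdf(d1) * sqrt(T)               # dV/dsigma: vega, a.k.a. kappa
    theta = (-S * norm_pdf(d1) * sigma / (2 * sqrt(T))
             - r * K * exp(-r * T) * norm_cdf(d2))  # dV/dt: the carry, or bleed
    return price, delta, gamma, vega, theta

# Hypothetical inputs: spot 495, strike 500, two months, 1% rate, 25% vol.
print(bs_call(S=495.0, K=500.0, T=2 / 12, r=0.01, sigma=0.25))
```

A market maker's book is a sum of many such positions, and "balancing the Greeks" means keeping the aggregate delta, gamma, vega, and theta within limits.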
Now, a lot of quantitative people like to focus a bit more on the risk-taking side, because people have heard stories about successful cases of hedge funds using advanced math: they generated very impressive returns and seemed to have an edge. So people focus on trading strategies, which falls into the category of proprietary trading, or risk taking. You can simply do directional trading strategies -- just go long or short the stock. That's very simple. Those are the so-called gut traders: gut feeling, go with your gut, you don't even think. You say, I'm eating curry today, so I go long; I'm eating rice tomorrow, so I go short. Then there's arbitrage. Arbitrage is really finding relationships between prices, and trying to profit when those relationships are mispriced. This is actually very interesting. Not many people focus on arbitrage, because a lot of people are gut traders -- you essentially just watch your own market and don't really care what's going on elsewhere. But if you trade gold in the States, the gold price in Asia and in Europe matters, right? Because you're trading the same thing. If they are not priced the same way, you can profit from the difference. And that's just a simple example. Spot price versus forward price -- that's a deterministic relationship, a mathematical relationship. If that relationship breaks down, you can also profit. There are many examples of mathematical relationships that give you arbitrage opportunities. The other type is the value trader, or relative value strategies. Rather than a deterministic, short-term mathematical relationship, you look at a longer horizon, trying to determine what the underlying value of a particular instrument really is, and then trade on the relative value. Obviously, there are successful value investors out there. And the systematic trader builds computer models. One example is trend following -- just follow the price trend. That used to be an effective strategy for some time, but when a lot of people do the same thing, it becomes much less effective. Momentum, same thing. Stat arb means finding statistical relationships among a large number of stocks, then trading at higher frequency. And fundamental analysis: you're really trying to understand what's going on in the world. What is the earning potential of a company? What's the trade balance of a country? What does a policy change mean? What does it mean when the Federal Reserve announces they're going to taper quantitative easing? Why has the stock market sold off in the last couple of months, and why have stocks in India, Brazil, and Indonesia sold off more? Why is that? That all goes through fundamental analysis. And there are special situations. Some companies are going through particular difficulties, and their assets are priced very cheaply. There are firms out there -- you've probably heard of Bain Capital and many others -- that focus on these private equity and special-situation opportunities. So what does all of this have to do with mathematics? Where does math come in? How do you use math? I want to give you some aspects of that. From my personal experience: when I joined the market, I really started out working on pricing models. That's the first area. Math is very effective here, because when you -- your bank, your corporate -- want to buy some financial instrument, you have to know where the price is. It's easy to observe a stock price in the market, but for more complex products -- take just one step up in complexity, to the option -- you have to know how to price it. That's where the math comes in. You actually have to be able to solve differential equations to get a model price, and then adjust your assumptions to fit the market. Pricing models -- which Vasily and many of his colleagues can tell you more about -- are a very interesting and challenging area. How do you price all these instruments? And when I say pricing, it's not the narrow definition of just coming up with a price: when you build a pricing model, you also generate the risk parameters of these instruments, and that's how you risk-manage them. That brings us to the second area: math is very useful in risk management, on which I will give you some -- not quiz -- questions after this slide. You can see that risk management itself is very challenging. It's not a purely mathematical question, but math plays a very important role in quantifying how much exposure you have. The third area is trading strategies. Again, a lot of people with math backgrounds -- people in general, really -- are looking for the so-called holy grail of trading strategies.
It's almost like the perpetual motion machines people were looking for 100 years ago. You just turn it on; it makes money by itself; you go to sleep, you go on vacation, you come back, and you have more in your bank account. Obviously, that's not going to happen. The robotrader -- a robotic trader -- is a dream. It has its place and its uses, but this is a fast-evolving market. You have to constantly upgrade your research and adjust your strategies. There's no such thing as something you can build, leave alone, and have run by itself forever. I just want to mention that because maybe toward the end of the term you will feel, hmm, I came up with this brilliant trading strategy, I think it's going to make money forever. Please let me know first. AUDIENCE: And me second. JAKE XIA: So, I want to leave some time for Vasily. He can give you some examples of projects by last year's students, who came to this class and did some real applications at Morgan Stanley. But before I hand it over to Vasily, let me ask you some questions -- not really to quiz you, but to give you a sense of how math, intuition, and judgment come together. Let me first give you an example I call risk aversion. You are facing two choices, choice A and choice B. Choice A: you have an 80% chance to lose $500 and a 20% chance to win $500. That's pretty clear, right? Choice B: you just lock in a 100% chance to lose $280. Let me ask: whoever likes choice A, please raise your hand. One, two, three, four -- about six out of, let's call it 50. Can I ask why you think choice A makes sense? AUDIENCE: So, I know it's a lower expected value, but I enjoy gambling and I would rather take the chance of-- JAKE XIA: Right, because you don't want to lock in that $280 loss, and you still have a 20% chance to win. For the ones who raised their hands for choice A, are there any other reasons? Same reason. AUDIENCE: [INAUDIBLE] JAKE XIA: I assume the rest of you would choose choice B, unless you-- Neither? How many of you choose choice B? And does anybody think neither is right? No, you have to choose -- either choice A or choice B. So let me talk a little bit about this. Again, I'm not trying to tell you which one is right, but I'll share my thoughts on how to look at it. Why is it called risk aversion? This is very common human behavior. You go to the market, you buy a stock. When the stock goes up and makes a bit of money, the natural tendency -- especially for someone new to the market -- is to take the profit. Let's sell! I made $1,000, I made $500, let's go have a nice meal, buy an iPad. But when the stock loses money, what's the natural tendency? AUDIENCE: [INAUDIBLE] JAKE XIA: That's-- AUDIENCE: [INAUDIBLE] JAKE XIA: I think the natural tendency for a lot of people is to keep it. If you have the discipline to get out, that's great. Trading is really all about how you manage risk, keep your discipline, and manage your losses. The natural tendency of a lot of people is: well, I think there's a 20% chance it comes back and I make $500 more -- why would I lock in the loss and stop myself out at 280? So even though choice A has the worse expected value -- 0.8 x (-$500) + 0.2 x $500 = -$300, versus a sure -$280 for choice B -- many people would still not choose choice B, because they don't want to lock in the $280 loss.
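Worked out numerically, with the numbers straight from the example above (nothing here is an assumption):

```python
# Choice A: 80% chance to lose $500, 20% chance to win $500.
# Choice B: a sure $280 loss.
p_lose, amount, sure_loss = 0.8, 500.0, -280.0

ev_A = (1 - p_lose) * amount + p_lose * (-amount)               # -300
var_A = (1 - p_lose) * amount**2 + p_lose * amount**2 - ev_A**2
print(f"choice A: EV = {ev_A:+.0f}, std = {var_A ** 0.5:.0f}")  # -300, 400
print(f"choice B: EV = {sure_loss:+.0f}, std = 0")
# B has the better (less negative) expected value and zero variance, yet many
# people pick A to avoid locking in the loss -- the risk-aversion point above.
```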
Again, I'm not trying to push on you the idea of which one is right or wrong -- but think about it. That's really the common behavior: it may not make sense mathematically, but a lot of people still do it. And really, when you think about it, it depends on your situation. Let's say -- to use the stock example again -- you're not purely following a stop-loss discipline, but you think the fundamental picture has changed, and you really don't think the stock should go up anymore. Then obviously you should get out at whatever level, regardless of how much loss you lock in. But if you think the fundamental story is still very sound, you should think about it as if you didn't have a position: what would you want to do next? Anyway, mathematically -- I guess this is MIT, so many people think mathematically, and you would actually choose choice B, because it has the smaller expected loss, which makes sense. But if you asked a larger audience, I think a lot of people would not choose choice B, because they don't want to lock in the loss. Now, let me change the question a little bit. Choice A becomes: instead of an 80% chance to lose, you now have an 80% chance to win $500 and a 20% chance to lose $500. Choice B: you have a 100% chance to win $280. Who would choose choice A? Again, a minority of this audience -- let's say less than 10%. Who would choose choice B? The rest of you. All right. Can someone who chose choice A give me an argument why? AUDIENCE: [INAUDIBLE] JAKE XIA: Yep. Anyone want to give me a reason for choice B? AUDIENCE: Higher Sharpe. JAKE XIA: Higher Sharpe? Mm-hm. Yup. Well, let me just leave it here. We can talk a bit more during the class -- on the last day of class, hopefully, we'll have a much deeper discussion on this. The answer, I think, can go either way, as you said. If you're a freshman whose bank account balance is $800, your choice will be very different from that of someone who has $100,000 in their bank account. It also depends on your risk tolerance -- how much you can tolerate. I'm not going to say which one is right or wrong. But with that, let me move on and give you some homework. Before I give you the homework, I want to make a few more comments. Do people always learn from their experiences? In science, we collect evidence and build models: we first understand the physics, we build mathematical models, and then we verify them by doing experiments. But is the investigation process the same in finance? Market cycles are typically very long, but people tend to have short memories. So how do people really learn from their experiences? A very interesting question. A very natural tendency is to extrapolate historical experience. What happened in 2008? People still remember. What happened in the 1970s? Maybe some people still remember. What happened 100 years ago? People tend to extrapolate, drawing conclusions from very recent experience. Deterministic relationships versus statistical relationships are very interesting as well: when you try to trade on them, how do you really build models? Is the market really efficient? What part is efficient? How do you apply those theories in your day-to-day risk management or trading activities? And sometimes people tend to oversimplify -- to just say, oh, I can model this, this is the one important parameter, I'll just take that.
So I'll just give you this warning: again, this is a very young, new field, and often it is more art than science. Keep that in mind, even though we're talking about mathematics in finance. Math is very powerful and useful in finance, so learn the math, and learn the finance -- but keep those questions in mind along the way as you learn during this class. Suggested homework, optional: I mentioned a lot of terminology today. Go to the course website and read the financial glossary we have put up. If there are still things you don't understand, compile your own list of financial concepts, which you can search for on the web or even ask us about. I encourage you to do that; it will prepare you well. And read the other materials on the course website. So -- how about this? We've still got about 12 or 15 minutes left, so I'll pass it to Vasily, and then maybe we can leave five minutes for questions. VASILY STRELA: Yeah. JAKE XIA: Yeah, OK. VASILY STRELA: [INAUDIBLE] mentioned that Apple trades -- now it's $494.40. Yeah, just a couple of [INAUDIBLE]. Well, first of all, no offense to people who were [INAUDIBLE], but I just wanted to give an example of [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. VASILY STRELA: --because he was working in our group, and it will give you a little bit of an idea of what we will be talking about, what we actually do in daily life, and what an intern or somebody who comes to work in this industry could do. One project [INAUDIBLE] worked on was estimating a noisy derivative. The derivative in question is called delta -- usually the first derivative of a pricing function. As we will see in the class, quite often, to obtain a price you go through Monte Carlo, meaning running a lot of paths and then averaging over them. It's a statistical method, so obviously there is noise in your answer every time. So if you want to differentiate this function and get a derivative, the derivative will be quite noisy. Instead of getting the true derivative, you might obtain something quite different from it, just because there is a confidence interval around every point. And obviously there is a trade-off here: you can run more paths -- throw more computational power at it -- which will reduce your confidence interval, so you know more precisely where you are. Or, if you know that your function is not too concave and is reasonably flat, you can do the numerical differentiation over a wider interval, basically reducing the significance of the noise, and hope to arrive at a better approximation. So there is a balance somewhere, and the question was: is there an optimal shift size for computing the derivative? And that's what-- uh oh, the slide got corrupted. There was quite a bit of mathematics involved -- minimization and optimization -- and there was an answer. That's what we finally arrived at. It's a toy example, but it still shows you that if you use a constant shift size rather than the optimal size, you get a noisy numerical derivative of this blue function, while if you use the optimal shift size, which [INAUDIBLE] computed, it is much smoother and much better. So that's one example of what he did, and we are actually implementing it in our systems and plan to use it in practice.
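Here is a toy sketch of that trade-off -- my own illustration, not the student's actual project: a "price" function with Monte Carlo style noise, differentiated by central differences at several shift sizes, including a roughly optimal one. Noise in each estimate scales like eps/h, truncation bias like h^2 * |f'''| / 6, so balancing the two gives an optimal shift of roughly (3 * eps / |f'''|) ** (1/3).

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-3                                   # stdev of the pricing noise

def noisy_price(x):
    return np.sin(x) + rng.normal(0.0, eps)  # "true" price sin(x) plus MC noise

def central_diff(x, h, trials=200):
    # Averaged over many trials only to make the comparison below stable.
    ests = [(noisy_price(x + h) - noisy_price(x - h)) / (2 * h) for _ in range(trials)]
    return float(np.mean(ests))

x0, true = 0.3, np.cos(0.3)
for h in [1e-4, 1e-2, (3 * eps) ** (1 / 3), 1.0]:   # tiny, small, ~optimal, huge
    est = central_diff(x0, h)
    print(f"h = {h:8.4f}   estimate = {est:+.4f}   error = {est - true:+.5f}")
```

Running this shows the pattern described above: the tiny shift is swamped by noise, the huge shift is biased, and the intermediate shift does best.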
Another project was actually quite different. It was about electronic trading -- basically, how to better predict the prices of currencies and exchange rates. Funny enough, it was on ruble/US dollar, because it was actually aimed at our Moscow office. Basically, what we had were noisy observations of broker data, coming out at non-uniform -- basically random -- times. So we decided to use a Kalman filter and study how well it could predict. That's one of the nice graphs [INAUDIBLE] produced; again, we will use this strategy, and the Kalman filters he constructed, in our e-trading platform in Moscow. (A minimal sketch of this kind of filter appears at the end of this lecture's notes.) So that's just a couple of examples, which I wanted to give you as a preview of what we will be talking about in the class. Just a reminder: the website is fully functional. We put the syllabus there and a short list of literature. We will be posting a lot of materials there -- probably most lectures will be published there. Jake's slides are there already. So, any questions? JAKE XIA: Please hand back the sign-up sheets. We'd like to get your emails so we can put you on the website for further announcements, but you can also add yourselves. [INAUDIBLE]. But it's probably easier if you put your email on the sign-up sheet, so we can [INAUDIBLE]. VASILY STRELA: Yeah, but please visit and sign up here, because there will be announcements for the class. Thank you very much.
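For reference, a minimal sketch of the kind of Kalman filter described above, under an assumed random-walk state model with irregularly timed observations. The actual Morgan Stanley setup and parameters are not public; everything below is a hypothetical stand-in.

```python
import numpy as np

def kalman_irregular(times, quotes, q=1e-4, r_obs=1e-2):
    """Filter a random-walk state observed with noise at irregular times.
    q: state variance per unit time; r_obs: observation noise variance."""
    x, p = quotes[0], r_obs          # initial state estimate and its variance
    out, t_prev = [], times[0]
    for t, y in zip(times, quotes):
        p += q * (t - t_prev)        # predict: uncertainty grows with the gap
        k = p / (p + r_obs)          # Kalman gain
        x += k * (y - x)             # update toward the new quote
        p *= (1 - k)
        out.append(x)
        t_prev = t
    return np.array(out)

rng = np.random.default_rng(2)
t = np.cumsum(rng.exponential(1.0, 300))       # random observation times
true = np.cumsum(rng.normal(0, 0.01, 300))     # hidden "true" rate path
quotes = true + rng.normal(0, 0.1, 300)        # noisy broker quotes
print(np.std(quotes - true), np.std(kalman_irregular(t, quotes) - true))
```

The second printed number (filtered error) comes out well below the first (raw quote error), which is the whole point of filtering noisy, irregularly spaced quotes.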
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
23_Quanto_Credit_Hedging.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Anyway, welcome today. Stefan Andreev is our guest speaker from Morgan Stanley. And as I understand, you have a PhD degree in chemical physics. STEFAN ANDREEV: In chemical physics, yes. And maybe I should go here. [LAUGHTER] PROFESSOR: And now he's in the world of finance. And we're here to benefit from your experience. STEFAN ANDREEV: Thank you very much for the introduction. Yeah. I went to school at Dartmouth College for undergrad, and then up the street at Harvard for my PhD. Then I transitioned from science to finance, and for the last eight years I've been working at Morgan Stanley with Vasily Strela, an instructor in this course. So what are we going to talk about today? To give you a big-picture view of where our topic fits within the grand scheme of finance: in my view, there are really two big areas -- well, there are probably more, but these are the most famous areas where quantitative skills are very valuable in finance. One area is statistics and prediction, which is essentially: given some historical behavior in the market, how do we predict what will happen in the future? That's certainly a huge industry, and people have made a ton of money applying quantitative concepts to it. But that's not what we're going to talk about today. What we're going to talk about is another very big area called pricing -- the pricing and hedging of complex instruments. That area is really about what happens when you have a complex product that you don't know the price of, but you do know the prices of other products. You can then use the other products to replicate the payoff of your complex product, and use mathematical techniques to say: look, because I can replicate my payoff using products whose prices I know, I can say something about the price of my complex product. I can price it. And not only can I price it, but when I quote the price, I know I can eliminate any uncertainty from owning the product by executing a hedging replication strategy -- at least theoretically speaking. So that's the area we're going to focus on today. Our main focus is going to be on FX -- foreign exchange -- interest rates, and credit, and in particular on credit-FX hybrid models. We're going to talk about why we need credit-FX hybrid models, go through an example of a simple one, and see how to apply it. As for the mathematical techniques: as I said, we're going to talk about risk-neutral pricing, which is essentially replication. And we're going to talk about how to use jump processes -- which you might have seen in other parts of your studies as Poisson processes -- to describe certain kinds of price behavior that you cannot easily describe using the pure-diffusion Brownian motions you have probably seen so far in the course. And why do we care about that? Well, there are certain financial applications where this is important.
In particular, something that happened in the last few years: the sovereign crisis in Europe. And it hasn't happened just in the last few years -- it has happened many times in other parts of the emerging markets. Given my emerging-markets background, I've worked on these kinds of models. This is the situation where you have Greek bonds in euros and there's a potential for a Greek default. As you might have read in the news, there was a big worry about what would happen to the euro currency if there were a spate of sovereign defaults. And in fact, in anticipation of the possibility of default, the euro actually did depreciate for a while back in 2011 and 2012. Now it's pretty much back where it was before, but there certainly was a fear in the market -- which was also very obvious in option prices -- that the euro could depreciate significantly if a disorderly default did happen. It didn't happen, so that's good. But in other emerging markets, historically, it has happened before. So it's not an empty question. So, foreign exchange: how do we describe it in mathematical finance? We think of it as the price of a unit of foreign currency in dollars. In our presentation, we're going to denote the spot FX rate -- the current rate of exchange -- by S. Here is a sample graph of euro-USD FX rates. You can see it looks like a random walk; in normal circumstances it is very well described as a random walk. One very fundamental property that connects FX and interest rates is the so-called FX forward interest rate parity, which says: if I have a certain amount of money -- in this example, $5 million -- and I can invest it, there are two ways I can utilize it. One way is to just invest it at a dollar risk-free rate (we're assuming here that we have a risk-free rate -- the standard assumption). Or we can take the money, exchange it into, say, euros, invest it at the euro risk-free rate, and then exchange it back into dollars. This is essentially how FX forward contracts are priced. An FX forward contract allows you to say: I agree with you that in one month's time, I'm going to give you 4,108,405 euros, and you're going to give me back $5,170,000. It's an agreement -- a derivative contract. And if you have this forward contract, you can lock in, through the conversion into euros, an effective dollar interest rate. So FX forwards can be fully described by knowing the interest rates in each currency and the spot FX rate. Conversely, you can infer foreign interest rates from the FX forwards. They're very connected. Yes? AUDIENCE: In this example, there's no mispricing, so you get that same amount. Is that the idea? STEFAN ANDREEV: In this example, there is no mispricing; you get back the same amount. So we are assuming, essentially, there's no arbitrage. We're not just assuming -- we're given the numbers: the interest rate in euros is 4.6%, in dollars it's 3.4%, the current spot is 1.27, and the forward is 1.25. If these were, in fact, the observable market quantities, then there would be no arbitrage, and you would be indifferent between investing the money in dollars, or exchanging into euros, investing in euros, and then converting back into dollars.
So in this example, the way I've worked it out, there is no arbitrage. Now, if some of these numbers were different -- say the interest rate in euros were 4% instead of 4.6%, with all the other quantities the same -- then there would, in fact, be an arbitrage, and you could make money by borrowing in one currency and investing in the other. The purpose of this slide is really to illustrate how, given no arbitrage, one would actually compare the two routes -- how one would look for arbitrage in this example. This next slide, again, is a bit of definition -- what compound interest and interest rates are. We're going to talk about instantaneous risk-free rates. Again, "risk-free" means we know for sure we're going to get our money back. You can think of risk-free rates as what Treasuries pay in real life, or what the Federal Reserve guarantees on deposits -- there are various examples. In practice, different "risk-free" rates can actually differ from each other, so they're not all truly risk-free. But in our model, we're going to assume there is such a thing as a risk-free rate for every currency, and that it's unique. Now let's talk about the dynamics of the FX process -- we're making an FX model. In the previous example, we saw that given an FX rate and some interest rates, here's what the FX forward has to be in order to have no arbitrage. Now we're trying to define a stochastic process for the FX rate, and this no-arbitrage condition puts certain constraints on what the stochastic differential equation has to be. In this particular case, the constraint is that the drift of the process has to be the interest rate differential. If one currency pays more than the other, obviously, people would want to invest in that currency. So for no arbitrage to exist, there has to be an expectation that the currency that pays more will depreciate in the future; otherwise there would be an arbitrage. If you could say, hey, this currency won't depreciate, then you could just always invest in the currency that pays the higher interest rate and make money -- which, in fact, many people do. But they're taking a certain risk: the risk that the currency will depreciate. So what do we actually want? We want the no-arbitrage condition from before to hold -- my forward rate has to be, essentially, the spot rate times this interest rate factor. What does that mean? It means that my forward rate has to be my spot-- AUDIENCE: [INAUDIBLE] each other, you mean? STEFAN ANDREEV: What did you say? AUDIENCE: [INAUDIBLE] set those equal to each other? STEFAN ANDREEV: Yes. My forward rate has to equal the spot rate times, essentially, the interest rate differential factor: Ft equals St times e to the (rd minus rf) times T. If that is true, then in the previous setup there will be no arbitrage. And why is that? Because the amount of money I earn on the domestic leg is e to the rd; the amount of money I earn on the foreign leg is e to the rf, but then I convert at the forward, and that has to come out equal to e to the rd. This is standard -- the most basic dynamic FX model that people use in industry. It's referred to as the Black-Scholes FX model.
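A small numerical sketch of both points -- the covered interest parity check and the Black-Scholes FX dynamics -- using the illustrative rates from the example above plus an assumed 10% volatility and a one-year horizon:

```python
import numpy as np

S0, rd, rf, T, sigma = 1.27, 0.034, 0.046, 1.0, 0.10
F = S0 * np.exp((rd - rf) * T)                  # covered interest parity
print(f"forward = {F:.4f}")                     # ~1.25, below spot: EUR pays more

usd_leg = 1e6 * np.exp(rd * T)                  # invest $1mm at the dollar rate
eur_leg = (1e6 / S0) * np.exp(rf * T) * F       # convert, invest in EUR, convert back at F
print(f"USD leg {usd_leg:,.0f} vs EUR leg {eur_leg:,.0f}")  # equal: no arbitrage

# Black-Scholes FX model: dS/S = (rd - rf) dt + sigma dW, simulated to T.
rng = np.random.default_rng(3)
z = rng.normal(size=1_000_000)
ST = S0 * np.exp((rd - rf - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
print(f"mean simulated S_T = {ST.mean():.4f} (sits on the forward)")
```

The drift constraint shows up in the last line: the simulated mean of S_T lands on the forward, exactly the no-arbitrage condition just described.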
And the stock price -- you've seen stochastic models before; usually you see them for the stock price when people talk about options. In that case, the drift is just the risk-free interest rate. Here in FX, it's the differential of interest rates. Otherwise, it's very similar. So FX has some interesting properties, and we're going to play a game. Before we get to the game, one question: can an FX exchange rate ever be negative? What do you guys think -- can the dollar-euro exchange rate be negative? Any ideas? No -- it's hard, because what would negative mean? It would mean I have to pay you money for you to take my euros. Nobody would do that. It can potentially be 0, if the currency becomes worthless, but it cannot really be negative. That's one reason why I wrote my SDE as a log-normal process -- you recognize this from the form dS over S. The changes in the FX rate are proportional to its level, so the process can never become negative. OK, so it can never become negative -- but how big can it get? The answer is: very big. We have currencies -- notably the Zimbabwean dollar -- that have traded at, I believe, somewhere in the billions of Zimbabwean dollars per US dollar. It's a really extreme example; it can get extremely big. So there is no real upper bound, while there is a lower bound. The distribution, as you can imagine, has a skew: it's not symmetric around the average; it's limited on the lower side and can go very high on the upper side. The log-normal distribution has that property. Have you guys seen a log-normal distribution? You've talked about this in the course before, right? So, back to our game. We have some assumptions; my assumptions are not meant to be realistic, but to keep things simple. Let's assume the dollar-euro exchange rate is one, so we can exchange one euro for $1. Clearly not exactly the case, but let's assume it. We also assume the FX forward is 1, which basically means the interest rates in both currencies are the same. Now I'm going to make you a bet. Dollar-euro is a volatile process: right now it's one, but in the future it could be higher or lower. If the dollar-euro FX rate is more than one in one month, then you give me money; if it's less than one in one month, then I give you money. And we're going to have two payoffs, so two games. (I don't know why it says "bet B" -- it should say just "bet." I'm sorry about that.) In payoff A, you're going to give me $100 if I win, and I'm going to give you $100 if you win. In payoff B, you're going to give me 100 euros if I win, and I'll give you 100 euros if you win. The question is: which game would you prefer to play, or do you not care? In each case, you win or lose the same number. So I want to see hands. Who wants to play game A? Come on guys, wake up. Who wants to play game A? If you don't know, just say you like euros better -- this is not graded, so it's OK. OK, nobody really knows what to play? How about game B? Anybody want to play game B? OK, you guys want to play? Three people for game-- four-- game A, nobody still? Same person for game A and B? All right.
OK, two people -- so now-- AUDIENCE: Behavioral science says people are reluctant to lose -- more reluctant to lose. STEFAN ANDREEV: That's true, that is true. People are reluctant to lose. However, I said the FX forward in one month is 1 -- that's the market price -- and our bet level, the strike, is also 1. So you could say, well, this looks like a fair game: I don't expect to win or lose much, but I'm just reluctant to play. I get that feeling -- that's the risk-aversion aspect of it. But if you were forced to make a bet, the question is: which one would you prefer? So I understand you might not want to play. You guys don't seem to be in the mood, and that's fine -- let's look at some scenarios. Let's say in one month, dollar-euro goes to 1.25. In bet A, you lose $100. In bet B, you lose 100 euros -- which is now $125. So in bet A you are $25 better off than in bet B. In the second case, if dollar-euro goes to 0.75, you make $100 in bet A, or you make 100 euros in bet B -- which is now only $75. In that case, too, bet A is $25 better. So it doesn't matter what happens: bet A is the better case. So if you're like our dear professor here and you don't like to lose, you're probably going to choose bet A, I assume, right? That's the better deal. And that's kind of strange, though. Both payouts look symmetric -- 100 euros either way, $100 either way. Why is one better than the other? What really happens is that the value of the units of the bet depends on whether you win or lose. If I were betting in acorns -- you get two acorns or I get two acorns -- it might actually be a fair bet. But because I'm betting in euros and dollars, and the relative value of these things changes based on whether you win or lose, the game is not symmetric anymore. The reason I wanted to take you through this game is that there are a lot of cases in finance where people make bets, but the value of what you get depends on whether you win or lose -- and that has an effect on the value of the bet. In particular, the case we're going to talk about today is one of these: credit-FX. That's why we need credit-FX quanto models.
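Here is a quick simulation of the classroom bet -- the 10% volatility is an assumption; the rest follows the setup above:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, T = 0.10, 1.0 / 12.0                       # assumed 10% vol, one month
z = rng.normal(size=1_000_000)
ST = np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)  # E[S_T] = forward = 1

win = ST < 1.0                                    # "you" win when S_T < 1
bet_A = np.where(win, 100.0, -100.0)              # settled in dollars
bet_B = np.where(win, 100.0, -100.0) * ST         # settled in euros, converted

print(f"E[bet A] = {bet_A.mean():+.2f} USD")      # slightly positive for you
print(f"E[bet B] = {bet_B.mean():+.2f} USD")      # slightly negative for you
print(f"min(A - B) = {(bet_A - bet_B).min():+.2f}")  # >= 0 on every path
# A - B = 100 * |1 - S_T| on every path: in bet B you receive euros exactly
# when they are cheap and pay them out exactly when they are expensive.
```

That is the quanto asymmetry in miniature: the payoff currency is correlated with the outcome, so the nominally symmetric bet is not symmetric in dollar terms.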
To give you an illustration from finance, let's take Italian bonds. Italy issues bonds both in dollars and in euros. Why does it issue in both currencies? Because Italy has to issue a lot of bonds, and they need to find as many investors as they can. Some investors want to buy euro bonds, some want to buy dollar bonds, and Italy wants access to both investor bases. Now, these bonds cross-default, meaning if Italy defaults on one bond, all of its bonds default together -- the euro and the dollar bonds. So there is a notion of credit spread, which is the measure of how risky Italy is. You can take the euro bonds and ask: how much premium does Italy pay over German bonds? Let's assume German bonds are risk-free, which is the standard assumption for euros -- Germany is the main underlying economic force behind the euro, so its bonds are treated as the risk-free ones. So Italy pays a certain spread over the euro risk-free curve; same thing in dollars -- it pays a certain spread over US Treasuries. If Italy wants to borrow money, they have to pay a higher interest rate -- just like if you want to borrow money for student loans, you pay a higher interest rate than the government does. And the size of that spread is set in the market; it reflects how risky a borrower you are. Well, it turns out these spreads are not the same in both currencies. One currency has a higher spread than the other. That's kind of an interesting thing. So there are really two questions: when the spreads are not the same, which currency would Italy prefer to issue bonds in? And which currency do investors prefer to buy bonds in? This is similar to the previous game we played: if you're an investor holding the bonds, then if Italy defaults, chances are the euro is not doing so well, so you lose money -- and if you have euro bonds, you lose euros; if you have dollar bonds, you lose dollars. On the other hand, if Italy does not default and pays you back, then chances are the euro is not doing that badly, so you collect euros or dollars accordingly. So there's a similar dynamic going on, and the same kind of question I asked before -- USD or euros, when the face amounts look equal in both. What do you think now? Now that we've gone through an example, maybe we'll have higher participation in my pop quiz. Who thinks the USD bonds have the higher credit spread? Vote for A. One, two. And who thinks the euro bonds have the higher credit spread? OK, one -- all right, so two to one. I think the two-to-one wins. I must say, maybe it's the format of the auditorium -- people don't like to raise their hands too much, or maybe they're afraid of being filmed. [LAUGHS] OK. Well, how are we going to answer this question? Before I give you the answer, we're going to go through a slide. First: FX rates are volatile -- there is volatility, as we said before. Now, in order to compare euro bonds to dollar bonds, we need to come up with a strategy to replicate one with the other, and then look at how much of one we need to buy to replicate the other. If we can come up with such a replication strategy, we can immediately say: hey, if you need 150 euro bonds to replicate 100 dollar bonds, that means the euro bonds have to be cheaper. That's basically the replication argument. You can try to do that by piecing together bonds, or we can use the powerful tools of mathematical finance that you've been learning about, which are all about replication and pricing. The three steps are: analyze the payoffs of the instruments; write a model -- a model for FX and for credit -- and price the bonds; then look at the results and try to understand the problem intuitively. That's basically what option quants do on Wall Street all the time. So here's the answer -- dollar versus euro spreads from the marketplace. Usually what happens with these kinds of questions in finance is that you already have an answer from the market, and then you try to construct a model that explains the difference. Well, the USD bond spreads are actually higher. Now, when we're talking about risky bonds, there are two states.
They are either performing, or they are non-performing and in default. We're going to go through an example with two bonds -- two zero-coupon bonds, which essentially have zero recovery. The idea is to make the question simple so we can analyze it better; you don't lose much generality by looking at zero-coupon rather than coupon bonds, and the intuition is exactly the same. So we have two zero-coupon bonds, same maturity, each paying 100 at maturity. And by the way -- I use these terms freely because I'm very familiar with them -- bonds are nothing more than loans. A zero-coupon bond means I give you some amount of money now, and at a pre-agreed maturity you pay me 100. Say I give you 80 now, and one year from now you pay me 100. It's called zero-coupon because you don't pay me any intervening coupons: there are no interest payments, I just pay you less now and you pay me more at maturity. OK, so bond U pays $100 and bond E pays 100 euros. Denote the prices: the price of U is Pu, the price of E is Pe. Our spot FX rate we'll call St, and our FX forward, Ft. Now consider a simple would-be arbitrage strategy. Say we sell 1,000 dollar bonds -- $100,000 of face -- and with the proceeds buy euro bonds with 100,000 euros of face, and we enter into an FX forward contract on 100,000 euros for maturity T at zero cost. Let's see how this strategy pays out. At maturity you receive 100,000 euros from the euro bonds you bought, and you owe $100,000 on the dollar bonds you sold. You have the FX forward contract, so at maturity you exchange the 100,000 euros for dollars at the pre-agreed rate Ft -- you already agreed to do that at zero cost. So the FX forward exactly hedges the currency mismatch, and your net payoff is 0. That means the prices of the two bonds have to be consistent: the dollar price has to equal the forward rate times the euro price. But what if it doesn't? What if Ft -- which in this case is 1 -- times the price in euros is different from the price in dollars? Well, you would say there's an arbitrage. And you would be right -- if the bonds were certain to perform. But what happens if there is a default? If there's a default, these bonds don't pay anything, and you're left with just the FX forward contract. And that FX forward is going to be worth something after default, especially if the FX rate jumps upon default. Remember what an arbitrage is: you start with zero money, you cannot lose, and you make money with nonzero probability. In this particular case, look at the strategy's payoff in a default with a 25% recovery rate: you receive only a quarter of the bond payoff at maturity, but your FX forward hedge is still for the full 100,000. So for 25,000 of it, you can use the FX forward to exchange money; for the remaining 75,000, you just have an outright FX forward position. And if FX moved against you, you would lose money.
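To put numbers on that residual risk, here is a minimal sketch with assumed values: Ft = 1, 25% recovery, and an assumed post-default spot of 1.40 dollars per euro, i.e. FX moving against the position (a devaluation would instead produce a gain -- the point is only that the position is no longer flat):

# Residual exposure of the "arbitrage" when default hits (assumed numbers).
F, face, R = 1.0, 100_000, 0.25        # forward rate, face amount, recovery
for S_T, default in ((1.00, False), (1.40, True)):   # S_T: USD per EUR at T
    frac = R if default else 1.0
    eur_received = frac * face         # euro bonds pay only this fraction
    usd_owed = frac * face             # the sold dollar bonds owe the same fraction
    # The forward still obliges us to deliver the FULL 100,000 EUR for F USD
    # each; euros we did not receive must be bought at the spot S_T.
    usd_pnl = face * F - (face - eur_received) * S_T - usd_owed
    print(f"default={default}, S_T={S_T}: net = {usd_pnl:+,.0f} USD")
# No default: 0. Default: (1-R)*face*(F - S_T), a naked FX forward position.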
So that's why the strategy is not necessarily an arbitrage, and that's why the prices of the dollar and euro bonds are not necessarily tied to each other. They don't have to be equal, because there is a possibility of default, and you cannot directly hedge it -- you cannot construct an arbitrage strategy out of FX forwards and the bonds so easily. You have to take into account what happens if default occurs. OK, so let me give an example of what happens to FX when default occurs. One of the most recent defaults of a big country with its own currency is Argentina in 2001. When it defaulted, the dollar-peso exchange rate skyrocketed -- the Argentinian peso collapsed. Here is the graph of the price series. So if you had been left with a naked FX forward contract, receiving pesos and paying dollars, in the event of default you would have lost a lot of money; it would have gone badly against you. And this, by the way, was a massive move -- the Argentinian peso still has not recovered from that default. So can we do better? What should we be doing when we hedge this? The answer, again, is to apply mathematical models and try to come up with a replication strategy. So what are the main features of a model that will let me do this? First, I need to model the credit default event -- that has to be in my model. Second, I need something which says FX has to move upon default. Then we're going to construct a complete market, define some simple dynamics for the exchange rate and the default, and price the bonds. How do we do that? Generally, we define an SDE, like the dS over S I defined initially, and we solve it either analytically or numerically. And -- this is important, and it's how we actually use these models in trading -- we look at how the price of each instrument depends on the hedging instruments. That defines my hedge ratio, my replicating strategy. That's really the main part: hedging and valuation are the right and left hands of the same thing. You cannot price without hedging; pricing without hedging is meaningless, in some sense, because a price represents the cost of a hedging strategy. OK, the basic credit model: how do we model default? The standard model in finance is to treat default as a discrete event that arrives at a random time tau, and to model tau as the first jump of a Poisson process: we don't know when it's coming, but we know something about the probabilities of when it's coming. The Poisson process has an intensity, h in our case, and the meaning of the intensity is that the probability of the default time not arriving by time capital T is e to the minus h times (capital T minus little t). Little t means now: given that default has not arrived by time t, this is the probability that it will not have arrived by the later time capital T.
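In code, this default model is just an exponentially distributed default time. A small sketch (h and T are assumed numbers) checking the survival formula against simulation:

import numpy as np

# Constant-intensity (Poisson) default: P(tau > T) = exp(-h * T) at t = 0.
h, T = 0.02, 5.0                                 # assumed hazard rate, horizon
rng = np.random.default_rng(0)
taus = rng.exponential(1.0 / h, size=200_000)    # simulated default times
print(np.exp(-h * T), (taus > T).mean())         # analytic vs. simulated survival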
So in our model, we're going to make a simple assumption: a constant hazard rate h. Since we know the probability of the default time not arriving by a given time capital T -- that's one minus the cumulative distribution -- we can also find the probability density that default happens at time capital T, or within some epsilon around capital T: it's just the derivative of the cumulative distribution. A corollary is that the instantaneous density of default right now is h, which is the limit as capital T goes to little t. Now, in our model, what happens to the FX rate? The FX rate is denoted by S, and the FX rate right after default equals the FX rate right before default times e to the power J. J is essentially the percent devaluation, if you like; J can go from minus infinity to infinity, and if J is 0, there's no devaluation. So the log of St jumps by J. In a log-normal process, the log of St is normal, and the jump is just a shift of that normal distribution. OK, so how do we describe this? We define a jump on default: N is a Poisson process with intensity h, as on the board. And our FX dynamics -- I apologize for the small script -- are that d log S has some drift, mu_t dt, plus a jump term, J dN. This is slightly different from what you've seen so far: so far you've seen Brownian motions, and J dN is a jump process. Now, we still want our standard no-arbitrage condition to hold. From before, we had the condition that the expected value of S of T has to be S of 0 times e to the (rf minus rd) times T. That still has to be the case, and here we're going to assume rf and rd are both 0 -- a zero-interest-rate environment, to keep the model simple. Then we just want the expected value of S_T to be S_0. How do we achieve that? We need to show that the drift mu has to equal h times (1 minus e to the J). That's known as the compensator term. And you can picture it: if I have a process with the possibility of jumping in one direction, then for that process to equal its initial value on average, it has to trend in the other direction the rest of the time, so that the two average out. That's the compensator of the Poisson process.
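Since the board work is hard to follow in transcript form, here is a compact restatement of the dynamics and the drift condition as I read them:

\[
d\log S_t \;=\; \underbrace{h\left(1-e^{J}\right)\mathbf{1}_{\{t<\tau\}}}_{\mu_t\ \text{(compensator)}}\,dt \;+\; J\,dN_t ,
\qquad N_t=\mathbf{1}_{\{\tau\le t\}},
\]
\[
\text{with the drift chosen so that } \mathbb{E}[S_T]=S_0 \text{ holds under } r_f=r_d=0 .
\]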
OK, so now let's check that this drift does indeed satisfy the expectation condition. We start with d log of S_t equal to h times (1 minus e to the J) times the indicator of tau bigger than t, dt, plus J dN_t. We want to integrate this equation, so we write the integral from 0 to capital T of d log S_t and integrate both sides: the integral from 0 to capital T of h (1 minus e to the J) times the indicator of tau bigger than t, dt, plus the integral from 0 to capital T of J dN_t. The left side just gives the log of S_T over S_0 -- basic calculus. Now, the indicator function says: if tau is bigger than t, it's 1; if tau is less than t, it's 0. So the drift integrand is only nonzero for t less than tau, which means that if tau is less than capital T, I can replace the integral by one from 0 to tau and drop the indicator: the integral from 0 to tau of h (1 minus e to the J) dt. There's also the possibility that tau is greater than capital T, in which case the integral runs all the way to capital T without any indicator: the integral from 0 to capital T of h (1 minus e to the J) dt, times the indicator of tau being greater than capital T. So I've divided it up, counting the two possibilities separately. Now the second piece, the integral from 0 to capital T of J dN_t. What was N? N of t starts at 0 for t less than tau and becomes 1 for t bigger than tau. Since J is a constant, this integral is just J times N of capital T. And by the way, all these derivations are posted in the notes, so don't worry if you can't follow every step. Can I move this board up? Not really -- so I'll do one more line and erase the top one. So we get to here, and there's one more step, which is to actually do the integration. We get log of S_T over S_0 in two cases. If tau is less than T -- default happened before capital T -- then N of capital T is 1, so it equals h tau times (1 minus e to the J), from the first integral, plus J. And if tau is bigger than T, the jump term is 0 and the drift integral is just a constant, so it's h times capital T times (1 minus e to the J), times the indicator of tau greater than or equal to T. Then we exponentiate both sides -- and use the magic of the blackboard, you can erase -- and S_T equals S_0 times the exponential of all this. That's what the exchange rate is going to be at time capital T. Now, what was I trying to do? Compute the expectation of S_T. For that, I have to integrate over the probability distribution of tau. Remember, tau is exponential: the density -- I'll write it here -- is phi(0, t) = h times e to the minus ht. So the expectation of S_T is the integral from 0 to infinity of S_T, viewed as a function of the default time tau, times phi(0, tau), d tau: for a given tau, I know the value of S_T, so I can do this integral. And I'll split it into two parts: first the integral from 0 to capital T, then from capital T to infinity. From 0 to capital T, the density is h times e to the minus h tau, and I plug in the expression for S_T in the case tau less than T -- that's the first term.
Let me write it out to make it easy. The first term is the integral from 0 to T of h e to the minus h tau, times e to the (h tau (1 minus e to the J) plus J), d tau. The second part is the integral from capital T to infinity, for tau bigger than capital T. There, the integrand doesn't depend on tau -- it's the constant e to the (h capital T (1 minus e to the J)) -- so it comes out as that constant times the probability that tau is bigger than capital T, which is just the survival probability we saw before, e to the minus hT. So the second piece is e to the (hT (1 minus e to the J)) times e to the minus hT. Now I can simplify this expression. Several exponents cancel, and I'm left with the integral from 0 to T of h e to the J times e to the (minus h e to the J tau) d tau, plus e to the (minus h e to the J capital T). Thinking of h e to the J as the constant multiplying tau, the first piece is a standard exponential integral, which gives 1 minus e to the (minus h e to the J T). That cancels against the second piece, and I'm left with 1. So the expectation of S_T over S_0 is just 1. All of this was to show you a little of how you work with jump processes and take expectations. It's nothing you haven't seen in terms of math -- just slightly different from Brownian motions, but the same idea: you have dN and you have a compensator term. So this proves that the drift I guessed at the start does in fact give the right expectation.
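The same conclusion can be checked numerically with the closed-form path value just derived -- a Monte Carlo sketch with assumed h and J:

import numpy as np

# Check that mu = h*(1 - exp(J)) makes E[S_T] = S_0:
# log(S_T/S_0) = mu*min(tau, T) + J*1{tau <= T}.
h, J, T, S0 = 0.02, -0.5, 5.0, 1.0       # assumed hazard, jump size, maturity
rng = np.random.default_rng(1)
tau = rng.exponential(1.0 / h, size=1_000_000)
mu = h * (1.0 - np.exp(J))
ST = S0 * np.exp(mu * np.minimum(tau, T) + J * (tau <= T))
print(ST.mean())                          # should be close to S0 = 1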
OK, so what have we done so far? We've defined dynamics for log of S with a jump on default, and we've got the density of the default time. Now we have to derive the dynamics of S itself, price the euro bond, and get hedge ratios. So, the log S dynamics -- I apologize again for the small font. We apply Ito's lemma: you know Ito's lemma for Brownian motion, and there's an equivalent one for Poisson processes as well. Ito's lemma is like the chain rule: if you know the process for log of S, how do you find the process for S itself? In this case, dS over S has the same drift -- h times (1 minus e to the J) times the indicator of t less than tau, dt -- plus a jump term, (e to the J minus 1) dN_t. That's the final result for S. How do we get there? Let me write Ito's lemma for jumps. It says: if dX_t = mu dt + J dN_t, and you have a function Y_t = f(X_t), then dY = (df/dx) mu dt + [f(X_t + J) minus f(X_t)] dN_t. That last bracket is the analog of the convexity term in your Brownian-motion Ito's lemma, but for jump processes: f(X_t + J) is the function's value if a jump happens, f(X_t) is the value before the jump, so the bracket is the effect of the jump on the function. I think of it as a convexity term -- I don't know what it's properly called; maybe the more mathematical minds here do. In our case, the function in the top equation is just the exponential, and when the argument jumps by J, the exponential picks up a factor of e to the J -- that's where the (e to the J minus 1) comes from. OK, so that's how you write the equation. Solving the SDE generally means writing down what S is explicitly; we have it on the board, so I won't rewrite it. I think we're running late, so let me hurry up a little and get to the next part, which is the pricing exercise. We have two bonds -- zero-coupon, zero-recovery bonds. One pays $1, the other pays one euro. How are we going to price them? We use our model: we have a model for the FX rate and a model for credit, and we price both bonds in dollars. The ratio of the prices then gives you the ratio of the notionals in your hedge portfolio, if you want to hedge one against the other. So, the dollar bond: I wrote here that the dollar bond price is e to the minus hT. Why? It's a zero-coupon, zero-recovery bond, so the payoff at time T is 1 if tau is bigger than T, and 0 if tau is less than T. Standard pricing theory says the price at time little t is the expectation of the payoff at time capital T, discounted by the money market account -- but the money market account in our case is just 1, because interest rates are 0. So the price is the expectation of the indicator of tau bigger than T, which is just the probability of tau bigger than T -- the survival probability, e to the minus hT. That's why the price of the dollar bond has to be e to the minus hT. The euro bond -- same idea, except we want its price in dollars. The payoff is the same, except it's in euros: expressed in dollars, it's not 1 and 0, it's 1 times S of T and 0 times S of T. So the expectation I need is not of the indicator alone, but of S of T times the indicator of tau bigger than T. The expectation of S of T alone would be S of 0, but here I only collect S of T in the cases where tau is bigger than capital T -- and that's not the same thing. It works out to e to the (minus h e to the J times T).
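In numbers (h, J, T assumed; S_0 = 1 and zero rates, as in the lecture), the euro bond prices off an effective hazard of h e^J, which is where the spread difference comes from:

import math

# Zero-coupon, zero-recovery bond prices in the jump-on-default model.
h, J, T = 0.02, -0.5, 5.0                  # assumed hazard, jump size, maturity
p_usd = math.exp(-h * T)                   # dollar bond price, in USD
p_eur = math.exp(-h * math.exp(J) * T)     # euro bond price in USD (S0 = 1)
print(p_usd, p_eur)
print("USD spread:", h, "EUR spread:", h * math.exp(J))

With J < 0 (the currency devalues on default), the euro spread h e^J sits below the dollar spread h -- the direction of the quanto basis discussed above.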
OK, so what can we do? We construct a portfolio at time 0: we sell one dollar bond and buy a certain amount of euro bonds, with the scaling factors chosen so that the portfolio value at time 0 is 0 -- selling the dollar bond brings in e to the minus hT, and the euro bond position costs exactly e to the minus hT to buy. Let me go back and write down the notionals, because we lost them. How many dollar bonds do we have? Minus 1. How many euro bonds? e to the (minus hT (1 minus e to the J)). OK, so some time delta t later, what happens to our bond prices? We know what they are; the only thing that changed is that some time passed, so instead of capital T we have T minus delta t to expiration. Those are the bond prices if we didn't default. If we defaulted, both bond prices are 0: we started with a portfolio worth 0, and after default we have a portfolio worth 0, so nothing changed. The key question is: if default did not happen, is the portfolio still worth 0 as well? That's what we want to check. If the value is the same in both the default case and the no-default case, then we have a replicating -- a hedged -- portfolio. So, the value of the bonds if default did not happen: we have the dollar bonds here, the euro bonds here, and the FX rate. Why did the FX rate move? Because default did not happen -- the jump did not happen -- but the compensator drift was still there, so FX drifted in the opposite direction. The dollar bond position is minus 1 bond times the price, so its value is minus e to the (minus h (T minus delta t)). What about the euro bonds? Here is the number of bonds we hold -- divided by S_0, by the way, but S_0 is 1 in our case, so it doesn't matter -- times the price of each bond, from the same formula, times the FX rate. And when you multiply all of this out, the dollar value of your euro bonds again equals the value of your dollar bonds. So we started with a portfolio worth 0, and some time delta t later it's worth 0 again, both in the case of default and in the case of no default. So there's no arbitrage. In some sense that's not terribly surprising, because we derived these prices from the assumption of no arbitrage, but it's a good check. It tells you: if I actually follow this model to hedge, I really am hedged -- not just when default occurs, or only when it doesn't, but in both situations. And you can't do that unless you have a hybrid model that mixes and matches -- that describes both the credit event and the FX process together. That's the usefulness. And the hedging strategy is interesting: the hedge ratio depends on the credit riskiness. How many euro bonds we buy depends on h, the default intensity, and also on J, the jump size.
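Here is a small numerical check of that delta-t statement (assumed parameters; zero recovery, so the default branch is trivially zero):

import math

# Verify the hedge: -1 dollar bond vs. n euro bonds, n fixed at time 0.
h, J, T, dt = 0.02, -0.5, 5.0, 1.0 / 252          # assumed parameters
n = math.exp(-h * T * (1.0 - math.exp(J)))        # euro-bond notional from above
S_dt = math.exp(h * (1.0 - math.exp(J)) * dt)     # spot after dt, no default
v_usd = -math.exp(-h * (T - dt))                  # short dollar bond
v_eur = n * math.exp(-h * math.exp(J) * (T - dt)) * S_dt
print(v_usd + v_eur)                              # ~0: hedged if no default
# On default, both zero-recovery bonds are worth 0, so the portfolio is 0 there too.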
So how many euro bonds you buy to hedge your dollar bonds depends on both the probability of default and the jump size -- that's what I mean by depending on the credit riskiness. It's also dynamic, in the sense that for a given amount of dollar bonds, the amount of euro bonds you need varies as FX moves and time goes forward. One day before expiration, the hedge ratio is going to be different than one year before expiration, so you have to rebalance your portfolio continuously. That's not unusual -- if you hedge an option, you also have to rebalance -- but it's different from a static replication strategy, where you say: I buy x euro bonds and y dollar bonds and never have to worry about it again. That's not the case here. You buy this ratio of bonds, and if default does not happen, you have to readjust the ratio: the original ratio took into account the probability of default happening, and if default did not happen, you now have extra information and have to readjust the ratio to reflect it. So, what happens if recovery is bigger than 0? And by the way, how much time do we have -- a quick check?
PROFESSOR: We have till 4 o'clock.
STEFAN ANDREEV: OK, so we have about 10 or 12 minutes. Good. So what happens if the recovery is bigger than 0? We can go through the same pricing exercise and see what happens to our bond prices. Let's do it for the dollar and euro bonds, just to give an example of the complexity that can arise when you start making the model more realistic -- because usually bonds do not have zero recovery. Before, the payoff of the zero-coupon, zero-recovery bond was 1 if default doesn't happen and 0 if it does. Now, the payoff of the dollar bond at time T is 1 if default did not happen -- if tau is bigger than T -- and R if tau is less than T. So when we price it, the price at time 0 is the expectation of the payoff at capital T: the expectation of the indicator of tau bigger than T, plus R times the indicator of tau less than T. The first piece is e to the minus hT. The second piece is R times the probability that tau is less than T, which is 1 minus e to the minus hT. Together that gives R plus e to the minus hT times (1 minus R). That's how you derive the dollar bond price. For the euro bond you do the same thing, except everything is multiplied by the FX rate -- and the tricky thing about the FX rate is that it jumps on default, so it's not the same number in the two states. In this case the payoff, per dollar unit, is 1 times S of T and R times S of T. So the price at time 0 of the euro bond, divided by S_0, equals the expectation of S of T times the indicator of tau bigger than T, plus R times the expectation of S of T times the indicator of tau less than T. The first part, S of T on the event tau bigger than T, is just like the zero-recovery bond price.
So to evaluate it properly, we have to go back to what S of T was. Let me write it again: S of T is S of little t times e to the (h tau (1 minus e to the J) plus J) if tau is less than T, and times e to the (hT (1 minus e to the J)) if tau is bigger than T. So if default has not occurred, S of T is S_0 times the second factor; if default has occurred, it's S_0 times the first factor. The two are the same except for the J part. OK, so for the first expectation, we're in the situation where default has not occurred, so the FX rate is S_0 times the second factor -- and we're dividing by S_0, so S_0 drops out: the expectation of e to the (hT (1 minus e to the J)) on the event tau bigger than T. For the second one, we have R times the expectation on the event tau less than T, where S of T takes the first form, e to the (h tau (1 minus e to the J) plus J). That e to the J is where the extra factor you see in the euro price comes from. So how do I take these expectations? Again, you integrate against the probability density of tau. In the first one, tau is bigger than T and the integrand is constant in tau, so the first term of P_0 over S_0 is e to the (hT (1 minus e to the J)) times the integral from capital T to infinity of the probability density, which is just e to the minus hT -- that looks like something we've already done in the previous calculation. The second term is R times the integral from 0 to T of e to the (h tau (1 minus e to the J)) times e to the J, times h e to the minus h tau, d tau -- the last factor being the probability density function. Again things cancel: the first term gives e to the (minus h e to the J T), and in the second term we're integrating h e to the J times e to the (minus h e to the J tau) d tau, which is again a standard exponential integral, giving minus (e to the (minus h e to the J T) minus 1). The reason for the minus sign in front is that the exponent is minus h e to the J times tau, so the antiderivative carries a minus sign. And the whole thing reduces to the expression on the board. So that's how we extend the problem to non-zero recoveries.
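Collecting the algebra, the two recovery-adjusted prices take a symmetric form -- my reading of the board result, with the euro bond priced off the effective hazard h e^J:

\[
P_{\mathrm{USD}}(0,T) \;=\; R + (1-R)\,e^{-hT},
\qquad
\frac{P_{\mathrm{EUR}}(0,T)}{S_0} \;=\; R + (1-R)\,e^{-h e^{J} T}.
\]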
What you could do for your final paper, if you decide to write one on this topic, is to extend the model one step further. In our model, the FX rate jumped but had no diffusive element: our SDE was d log S = mu dt + J dN_t. The next step would be: why don't we just add another term, plus sigma dW? Without the jump, that's just the standard log-normal process you know how to handle; now we add the jump. So you take a log-normal process, add a jump process to it, and repeat the same things we've gone through -- price the euro bonds and the dollar bonds, and come up with a replication strategy. This is, for example, a model we're currently working to implement at Morgan Stanley. Our model has non-zero, dynamic interest rates, which makes it a little more complex -- but not much; non-zero interest rates just add an extra drift term and don't really change the mathematics. The reason we want to do this is to be able to price contracts which are credit contingent -- meaning the payoff depends on whether a credit default has occurred or not -- with the payoff in units of, say, a foreign currency. A typical example would be a credit default swap on Brazil denominated in Brazilian reais. Common sense says that when Brazil defaults, the Brazilian real is not going to be worth very much -- it would devalue, just as we saw on the graph with the Argentinian peso. Now, Brazil is a very big economy, a strong country, and right now people are buying a lot of its bonds and investing in it. Still, it has credit risk, and you can trade that credit risk: you can trade credit default swaps in dollars, and you can also enter into contracts that quanto the credit risk into the Brazilian currency itself. To really price these -- well, we did it for many years without a jump model, but then your hedge ratios are not very good, and you cannot really explain the prices you see in the market. So we've implemented a version of this model, and we're building the infrastructure to really put it into production. As you can see, in this model your FX process depends on credit, so calibration and related things become a little more tricky -- which I don't want you to worry about for your final project. But I think it would be a very interesting exercise to take something like this and work out all the steps. It does get a bit more complicated, because when you apply Ito's lemma you now have terms both for the diffusive process and for the jump process -- but you've seen them both; they're in your class notes. If you're so inclined, you can do it. And once you solve the model, you can check your results: build a Monte Carlo simulation, run a bunch of paths where you simulate both the default and the diffusive part, and see whether the prices you arrived at analytically match the expectations computed by Monte Carlo. It's always a very good check -- usually we do this exercise to check that our Monte Carlo simulation is correct, because we know our math is right, but you can also use it the other way around.
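If you do try this, here is a minimal Monte Carlo sketch of the jump-diffusion extension -- my own illustrative setup, assuming zero rates, zero recovery, and a Brownian part independent of the default time -- checking the euro bond price against the e^(-h e^J T) formula, which the added diffusion should not change:

import numpy as np

# Extended model: d log S = (mu - sigma^2/2) dt + sigma dW + J dN, zero rates.
h, J, sigma, T, S0 = 0.02, -0.5, 0.10, 5.0, 1.0   # assumed parameters
mu = h * (1.0 - np.exp(J))                        # jump compensator
rng = np.random.default_rng(2)
n = 1_000_000
tau = rng.exponential(1.0 / h, n)                 # default times
W_T = rng.normal(0.0, np.sqrt(T), n)              # terminal Brownian increment
log_ST = (-0.5 * sigma**2 * T + sigma * W_T       # diffusion (martingale part)
          + mu * np.minimum(tau, T) + J * (tau <= T))
price_mc = (S0 * np.exp(log_ST) * (tau > T)).mean()   # euro bond price in USD
print(price_mc, np.exp(-h * np.exp(J) * T))            # MC vs. analytic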
OK, so in real life, as I mentioned a couple of times during the lecture, our models are more complicated: we have stochastic interest rates and stochastic hazard rates. Here we assumed that the hazard rate h is a fixed number, but h can be stochastic as well -- it can have its own distribution -- and typically that's what we use in our models: stochastic FX, where by stochastic I mean both jump and diffusion processes. And if you get really fancy, you can start putting in correlations between interest rates, FX, and hazard rates. In particular, having FX jump on default naturally introduces a correlation between credit and FX: when the credit event occurs, FX devalues, so clearly there's going to be a correlation. But there can also be correlation between the hazard rates themselves and FX -- another source of correlation. These correlations produce different effects in the market, so if you have enough data points, you can say: this model seems to describe the market better than that model. Both of them produce quanto effects, though. And whether we use analytic solutions or Monte Carlo -- two different approaches to pricing derivatives and computing risk -- depends on how complex your model is. For certain markets, you'd rather have a more complex model that is slower and requires Monte Carlo; in other places, you want faster, more tractable models that can price your derivatives analytically, even if they don't have as many features. So there's a whole range of models implemented for various markets at Morgan Stanley -- it's a big area of expertise for us. I think that's it. I ran a little over time -- five minutes -- I apologize.
PROFESSOR: Thank you very, very much. We'll thank our speaker first, I guess. [APPLAUSE] I think there's probably a question or two that people might have.
AUDIENCE: I was wondering if we could now answer which of the Italian bets was better.
STEFAN ANDREEV: Which what?
AUDIENCE: Which of the bets that we initially were considering on the Italian bonds was better? Could we answer that now? Because we haven't, I think.
STEFAN ANDREEV: Yeah, let's go back -- which of the Italian bonds was better? OK, let's try to answer that together, and we can answer it within our model; in reality, all kinds of other factors go into the price -- supply and demand, liquidity in euros, liquidity in dollars. Say you're deciding whether to invest in euros or in dollars. If I invest in dollars and a default happens -- say the recovery is zero -- I lose all my money in dollars. Same thing in euros: if I invest in euros and default occurs, I lose my euros. So how much do I lose in each case? If I invested in euros, you could say: if a default happens, my euros are not as valuable anyway, so what I lost was already worth less. Conversely, as we saw, because of the compensator drift -- if you have a jump that makes the currency devalue upon default, the currency will tend to appreciate if default does not happen. Because the expected value of the currency is pinned down by interest rate parity, the interest rate differential -- the first thing we talked about -- and that is an ironclad arbitrage condition that we have to follow.
So if the expected value of your FX is fixed by the interest rate differential, and you know that upon default your currency would devalue, that means that if default does not happen, the currency is going to appreciate, relatively speaking. Now, when we buy these bonds, we only get paid if default does not occur. So you would rather buy the bonds in the currency that is going to relatively appreciate. Suppose interest rates were zero in both currencies: you would rather buy the bond whose currency appreciates if default does not occur, because if default occurs you get nothing in either case -- but when you do get paid, you want to be paid in something that has appreciated rather than something that hasn't. Say the dollar doesn't move against other currencies when the default happens; then you'd rather hold the euro bonds.
AUDIENCE: If you want to estimate recovery, can you use outside factors -- not necessarily factors already in the model, but macroeconomic factors -- to predict the expected value of recovery?
STEFAN ANDREEV: Absolutely, yeah. Recovery is not something we can really price from market data, because usually all we have is bonds. From a bond price we can model the probability of default versus non-default; but if you introduce a second variable, the recovery, then you have both the default probability and the recovery amount as unknowns, and only the price as your data point -- so you have infinitely many solutions. So typically you fix the recovery at something. What do we use? For sovereigns we use 25%, and for corporates we use 40%. But everybody knows these numbers are really just conventions more than anything -- we don't really believe recovery is exactly 40% or 25%; it varies a lot by issuer. There are studies by credit rating agencies on the recoveries of various bonds, and the 25% for sovereigns is based on a study like that: it looked at sovereign recoveries over the last 50 years -- there aren't that many sovereign defaults in any one year, but over 50 years there are quite a few -- and found that some recover higher, some lower, but on average they recover about 25%. Remember Greece: how much did bondholders in Greece get for their bonds? They didn't really default, technically -- well, they did default technically, but it was a very managed process -- and they got definitely less than 25%; I think something on the order of 15 cents on the dollar. Same for the 2001 Argentinian default I mentioned -- Argentina is still being sued by creditors trying to get their money back, and it's a big thing in the news.
AUDIENCE: [INAUDIBLE] if you have a claim from Argentina and they fly over, it can be seized by [INAUDIBLE] funds.
STEFAN ANDREEV: Exactly. They tried to do some settlements. So how much did people recover? Well, it depends who you are. If you took the original deal, maybe you got 20, 25, 30 cents on the dollar. But if you held out, apparently you got a little bit more eventually.
So it's a little bit of a fuzzy concept. It's not something you solve for -- you usually just make an assumption about what it is.
AUDIENCE: And a related question: how would we estimate the other constants, like the hazard rate and J?
STEFAN ANDREEV: Once you fix the recovery rate, you can take the bond price, and because the bond price theoretically is e to the minus hT, you can estimate h from the bond price. So if you observe bond prices in the market, you say: I'm going to take some benchmark bonds whose prices I know, estimate h from each of them, and build a curve -- my hazard curve. Then I take another derivative or bond whose price I don't know, and use the same curve to price it. Essentially, what I'm saying by doing this is: I'm going to replicate my derivative using these benchmark bonds as much as I can. That's the assumption I'm making.
AUDIENCE: And how long [INAUDIBLE] if multiple currencies are involved -- if we are trying to trade with multiple different currencies, how does the whole model differ?
STEFAN ANDREEV: If multiple currencies are involved, it becomes tricky. You can say each currency devalues by some amount -- if a default happens, you can have more than one currency devaluing. And if you have more than two currencies, say three, there are other identities you have to take care of. With three currencies there is a triangle identity: the dollar-euro exchange rate times the euro-yen exchange rate has to equal the dollar-yen exchange rate. That's an arbitrage condition, just like the interest rate FX forward parity -- even stronger, in some sense. And so you can write down multiple processes and price things.
AUDIENCE: How much do these equations change when you add in bonds that pay coupons? And how do you factor in duration and all that?
STEFAN ANDREEV: It's not hard, really. Instead of a single payment, you just write down all the coupon payments and when they're paid, and then take the expectation of each. So it's really the same process, repeated for every coupon.
PROFESSOR: Why don't we end the formal class now. But if people have questions afterwards, we'll [INAUDIBLE].
STEFAN ANDREEV: Yeah, I'm certainly around to answer questions, if anybody wants.
PROFESSOR: Thank you very much.
STEFAN ANDREEV: Thank you.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
26_Introduction_to_Counterparty_Credit_Risk.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: For our last class, Yi is presenting from his home in New Jersey because of the snow -- he couldn't fly in. But actually, I'm learning a lot now; it may be a good way to run classes going forward. We may employ it next year. So Yi will present CVA modeling for about an hour, and then Jake, Peter, and myself will give concluding remarks. We'll be happy to answer any questions on the projects, or any questions whatsoever. All right? So Yi, please. Thank you.
YI TANG: OK, I'm here. Hi everyone. Sorry I couldn't make it in person because of the snow. I'm happy to have this opportunity to discuss counterparty credit risk with you, as a part of enterprise-level derivatives modeling. I run the Cross Asset Modeling Group at Morgan Stanley, and hopefully you will see why it's called Cross Asset Modeling. OK -- counterparty credit risk exists mainly in OTC derivatives. In an OTC derivative trade, sometimes you owe your counterparty money, and sometimes your counterparty owes you money. If your counterparty owes you money, then on the payment date the counterparty may default and not pay you the full amount it owes you. The default events include bankruptcy, failure to pay, and a few others. So obviously we have a default risk: if our counterparty defaults, we lose part of our receivable. However, the question is: before the counterparty defaults, do we have any other risks? Imagine a case where your counterparty will pay you in 10 years, so it doesn't need to pay you anything today. Are you concerned about counterparty risk or not? Well, the answer is yes -- as many of you probably know, it's the mark-to-market risk due to the likelihood of the counterparty's future default. If the counterparty's credit spread widens, the mark-to-market becomes lower, even though you do not yet need a payment from your counterparty: if you were to sell the derivative trade to someone, that someone would worry about the counterparty risk. This is similar, in terms of economics, to a corporate bond: you own a bond, and on a coupon payment date or on the principal date the issuer can default -- of course, it can default in between also. But in terms of terminology, that is not called counterparty risk; it's called issuer risk. So here comes the important concept of credit valuation adjustment. We know counterparty risk is a risk, and whenever there's a risk, we can put a price on that risk. Credit valuation adjustment, CVA, is essentially the price of counterparty credit risk -- mainly the mark-to-market risk, though of course it includes default risk too. It is an adjustment to the mark-to-market price from a counterparty-default-free model, such as the broker quote. As people know, a broker quote doesn't know about the counterparty risk, and a lot of our trade-level models do not know about the counterparty risk either, for reasons I will talk about in a minute. Therefore, there is a need for a separate CVA price, to be added to the mark-to-market from the counterparty-default-free model to get the true economic price.
In contrast, for a bond, typically there's no need for CVA, because the default risk is priced into the market already. And CVA not only has important mark-to-market implications, it is also part of Basel III capital. So it not only changes your valuation, it can impact your return on capital: because of CVA risk, the capital requirements are typically higher, so you may have a bigger denominator in your return on capital or return on equity. CVA risk, as you may know, has been a very important risk, especially since the crisis of 2008. During the crisis, a significant part of the financial losses actually came from CVA losses -- mark-to-market losses due to counterparties' future default -- and these losses turned out to be higher than the losses from actual counterparty defaults. Again, coming back to our question: how do we think about pricing a derivative and pricing the CVA together with it? First of all, there is a portfolio effect: a counterparty can have multiple trades with us, and the default loss, the default risk, can be different depending on the portfolio. When people use a trade-level derivatives model -- which by default is what people mean by a derivatives model -- you typically price one trade at a time and then aggregate the mark-to-markets to get a portfolio valuation. When you price one trade, you do not need to know that there may be another trade facing the same counterparty. But for CVA, for counterparty risk, this is not true -- we'll go over some examples soon. This is one application of what I call enterprise-level derivatives modeling, which focuses on the non-linear effects, the non-linear risks, in a derivatives portfolio. Here are a couple of examples that will hopefully help you gain some intuition on counterparty risk and CVA. Suppose you have an OTC derivatives trade, for instance an IR swap -- it could be a portfolio of trades, but let's keep it simple. Let's assume the trade PV was 0 on day one, and that we don't know anything about counterparty credit risk or CVA; this is just to show how CVA came to be recognized. So, to start with, the trade PV was 0 on day one -- which is true for a lot of swap trades -- and then the trade PV became $100 million later on. And then your counterparty defaults with 50% recovery, and you get paid $50 million of cash: $100 million times 50% recovery. If the counterparty hadn't defaulted, you would eventually have gotten $100 million; now it defaults, and you get half of it, $50 million. The question is: have you made $50 million, or have you lost $50 million, over the life of the trade? Anyone have any ideas? Can people raise their hands if you think you have made $50 million? Can I see the people in the class?
PROFESSOR: How do I raise this?
YI TANG: OK, no one thinks you made the $50 million. So I guess, did you all think you have lost $50 million? Can people raise their hand if you think you have lost $50 million? OK, I see people. Some people did not raise their hand -- does that mean you think you're flat? Or maybe you want to save your opinion for later? OK, so this is a common question I normally ask in my presentations, and I typically get two answers: some people think they've made $50 million, some think they've lost $50 million. And there was one case where someone said they're flat.
Now, this looks like an interesting new situation, where no one thinks you made $50 million. I mean, come on -- you have $50 million of cash in the door, you went from $0 on day one to $50 million, and no one thinks you've made money? All right. So for those of you who think you lost money -- can someone tell us why you think you lost $50 million, when you went from 0 to positive $50 million? Are we equipped to let people answer questions?
PROFESSOR: Yeah, I think if someone presses a button in front of them.
YI TANG: OK, so people choose not to voice their opinion?
AUDIENCE: It is because you have to pay to swap and you have to pay $100 million to someone on the other side of the trade?
YI TANG: OK, very good. So essentially you are saying hedging -- you have the swap and you have an offsetting swap as a hedge. Is that what you're trying to say?
AUDIENCE: No. I'm saying that if you're the intermediary for a swap, then you have to pay $100 million on the other end. So if you're receiving 50 and paying 100, you have a loss.
YI TANG: That's good. Right, so intermediary is right, and that's similar to a hedge situation too. That's basically the reason for a dealer: essentially, we are required to hedge -- we're very tight on limits -- so we actually would lose $50 million on the hedge side. When our trade went from 0 to positive $100 million, our hedge would have gone from 0 to negative $100 million. We receive only half of what we need to receive, and yet we have to pay the full amount we owe on the hedge side. Essentially, we lost $50 million. And that's where CVA, and CVA trading and risk management, come in. Again, CVA is the price of the counterparty credit risk. If the underlying trader -- whoever trades the swap -- hedges with the CVA desk, then theoretically they will be made whole on a counterparty default: you would receive $50 million from the counterparty, and theoretically $50 million from the CVA desk. Now, the second part is how we quantify CVA. How much is the CVA? The CVA on the receivable, which we typically charge to the counterparty, is approximately given by this formula: minus MPE -- mean positive exposure, meaning only the receivable side, when the counterparty owes us money -- times the counterparty CDS par spread, times the duration. The wider the spread, the more likely the counterparty is to default, and the more we need to charge for CVA. The same is true of the duration: the longer the duration of the trade, the more time there is for the counterparty to default, so we charge more. Very importantly, there's a negative sign, because the CVA on the receivable side is our liability -- it's what we charge our counterparty. Some theoretical articles omit the sign, which is OK for theoretical purposes, but in practice, if you miss the sign, things get very confusing.
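As a back-of-the-envelope example of that approximation (all numbers assumed for illustration):

# CVA ~ -(mean positive exposure) x (CDS par spread) x (risky duration).
mpe = 50e6        # mean positive exposure, USD (assumed)
spread = 0.02     # counterparty CDS par spread, 200 bp (assumed)
duration = 4.5    # risky duration in years (assumed)
cva = -mpe * spread * duration
print(f"CVA: {cva:,.0f} USD")   # negative: a liability we charge the counterparty for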
Now, here is a more accurate formula for the CVA on the receivable, the MPE or asset side. To start with, there's an indicator function, where capital T is the final maturity of the trade or counterparty portfolio, and tau is the counterparty's first default time. If tau is greater than capital T, the default happens after the counterparty portfolio matures, and therefore we have no counterparty risk -- that's what the indicator is about. If the counterparty defaults before maturity, that's when we have counterparty credit risk. Then there's the future valuation of the counterparty portfolio right before the counterparty default, and the amount of collateral we hold against the portfolio. The net receivable -- the amount by which the future value exceeds the collateral -- is our exposure, how much the counterparty would owe us. And 1 minus R, where R is the recovery rate, turns that exposure into the future loss given default. Beta is the money market account used for discounting, and the expectation is taken in the risk-neutral measure. It looks simple, but when you get into the details, it's actually very complex, because of the portfolio effect and this option-like payoff -- if you notice the positive-part sign here, you recognize this is like an option. So, some details of the non-linear portfolio effects. First of all, offsetting trades. In the previous example, you had one trade that went from 0 to $100 million; the counterparty defaults, you get paid $50 million, and essentially you lost $50 million. But what if you have another, offsetting trade facing the same counterparty? When the first trade went from 0 to $100 million, the offsetting trade went from 0 to negative $100 million, and if the counterparty were to default, you would have zero default loss. That's just one example of the portfolio effect from offsetting trades. Therefore, in order to price CVA, you have to know all the trades you have facing the same counterparty -- very different from a trade-level model, where you only need to know one trade at a time. There's also an asymmetry between the handling of receivables, meaning assets, and payables, meaning liabilities -- and that's where the option-like payoff comes from. Roughly speaking, if we have a receivable from our counterparty and the counterparty defaults, we receive only a fraction of it, so we incur a default loss. However, if we have a payable to our counterparty and the counterparty defaults, we more or less have to pay the full amount -- there is no default gain, per se. This asymmetry is the reason for the option-like payoff we just saw. And as you know, a counterparty can trade many derivative instruments across many asset classes -- interest rates, FX, credit, equities, often commodities, and sometimes mortgages. My group is responsible for modeling the underlying exposure for CVA, for capital, and for liquidity. Because multiple asset classes are involved, we need to model across assets -- and that's why we named our group Cross Asset Modeling. Furthermore, not only do we have an option-like payoff, which is non-linear, we essentially have an option on a basket of cross-asset derivative trades, which makes the modeling even more difficult. That's where the enterprise level comes in. The enterprise-level model, which we'll touch upon more later on, needs to leverage the trade-level derivative models, and therefore needs to do a lot of martingale-related work: martingale testing, resampling, interpolation.
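For reference, here is the receivable-side expectation described above, written out as I read it (V is the portfolio value, C the collateral held, R the recovery rate, beta the money market account; the tau-minus timing is my notational choice for "right before default"):

\[
\mathrm{CVA} \;=\; -\,\mathbb{E}\!\left[\mathbf{1}_{\{\tau \le T\}}\,(1-R)\,
\frac{\left(V_{\tau^-}-C_{\tau^-}\right)^{+}}{\beta_{\tau}}\right].
\]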
So here's a little bit more information on the CVA. We have talked about the asset, or MPE, CVA -- essentially for our assets or receivables; in this formula, that's the first one, which we have discussed already. There is also, theoretically, a liability CVA. Essentially, it is the CVA on the payable side, when the bank -- when we -- have some likelihood of default. And this is a benefit for us, all right? So the formula is fairly symmetric, as you can recognize, except the default time or default event is not for the counterparty but for us. OK? And the positive-part function here becomes a negative part, essentially to indicate this is a payable -- a liability -- for us. There is an interesting discussion here about first-to-default. We talked about how, if the counterparty were to default, we more or less pay the counterparty the full amount. The same argument can be used on the receivable side: if we have a receivable and we were to default first, roughly speaking the counterparty would pay us close to the full amount. And there, some people start to think, OK, when we price CVA, we've got to know, between the counterparty and ourselves, which one is first to default. But my argument is that we do not need to consider that. I have some references for you guys to take a look at if you are interested in this topic, but I'm not going to spend much time on it because we have lots to go over. Now, here's another example. You have a trade, same as the previous trade. The trade PV was 0 on day one, and the trade PV becomes $100 million later on. This time, of course, the counterparty risk and the market risks are properly hedged. Then the question is, do you have any other risks? Does anyone want to try to tell us -- do you see any other risks? There are actually several categories of risk we still have. I wonder if anyone would like to try to share your opinion with us. Sorry, I couldn't hear you. Yes? AUDIENCE: Some form of interest rate risk. YI TANG: Interest rate risk, OK, fine. This is a market risk. Yes, you're right, there is interest rate risk, but I did mention here that the market risks are properly hedged. So the interest rate risk of the trade will be handled by the hedge. What other risks? AUDIENCE: Is there key-man risk? So if the trader that made the trade leaves and doesn't know about the-- YI TANG: Ah, OK. AUDIENCE: --portfolio? YI TANG: That's a good point. Yeah, there is a risk like that. Any other risks? OK, let's go over this then. I claim there is a cash flow liquidity, or funding, risk. OK? Our trade is not collateralized. And I claim we need funding for uncollateralized derivative receivables -- meaning we are about to receive $100 million in the future, we don't have it now, and I claim we actually need to come up with cash for it in many cases, in most cases, though not for every trade. Does anyone have any idea why, when you are about to receive money, you actually need to come up with money? This comes back to the hedge argument, similar to CVA. Essentially, suppose you hedge your trade with futures or with another dealer, which are typically collateralized. That means when you are about to receive $100 million, you are about to pay $100 million on your hedge. In fact, if your hedge is in futures, which are marked to market, you need to actually, really come up with $100 million of cash. The same is true for collateralized trades. And that's where the risk is. Because when you need to come up with this money and you don't have it, what are you going to do? You may end up like Lehman.
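To pin down the two formulas described verbally, here is a hedged reconstruction in my own notation (tau_C the counterparty's default time, tau_B ours, R the recovery rates, C the collateral, beta the money market account); the sign convention follows the lecturer's practical convention as I read it, with the asset side a liability (negative) and the liability side a benefit (positive):

\[
\mathrm{CVA}_{\text{asset}}
=-\,\mathbb{E}\!\left[\mathbf{1}_{\{\tau_{C}\le T\}}\,
\frac{(1-R_{C})\,\max\!\bigl(V(\tau_{C})-C,\,0\bigr)}{\beta(\tau_{C})}\right]\le 0,
\]
\[
\mathrm{CVA}_{\text{liability}}
=-\,\mathbb{E}\!\left[\mathbf{1}_{\{\tau_{B}\le T\}}\,
\frac{(1-R_{B})\,\min\!\bigl(V(\tau_{B})-C,\,0\bigr)}{\beta(\tau_{B})}\right]\ge 0.
\]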
And the liquidity risk is also contingent, meaning how much liquidity risk you have depends on market conditions -- how much interest rates have changed, how much other market risk factors have changed. Depending on the market conditions, the liquidity need may be quite different, and you may not know it beforehand. So that's another challenge. [INAUDIBLE] If you turn the argument around and apply it to the payable side: if you have an uncollateralized payable, essentially you have a funding or liquidity benefit. So one interesting way to manage this liquidity risk is to use the funding benefits of uncollateralized payables to partially hedge the funding risk in uncollateralized derivatives receivables. There are a lot of other risks, for instance tail risk and equity capital risks. Now, here is one more example I'd like to go over with you guys on the application of CVA. This is about selling put options or put spreads. If you trade stocks yourself, you may have thought about this problem. Either you can buy the stock outright, or you can sell a put, possibly with a strike lower than the current price. With that, you more or less have a similar payoff. Some people may argue, OK, if you sell a put and the stock comes down, you're going to lose money -- but you're going to lose money if you hold the stock outright also. One drawback of this strategy is that if you sell a put and the stock is not put to you, you're not participating in the upside: when the stock price increases significantly, you are not going to capture that gain. But of course, one thing people can do is continue to sell puts, so it becomes like an income trade. So it's an interesting strategy. Some people say that selling a put is like naming your own price and getting paid for trying. And that's why we have this famous trade: Warren Buffett's Berkshire sold long-dated puts on four leading stock indices -- in the US, UK, Europe, and Japan -- and collected about $4 billion of premium without posting collateral. Without posting collateral -- that was very important. This is something I was actually very involved with in one of my previous jobs. It happened, I think, around 2005, 2006. It's one of the biggest trades. I was told, when I was involved with this, that it was one of the biggest cash outflows in derivatives trading at that time, because Warren Buffett collected the premium without posting collateral. If he had agreed to post collateral, then during the crisis of 2008 he would have had to post many billions of dollars of collateral. And one reason he had more cash than other people was that he was very careful [INAUDIBLE] and I think I put in a reference; if you are interested, you can essentially see the [INAUDIBLE] link [INAUDIBLE]. What's interesting is that there were quite a few dealers who were interested in this trade -- but you know the size, and a long-dated equity option is not easy to handle, though I think a lot of people were able to handle that part. To me, some people were not able to enter this trade not because they could not handle the equity risk, but because they could not handle the CVA compounded on top of it. First of all, we know there's a CVA. Essentially, we bought this option from Warren Buffett; eventually he may need to pay, and at that time he may default. So that's a regular CVA risk. But there is also a wrong-way risk, meaning a more severe risk. You can imagine that when the market sells off, Warren Buffett would actually owe us more money.
Do you think in that scenario he will be more likely to default or less likely to default? He'll be more likely to default. That's where the term wrong-way risk comes in: when your counterparty owes you more and more money, that's when he's more likely to default. And that's even harder to model. And there's a liquidity funding risk which can also be wrong-way, because as a dealer you may need to come up with a billion or two of cash to pay Berkshire. Where do you get the money from? Typically, people need to issue debt to fund it in an unsecured way, and essentially you pay quite a spread on your debt. That is essentially the cost of your liquidity or funding. So what we did was, essentially, we charged Warren Buffett CVA and wrong-way CVA, and charged for the funding costs, including some wrong-way funding costs. Another challenge, of course, is that some dealers, I suspect, could have priced the CVA, but they did not have a good CVA trading desk to deliver the risk management of CVA and funding. Once you have this position on hand, you have counterparty risks. But how do you hedge them? You charge Warren Buffett x million dollars for the CVA. If you don't do anything, when the spread widens, you're going to have a lot more CVA loss. So you need to risk manage that. Of course, you can do that with a hedge. But with any hedge, if we drill down into the details, you suffer a fair amount of gap risk. It's not like a bond. If you own a bond, you can buy CDS protection on the same bond; more or less, you are hedged for a while in a static way. But for CVA, it's not like that, because the exposure can change over time. One thing we tried at the time was essentially to structure something like a credit-linked-note type of trade. Essentially, you go to people who own or would like to buy Berkshire's bonds, and you tell them, OK, we have a credit asset similar to Berkshire's bond. If you feel comfortable owning Berkshire's bond, you may consider our asset, which pays a higher coupon. And the reason we were able to pay a higher coupon is that we were able to charge Berkshire a lot of money. There's also a tranched portfolio protection aspect involved, but I'm going to skip that for the sake of time. So then the question is: we charged Berkshire a lot of money -- why would they want to do this trade? What would they be thinking? Here's my guess. As you know, they have an insurance business, and they wanted to explore other ways to sell insurance. Selling puts is essentially selling insurance on the equity market. They sold puts of 10-, 15-year maturity with strikes below the spot levels. So people can think, OK, what's the likelihood of a stock index coming down below the current level in 10 or 15 years? Well, it happens, but it's not very likely. And they do have a day-one cash inflow. So essentially, I think one way Berkshire was thinking about it was as low-cost funding. If you read Warren Buffett's shareholder letter, he's essentially saying it's like a 1% interest rate on 10-year cash, or something like that. And it's very important that they manage their liquidity well: they do not have any cash outflow until the trades mature. That's how they avoided the cash flow drain during 2008, even though they did suffer unrealized mark-to-market losses. But what's interesting is that during 2008, 2009, Berkshire did explore the feasibility of posting collateral. This started with no collateral posting, but then they wanted to post collateral.
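A toy Monte Carlo illustration of wrong-way risk, under assumptions entirely of my own: the counterparty's default probability and our exposure are driven by the same market factor (a sell-off raises both), compared against an independent-default case with the same unconditional default rate:

import numpy as np

rng = np.random.default_rng(1)
n, R = 200_000, 0.4

m = rng.standard_normal(n)             # market factor; negative = sell-off
exposure = np.maximum(-10.0 * m, 0.0)  # we are owed more when the market falls

p_indep = 0.05                         # unconditional default probability
d_indep = rng.random(n) < p_indep

# Wrong-way case: conditional default probability rises as the market falls
# (logistic link), rescaled to keep the same 5% unconditional level.
p_wrong = 1.0 / (1.0 + np.exp(3.0 + 1.5 * m))
p_wrong *= p_indep / p_wrong.mean()
d_wrong = rng.random(n) < p_wrong

print("expected loss, independent:", (1 - R) * np.mean(d_indep * exposure))
print("expected loss, wrong-way  :", (1 - R) * np.mean(d_wrong * exposure))
# The wrong-way number is noticeably larger for the same default rate.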
They actually approached some of the dealers saying, oh, I want to post some collateral. Why is that? There's no free lunch. What happened was, they were smart not to post collateral, but during the crisis their spread widened. Everyone's spread widened, so Berkshire's spread widened. Then Warren Buffett owed more money. So guess what? The CVA hedging would require the dealers to buy more and more protection on Berkshire. And when you buy more and more protection on someone, that actually drives that entity's credit spread even wider. So Berkshire saw their credit spread widening a lot more than they had hoped for, than they had anticipated. And later on, they found out it was due to CVA hedging, CVA risk management. That actually affected their bond issuance. When you have a wide credit spread in the CDS market, the cash market may actually follow. Whoever would like to buy Berkshire's bonds would think twice: OK, if I buy this bond and I ever have to buy credit protection, it's going to cost me a lot more money because of the spread widening. So therefore, I'm going to demand a higher coupon on Berkshire's bonds -- and that drives their funding cost higher. So they explored different ways to post collateral. Another very interesting question, of course: Berkshire thinks they're making money, and the dealer thinks they're making money, which is probably true. But then the question is, who is losing money, or who will lose money? Does anyone have any ideas? I think there are probably a lot of answers to this. My view is that it's essentially whoever needs to hedge -- whoever needs to buy puts. If the market doesn't decline as much as you feared, essentially you pay the put premium and do not get the benefit. Here's an interesting CVA conundrum. Hopefully by this time you guys fully appreciate the CVA risks and the impact of CVA. In terms of magnitude, as I mentioned earlier, during the 2008 crisis CVA easily caused billions of dollars of losses for some of the firms -- more than the actual default losses. Now, given that you know the CVA, if you trade with counterparty A, naturally you'll think, OK, I want to buy protection to hedge my CVA risk -- to buy credit protection on A from counterparty B. But if you trade with counterparty B, you have CVA against counterparty B; you have a credit risk against counterparty B. So what are you going to do? If you just follow the simple thinking, you may think, OK, maybe I should buy credit protection on B from counterparty C. But if you were to do that, then you have to continue on, and it becomes an infinite series. An infinite series is OK, I'll say, theoretically, but in practice I feel it's going to be very challenging to handle. So what would be a simple strategy to terminate this infinite series quickly? This also has theoretical implications for CVA pricing. Sometimes we say, OK, arbitrage pricing is really replication using hedge instruments. Now you have to use an infinite number of hedge instruments -- that's going to impact your replication modeling. So the way we would do it practically is to buy credit protection on A from counterparty B fully collateralized, typically from a dealer. So however much money counterparty B owes you, they post collateral right away. In a way, it's more or less similar to futures-style daily settlement.
That minimizes the counterparty risk [INAUDIBLE], so you can cut off this infinite series easily. Here, I'd like to touch upon what I call enterprise-level derivatives modeling. We mentioned trade-level derivatives models -- that is essentially just the regular model. When people talk about a derivatives model, usually they are talking about trade-level models. Essentially, you model each trade independently: you model its price or mark-to-market and its Greeks, the sensitivities. Then, when you have a portfolio of these trades, you can just aggregate their PVs and their Greeks through linear aggregation, and you get the PV of the portfolio. But as we have seen already, that doesn't capture the complete picture. There are additional risks that require further modeling. One is non-linear portfolio risks: risks that cannot be obtained by a linear aggregation of the risks of each of the component trades in the portfolio. The example we have gone through is CVA; funding is of a similar nature, and capital and liquidity are also examples. The key to handling this situation is to be able to model all the trades and the market risk factors of a portfolio consistently, so that you can handle the offsetting trades properly. Of course, we need to leverage the trade-level models to price each individual trade as of today and as of future dates. What's interesting is that there's also feedback to the trade-level models. For instance, consider pricing a cancellable swap, a very popular trade. Suppose we trade this cancellable swap, uncollateralized, with a counterparty that's close to default. The trade-level model doesn't know anything about this counterparty or its default risk. The trade-level model will give you the exercise boundary -- when you should cancel the swap -- independent of the counterparty's credit quality. That invites a question: when the counterparty is close to default, even if your model says you should not exercise based on the market conditions, shouldn't we consider the counterparty's credit condition? If the counterparty is close to default and you cancel the swap sooner, you eliminate or reduce the counterparty risk. This is actually an interesting application of, and feedback between, trade-level models and enterprise-level models. So what we did in some of my previous jobs was to actually figure out the counterparty risk in these trades, the major trades. Then we just tell the underlying trader: if you were to cancel this trade, we have a benefit, because we're going to reduce the CVA or even zero out the CVA. So the CVA trader would be able to pay the underlying trader. Therefore, the underlying model can take this as an input, rather than as part of the exercise condition modeling, knowing that if you cancel earlier you can potentially get additional benefits. Such a model may eventually be able to handle the risks more properly: market risks together with counterparty risks. This is roughly the scope and the application of the enterprise-level model. It is actually a fairly significant modeling effort, as well as a significant infrastructure and data effort. Essentially, it requires a fair amount of martingale testing, martingale resampling, martingale interpolation, and martingale modeling.
The reason for that is, you have a trade model, and each trade model can model a particular trade accurately, with certain market modeling -- simulations of the underlying market, or a PDE grid. But when you put a portfolio of trades together, the methodology you used for modeling one trade accurately may not necessarily be the methodology you need to model all the trades accurately. Some of these require PDEs and some require simulations, but you need to put them together. Typically, we use simulation, and that introduces numerical inaccuracies. Martingale testing will tell us whether we are introducing a lot of errors, and martingale resampling essentially allows us to correct the errors. As you know, the martingale is the foundation of arbitrage pricing. Martingale resampling is able to enforce the martingale conditions in the numerical procedure, not only theoretically. Martingale interpolation and martingale modeling are other important and interesting aspects; if we have time, we can [INAUDIBLE] There are different approaches for how to do this in a systematic way. I'd like to quickly go over some examples of martingales and martingale measures. I may need to go through this quite quickly due to the time limitations, but hopefully you guys have learned all of this already, so this will be more like a review for you. Essentially, we are talking about a few examples: what is the martingale measure for the forward price, the forward LIBOR, the forward swap rate, the forward FX rate, and the forward CDS par coupon? I would hope you guys know the first few already. The forward CDS par coupon, in my view, is actually fairly challenging. For simplicity, I'm not considering the collateral discounting explicitly; that adds additional challenges, but we can still address it. So under the risk-neutral measure, for this Y(t) being the price of a traded asset with no intermediate cash flows, Y(t) over beta(t) is a martingale. This is essentially the Harrison-Pliska martingale no-arbitrage theorem. It says that for two traded assets with no intermediate cash flows, satisfying technical conditions, the ratio is a martingale under the probability measure corresponding to the numeraire asset. Therefore, naturally, we have this as a consequence. The forward arbitrage-free measure corresponds to a zero-coupon bond numeraire. Naturally, we find that the ratio of Y(t) and P(t, T) is a martingale -- again, it's just a ratio of two traded assets with no intermediate cash flows. From the definition of the forward price, the forward price is a martingale under the forward measure. Forward LIBOR -- this is the forward LIBOR -- is essentially built from a ratio of two zero-coupon bonds. So naturally, we know it's a martingale under the measure of the corresponding numeraire asset: the forward measure up to the payment date of the forward LIBOR. So this is the martingale condition. Similarly, we can make this argument for the forward swap rate. For the forward swap rate, we can start with an annuity numeraire. The forward swap rate, as you essentially know, is the difference of two zero-coupon bonds divided by the annuity. Therefore we can conclude, based on the Harrison-Pliska theorem, that the forward swap rate is a martingale under the annuity measure, with this annuity as the numeraire. The same argument goes for the forward FX rate.
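To pin down the conditions described verbally, here is a reconstruction in standard notation -- my symbols, not the slides': Y a traded asset with no intermediate cash flows, beta the money market account, P(t,T) the zero-coupon bond, delta the accrual fraction, A(t) the annuity:

\[
\frac{Y(t)}{\beta(t)}=\mathbb{E}^{Q}\!\left[\frac{Y(T)}{\beta(T)}\,\middle|\,\mathcal{F}_t\right],
\qquad
\frac{Y(t)}{P(t,T)}=\mathbb{E}^{Q^{T}}\!\left[\,Y(T)\,\middle|\,\mathcal{F}_t\right],
\]
\[
L(t;T_1,T_2)=\frac{1}{\delta}\!\left(\frac{P(t,T_1)}{P(t,T_2)}-1\right)
\ \text{is a }Q^{T_2}\text{-martingale},
\qquad
S(t)=\frac{P(t,T_0)-P(t,T_n)}{A(t)},\quad
A(t)=\sum_{i=1}^{n}\delta_i\,P(t,T_i),
\]

with S(t), the forward swap rate, a martingale under the annuity measure whose numeraire is A(t).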
Mainly, the idea -- the pattern you have probably seen -- is that for any quantity, you see if you can find two traded assets and then use the ratio of these two assets to represent the quantity. So the forward FX rate is essentially a ratio like this. This is nothing more than interest rate parity: you start with the spot, you grow the domestic currency account and the foreign currency account, and you get the FX forward. And the FX forward is a martingale under the domestic forward measure. Here is a simple technique for doing a change of probability measure -- it's roughly how I remember the change of probability measure and Radon-Nikodym derivatives. You start with, again, a martingale: assume this is a martingale under a particular measure corresponding to a numeraire asset. And then this quantity is also a martingale under a different measure corresponding to a different numeraire asset. One key point is that when you change the probability measure, you change the numeraire corresponding to that probability measure. The important thing is that we know the PV, or the mark-to-market, of a traded security is measure-independent. It doesn't matter what mathematics you use -- the traded security is going to match the market price. Therefore, you can price this security under one measure, with one numeraire, and then price it again under another measure, with another numeraire, and they've got to be the same. Then, naturally, you use this simple equation as the starting point to do the change of measure. If you simply change the variables, you get your change of measure as well as the Radon-Nikodym derivative. And if you have worked on the BGM model, you'll probably recognize this change of measure, which is used for the BGM model. Now here's the subtlety: credit derivatives. Naturally, people would think, OK, since the forward swap rate is a martingale under the annuity measure, then the forward CDS par rate is like a forward swap rate -- it's got to be a martingale under the risky annuity measure. That's quite intuitive, except there's one problem: if the reference credit entity has zero recovery upon default, then this risky annuity could be 0. And now we're talking about using something that could be 0 as our numeraire. How do we resolve this technical, mathematical problem? That's actually very interesting. Schönbucher was the first person to publish a paper on this model. I was trying to do some work myself when I was working on the BGM model; I thought, oh, it would be nice to extend the BGM model to credit derivatives. But I immediately stumbled on this difficulty: when the recovery is 0, you're going to have a 0 in your numeraire, in your risky annuity. So Schönbucher's idea, essentially, was to focus on survival measures. We have a difficulty if a default happens and the recovery is 0, so his idea was: let's forget about that state, let's not worry about it. One immediate question people will ask is, if that's the case, then the probability measure -- the physical probability measure or the risk-neutral probability measure -- and this survival probability measure are not equivalent, because the survival probability measure knows nothing about the default event. So they are not equivalent. Essentially, you have transformed one mathematical difficulty into another one. Luckily, the second one turns out to be easier to solve.
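The change-of-numeraire identity being described can be written out as follows -- again my notation, with V a traded security and M, N two numeraire assets: pricing V under either measure must give the same PV,

\[
\frac{V(t)}{M(t)}=\mathbb{E}^{Q^{M}}\!\left[\frac{V(T)}{M(T)}\,\middle|\,\mathcal{F}_t\right]
\quad\text{and}\quad
\frac{V(t)}{N(t)}=\mathbb{E}^{Q^{N}}\!\left[\frac{V(T)}{N(T)}\,\middle|\,\mathcal{F}_t\right],
\]

and equating the two prices yields the Radon-Nikodym derivative

\[
\left.\frac{dQ^{N}}{dQ^{M}}\right|_{\mathcal{F}_T}
=\frac{N(T)/N(t)}{M(T)/M(t)}.
\]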
So the starting point is, again, the Harrison-Pliska theorem. Essentially, you just need to identify a numeraire asset and a numerator asset -- you identify two assets, you form their ratio, and that is a martingale. So essentially this is the forward swap rate and the forward annuity. If we have this indicator -- the default time of the j-th credit name being greater than t -- this is essentially like the premium leg of a CDS. That's a traded asset. So therefore, we have a martingale [INAUDIBLE] like this. The subtlety, as you can probably envision, is going to come in when we do the change of probability measures. OK, so we have talked about how we are going to find the martingale measure of a CDS par coupon, or forward CDS par rate. This is the starting point of a martingale model. Essentially, for any quantity you want to model, you try to find its martingale measure. Once you find the martingale measure, you can do a martingale representation. And then often you need to do a change of probability measure, so that all the term structure quantities -- the collection of variables -- are modeled under one consistent probability measure. So finding the martingale measure is the first point. For the survival probability measure, essentially, he just defined it this way: you can define this Radon-Nikodym derivative. Once you define that -- if you remember the previous formula -- you will have a martingale condition like this. [INAUDIBLE] The probability measures are not equivalent anymore, and yet you can still do a change of probability measure. You need to separately model what is going to happen when the default happens, if you want to use this model. Now, I'd like to move on to the second part: martingale testing, martingale resampling, and interpolation. Martingale testing: given the martingale conditions from the formulas we modeled previously -- those are, by the way, just examples; there are a lot more -- you know what should hold theoretically, so you just test your numerical implementation and see whether those conditions are satisfied. That's the martingale test. Martingale resampling: we know that most likely, if you were to test, you're going to fail. This is not just for enterprise-level models, but even for trade-level derivatives models -- a lot of the time, the martingale conditions are not exactly satisfied. So one thing to do is to correct this error. The rationale is that, because of numerical approximations, whatever quantity we model is not the true quantity; the true quantity is some function of what we have in our model. So you posit a certain functional form -- sometimes linear, sometimes log-linear. This X_0 is what we have in our model, and X is what we need in order to satisfy the martingale condition. In this particular case, it's very simple: first of all you adjust the mean, and then you can adjust the deviation. So, given any quantity X_0, you can force it to have any given mean, and this mean, in our case, will be determined by the martingale condition. The next interesting thing is martingale interpolation. Oh, I have a typo here. Sometimes you have an interest rate model -- for instance, you model LIBOR, and your LIBOR can have different tenors. When you have a yield curve, at any given time there's a term structure. In the model, a lot of the time we model only a few selected points.
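A minimal sketch of mean-type martingale resampling as just described: shift, or rescale, the simulated samples X_0 so the sample mean hits the martingale target exactly. The lognormal forward LIBOR here is a toy assumption of mine, not the lecture's model:

import numpy as np

rng = np.random.default_rng(2)
f0, sigma, t, n = 0.03, 0.25, 5.0, 20_000

# Simulate a forward LIBOR, lognormal under its own forward measure,
# so the martingale target for the mean is f0.
z = rng.standard_normal(n)
x0 = f0 * np.exp(sigma * np.sqrt(t) * z - 0.5 * sigma**2 * t)

print("martingale test: sample mean =", x0.mean(), " target =", f0)

# Additive resampling: enforce E[X] = f0 exactly by shifting.
x_add = x0 + (f0 - x0.mean())

# Multiplicative (log-linear) resampling: also preserves positivity of rates.
x_mult = x0 * f0 / x0.mean()

print("after resampling:", x_add.mean(), x_mult.mean())   # both exactly f0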
But what if your model requires a term of the term structure that is not in your model? What people normally do is interpolation. So you have a 1-year LIBOR and you have a 5-year LIBOR, and you need a 3-year. What do you do? You interpolate, for instance. But interpolation doesn't automatically guarantee the martingale relationships. Martingale interpolation has the goal of automatically satisfying the martingale relationships, so we are particular about how we interpolate. Actually, it turns out to be [INAUDIBLE] The starting point is the martingale condition that I wrote out on the slide. Essentially, this s and t are the calendar times, and this capital T is really the term structure: you have a 1-year rate, a 2-year rate, a 5-year interest rate -- those term structure points. How do we interpolate such that, after interpolation, the corresponding martingale relationships are satisfied? Here's what we do. We start with, let's say, capital T_1. Capital T_1 is a point we model; we assume it is properly martingale resampled and satisfies the martingale condition. The same for capital T_2 -- that also satisfies its corresponding martingale condition. Our goal is to figure out T_3: how do you interpolate for the term T_3 such that T_3 satisfies its own corresponding martingale condition? If you do simple linear interpolation using T as the independent variable, you are not going to achieve that. So the key is that we need to choose the proper independent variable for the interpolation. Essentially, it's the previous-time, or time-0, quantity. Time s is before time t -- imagine time s is 0. Using the corresponding quantities at time 0 as the independent variable, you can achieve this. It's still linear interpolation; it just uses a different independent variable. And you can show that very easily -- it's just simple algebra. If you take the expectation, this one being a martingale, this little t becomes s; if you take the expectation here, the little t also becomes little s. If you combine these two, a lot of terms cancel, and you are left with the martingale at time s for term T_3 -- meaning this is the martingale target of this particular term, and it turns out to equal the expectation of this quantity. So it's very simple algebra; you guys can work it out if you want to. This guarantees that the interpolated quantity automatically satisfies the conditions of the martingale target. Of course, you need to know the martingale target. If you don't know it, that's a different story -- then you need to do something else. Specifically, at time 0, for instance, the target is what the market tells us. Often we make an assumption at time 0, and whatever assumption you make at time 0, in your dynamic model you then automatically satisfy the needed martingale condition. This is just a brief introduction to how we do the martingale modeling. The LIBOR market model, as you guys have probably learned already, comes in different forms: BGM was the initial form, and then Jamshidian came up with another form. In terms of a general martingale model, what we typically do is start by finding the martingale quantity. We know a forward LIBOR is a martingale in its own forward measure. Then we know we can use a martingale representation: under certain technical conditions, the diffusion process can be represented by a Brownian motion.
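A small numerical sketch of the interpolation trick just described: interpolate linearly, but using the time-0 values (the martingale targets) as the independent variable, so the interpolated term hits its own target automatically. The two modeled terms below are toy lognormal martingales of my own devising:

import numpy as np

rng = np.random.default_rng(3)
n, t = 50_000, 2.0
targets = {"T1": 0.020, "T2": 0.040, "T3": 0.030}  # time-0 values per term

def lognormal_martingale(x0, sigma):
    z = rng.standard_normal(n)
    return x0 * np.exp(sigma * np.sqrt(t) * z - 0.5 * sigma**2 * t)

m1 = lognormal_martingale(targets["T1"], 0.30)
m2 = lognormal_martingale(targets["T2"], 0.20)
m1 *= targets["T1"] / m1.mean()    # enforce the martingale condition exactly,
m2 *= targets["T2"] / m2.mean()    # as the lecture assumes for T1 and T2

# Weight from the time-0 quantities, not from the terms T themselves.
w = (targets["T3"] - targets["T1"]) / (targets["T2"] - targets["T1"])
m3 = (1 - w) * m1 + w * m2

print("E[M3(t)] =", m3.mean(), " target =", targets["T3"])   # matches exactly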
Then we can assume log-normal dynamics, just as an example. We don't have to -- we can use CEV, we can use [INAUDIBLE] stochastic volatility. The starting point is the martingale: identifying the martingale measure and then performing the martingale representation. Essentially, you get this stochastic differential equation. Then you need to change measure, or change numeraire, because this says that each particular LIBOR has its own Brownian motion under a different measure, which by itself has limited use. A lot of derivative trades -- IR trades essentially -- [INAUDIBLE] the entire yield curve. So you need to make sure you model the entire yield curve consistently, and therefore you have to change the probability measure so that everything is specified in the same common measure. Of course, you have a choice of which one you want to use as the common measure. Through a simple change of numeraire, we get a stochastic differential equation like this: we have a Brownian motion -- right, we have a Brownian motion -- with a correlation like this. So this is essentially a market model in a general form, with full dimensionality, meaning one Brownian motion per term of the LIBOR curve. That's the full dimensionality. PROFESSOR: Yi? YI TANG: And then you need to do-- Yeah, hi. PROFESSOR: Can you wrap up? Because we need a bit of time for questions. YI TANG: Oh, you need me to end. All right, I can wrap up now, if you want. PROFESSOR: Sure. OK, any conclusions? YI TANG: Well, OK. The conclusion is the following. There is a need for enterprise-level models to handle non-linear portfolio effects, and we need to leverage our trade-level models. In doing so, we employ martingale testing, martingale resampling, and interpolation. Not only do we need that for CVA, we also need it for funding, liquidity, and capital risks, which are very critical risks, and people have been paying more and more attention to these risks, especially since the crisis. Because of the time limit, I'm not going to be able to finish the other example, but if you like, you can take a look at page 22 of the slides. Hopefully, Vasily can still get them to you guys. Thank you, guys. PROFESSOR: Thank you, Yi. We will publish the slides, probably later tonight, so please take a look. So to wrap up, let me see. I want to bring up-- PROFESSOR 2: That's OK. Probably if it's on the course website, that's fine. PROFESSOR: I did add a few topics which were used last year for the final paper, for your interest, in the document which is on the website. So take a look. Basically, the themes there were mostly Black-Scholes, or more advanced models, or manipulations of the Black-Scholes equation. There was a very interesting work on statistical analysis of commodity data, so if somebody's up for it, that would be very interesting. And there were a few numerical and Monte Carlo projects. So, any questions? PROFESSOR 2: Yeah, so actually we were planning to give you a bit more time to ask your questions. But since we have five minutes, I'd like to ask you to just think about what we learned this term. Peter can add on what we think about the mathematics and also the applications. And in conjunction, while you're doing the final paper, focus on the new things you think you learned and what you'd like to explore in the next stage of your research. So I think we probably don't have a lot of time for lots of questions, but if you have any, this would be a good opportunity to ask about the paper or the course. Peter, do you want to make some comments?
PETER: Sure. I'd just like to say that I think this course was a very challenging course for most of you, and that was, I guess, our intention. And I really respect all the hard work and effort everyone put into the class. In terms of the final paper, we will be looking at your background and looking for insights that demonstrate what you've learned in the course. I've already reviewed several papers, and I'm very pleased with the results, so I think everyone's done a great job. This course, I think, is intended to provide you with the foundations of the mathematics for the financial applications, as well as an excellent introduction and exposure to those applications. I think you'll find this course valuable over the course of your careers, and we look forward to contributing insights on questions you might have following the course. I'm sure the other faculty feel the same way: we want to be a good resource for you, now and afterwards. PROFESSOR: Very, very well put. So please feel free to contact us, and please stay in touch. All the contact details are on the website. We plan to have a repeat of this class next year, so please tell your friends, or stop by next year -- that will be next fall. It will not be exactly the same; we will try to make it slightly different, but it will be close. PROFESSOR 2: If you have any suggested topics -- things you feel you haven't been exposed to and would like to know more about -- send us an email if you can. I think one of the values of this class is that we can bring in pretty much anyone from the frontier of this industry to give you some insight into what's going on. PROFESSOR: Please fill out the review on the website -- this is important. And that's all. PROFESSOR 2: OK. Thank you for your participation this semester. [APPLAUSE] PROFESSOR: And let's thank Yi -- it was a pleasure.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
15_Factor_Modeling.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Today's topic is factor modeling, and the subject here basically exploits multivariate analysis in statistics for financial markets, where our concern is using factors to model returns and variances, covariances, and correlations. With these models, there are two basic cases. There's one where the factors are observable: those can be macroeconomic factors, or they can be fundamental factors of assets or securities that might explain returns and covariances. The second class of models is where these factors are hidden, or latent, and statistical factor models are then used to specify those models. In particular, there are two methodologies: factor analysis and principal components analysis, which we'll get into in some detail during the lecture. So let's proceed to talk about the setup for a linear factor model. We have m assets, or instruments, or indexes whose values correspond to a multivariate stochastic process we're modeling, and we have n time periods t. With the factor model, we model the t-th value for the i-th object -- whether it's a stock price, futures price, or currency -- as a linear function of factors f_1 through f_k. So it's basically like a state-space model for the value of the stochastic process as it depends on these underlying factors. And the dependence is given by coefficients beta_1 through beta_k, which depend on i, the asset. So we allow each asset -- say, if we're thinking of stocks -- to depend on the factors in different ways. If a certain underlying factor changes in value, the beta corresponds to the impact of that underlying factor. Now, these common factors f -- this is really going to be a nice model if the number of factors that we're using is relatively small. So the number k of common factors is generally very, very small relative to m. And if you think about modeling, say, equity asset returns in a market, there can be hundreds and thousands of securities. In terms of modeling those returns and covariances, what we're trying to do is characterize them in terms of a modest number of underlying factors, which simplifies the problem greatly. The vectors beta_i are termed the factor loadings of an asset, and the epsilon_(i,t)'s are the specific factor of asset i in period t. So in factor modeling, we talk about common factors affecting the dynamics of the system, and the factors associated with particular cases are the specific factors. This setup is really very familiar -- it just looks like a standard regression-type model that we've seen before. So let's see how this can be set up as a set of cross-sectional regressions. Now we're going to fix the period t, the time t, and consider the m-variate variable x_t as satisfying a regression model with intercept given by the alphas. The independent variables matrix is B, given by the coefficients of the factor loadings, and then we have the residuals epsilon_t for the m assets. The cross-sectional terminology means we're fixing time and looking across all the assets at one fixed time, and we're trying to explain how, say, the returns of assets vary depending upon the underlying factors. And so the-- well, OK, what's random in this process?
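As a small simulation sketch of the linear factor model just set up, x_t = alpha + B f_t + epsilon_t, with m assets and k << m common factors -- all numbers here are illustrative assumptions, not estimates from data:

import numpy as np

rng = np.random.default_rng(4)
m, k, T = 8, 2, 1000

alpha = rng.normal(0.0, 0.01, m)      # asset intercepts
B = rng.normal(0.0, 1.0, (m, k))      # factor loadings, one row per asset
psi = rng.uniform(0.01, 0.05, m)      # specific variances (diagonal of Psi)

f = rng.multivariate_normal(np.zeros(k), np.diag([0.04, 0.01]), T)  # factors
eps = rng.normal(0.0, np.sqrt(psi), (T, m))                         # specifics

x = alpha + f @ B.T + eps             # T x m panel of asset values
print(x.shape)                        # (1000, 8)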
Well, certainly the residual term is considered to be random. That's basically going to be assumed to be white noise with mean 0, with possibly a covariance matrix psi, and it's going to be uncorrelated across different time cross-sections. Let's see if I can move the mouse -- if this is what's causing the problem down here. So in this model, the realizations of the underlying factors are random variables. The returns on the assets depend on the underlying factors, which are assumed to have some mean, mu_f, and some covariance matrix; the dimension of that covariance matrix omega_f is going to be k by k. So in terms of modeling this problem, we go from an m by m system of covariances and correlations to focusing initially on a k by k system of covariances and correlations between the underlying factors. Psi is a diagonal matrix with the specific variances of the underlying assets. So the covariance matrix of the epsilons is a diagonal matrix, and the covariance matrix of f is given by this omega_f. Well, with those specifications, we can get the covariance for the overall vector of the m-variate stochastic process, and we have this model here for the conditional moments. Basically, the conditional expectation of the process given the underlying factors is this linear model in terms of the underlying factors f, and the conditional covariance matrix is the psi matrix, which is diagonal. The unconditional moments are obtained by taking expectations of these: the unconditional expectation of x is this, and the unconditional covariance of x is equal to the expectation of the conditional covariance plus the covariance of the conditional expectation. So one of the formulas that's important to realize here is that if we're considering the covariance of x_t, which is the covariance of B f_t plus epsilon_t, that's equal to the covariance of B f_t plus the covariance of epsilon_t plus twice the covariance between this term and this -- but those are uncorrelated. And so this is equal to B times the covariance of f_t times B transpose, plus psi. With m assets, how many parameters are in the covariance matrix if there are no constraints on the covariance matrix? AUDIENCE: [INAUDIBLE]. PROFESSOR: How many parameters? Right. So this is sigma -- the number of parameters in sigma. AUDIENCE: [INAUDIBLE]. PROFESSOR: m plus what? AUDIENCE: [INAUDIBLE]. PROFESSOR: OK, this is a square matrix, m by m, so there are possibly m squared parameters, but it's symmetric, so we're double-counting off the diagonal. So it's m times (m plus 1) over 2. How many parameters do we have with the factor model? So if we think of -- let's call this sigma star -- the number of parameters in sigma star is what? Well, B is an m by k matrix, so we have possibly m times k values. The covariance of f_t contributes the number of elements in the covariance matrix of f, which is k by k. And then we have psi, which is a diagonal matrix of dimension m. So depending on how we structure things, we can have many, many fewer parameters in this factor model than in the unconstrained case. And we're going to see that we can actually reduce the number in the covariance matrix of f dramatically, because of the flexibility in choosing those factors. Well, let's also look at the interpretation of the factor model as a series of time series regressions.
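A quick self-contained check of that covariance identity, Sigma = B Omega_f B' + Psi, and of the parameter counting just discussed -- again with made-up dimensions and values:

import numpy as np

rng = np.random.default_rng(4)
m, k, T = 8, 2, 5000
B = rng.normal(0.0, 1.0, (m, k))                 # factor loadings
psi = rng.uniform(0.01, 0.05, m)                 # specific variances
omega_f = np.diag([0.04, 0.01])                  # factor covariance, k x k

f = rng.multivariate_normal(np.zeros(k), omega_f, T)
x = f @ B.T + rng.normal(0.0, np.sqrt(psi), (T, m))

sigma_model = B @ omega_f @ B.T + np.diag(psi)   # model-implied covariance
sigma_sample = np.cov(x, rowvar=False)           # sample covariance
print(np.abs(sigma_model - sigma_sample).max())  # small, up to sampling error

print("unconstrained parameters:", m * (m + 1) // 2)              # 36
print("factor model parameters :", m * k + k * (k + 1) // 2 + m)  # 27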
You remember when we talked about multivariate regression a few lectures ago, we talked about cross-sectional regressions and time series regressions, and about looking at the collection of all the regressions in a multivariate regression setting. Here we can do the same thing. In contrast to the cross-sectional regression, where we fix time and look at all the assets, here we fix the asset i and consider the regression over time for that single asset. So the values of x_i, ranging from time 1 up to time capital T, follow a regression model equal to the intercept alpha_i plus this matrix F times beta_i, where beta_i corresponds to the regression parameters in this regression -- the factor loadings of asset i on the k factors. In this setting, the covariance matrix of the epsilon_i vector is the diagonal matrix sigma_i squared times the identity, and so these are the classic Gauss-Markov assumptions for a single linear regression model. Well, as we did previously, we can group all of these time series regressions for the m assets together. So we start off with x_i equal to F beta_i plus epsilon_i, and we can consider x_1, x_2, up to x_m together -- so we have a T by m matrix for the m assets. And that's equal to a regression model given by basically what's on the slides here. So we're able to group everything together and deal with everything all at once, which is how these are fit computationally. Let's look at the simplest example of a factor model. This is the single-factor model of Sharpe. We went through the capital asset pricing model -- how the excess return on a stock can be modeled as a linear regression on the excess return of the market. The regression parameter beta_i corresponds to the level of risk associated with the asset. And all we're doing in this model is, by choosing different assets, choosing assets with different levels of risk, scaled by the beta_i; and they may have returns that vary across assets, given by alpha_i. The unconditional covariance matrix of the assets has this structure: it's basically equal to the variance of the market times beta beta prime, plus psi. And that equation is really very simple -- it's really self-evident from what we've discussed, but let me just highlight what it is: sigma squared beta beta transpose plus psi. That's equal to sigma squared times a column vector of all the betas, beta_1 down to beta_m, times its transpose, plus a diagonal matrix with the psis. So this is a very, very simple structure for the covariance. And if you wanted to apply this model to thousands of securities, it's basically no problem -- you can construct the covariance matrix. And if this were appropriate for a large collection of securities, then the reduction in terms of what you're estimating is enormous. Rather than estimating each cross-correlation and covariance of all the assets, the factor model tells us what those cross-covariances are. So that's really where the power of the model comes in. And in terms of why this is so useful -- well, in portfolio management, one of the key drivers of asset allocation is the covariance matrix of the assets.
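A sketch of fitting Sharpe's single-index model by per-asset time series regression and assembling Sigma-hat = sigma_m^2 * beta beta' + Psi-hat. The excess returns below are simulated stand-ins, not real market data:

import numpy as np

rng = np.random.default_rng(5)
m, T = 6, 500
beta_true = rng.uniform(0.5, 1.5, m)

r_mkt = rng.normal(0.005, 0.04, T)                # market excess returns
r = 0.001 + np.outer(r_mkt, beta_true) + rng.normal(0, 0.02, (T, m))

X = np.column_stack([np.ones(T), r_mkt])          # [intercept, market]
coef, *_ = np.linalg.lstsq(X, r, rcond=None)      # 2 x m: alphas, betas
alpha_hat, beta_hat = coef[0], coef[1]

resid = r - X @ coef
psi_hat = (resid**2).sum(axis=0) / (T - 2)        # specific variances
sigma_m2 = r_mkt.var(ddof=1)                      # market variance estimate

sigma_hat = sigma_m2 * np.outer(beta_hat, beta_hat) + np.diag(psi_hat)
print(beta_hat.round(2), sigma_hat.shape)         # betas near truth; (6, 6)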
So if you have an effective model for the covariance, that simplifies the portfolio allocation problem, because you can specify a covariance matrix that you are confident in. And also in risk management: effective approaches to risk management deal with how we anticipate what would happen if different scenarios occur in the market. The different scenarios that can occur can be associated with what happens to the underlying factors that affect the system, and so we can consider risk management approaches that vary these underlying factors and look at the impact on the covariance matrix very directly. Estimation of Sharpe's single index model: we went through before how we estimate the alphas and the betas. In terms of estimating the sigmas -- the specific variances -- that comes from the simple regression as well: basically, the sum of the squared estimated residuals divided by T minus 2. We get unbiasedness here because we have two parameters estimated per model. Then the market portfolio variance has a simple estimate as well. The psi hat matrix is just the diagonal matrix of the specific variances, and the unconditional covariance matrix is estimated by simply plugging in these parameter estimates. So, very simple and effective -- if that single-factor model is appropriate. Now, needless to say, a single-factor model typically doesn't characterize the structure of the covariances and/or the returns, so we want to consider more general models: multi-factor models. The first set of models we're going to talk about involves common factor variables that can actually be observed. Basically, any factor that you can observe is a potential candidate for being a relevant factor in a linear factor model; the effectiveness of a potential factor is determined by fitting the model and seeing how much contribution that factor makes to the explanation of the returns and the covariance structure. Chen, Ross, and Roll wrote a classic paper in 1986 -- Ross is actually here at MIT. In their paper, rather than looking at these factors directly and including them in the model, they looked at transforming these factors into surprise factors. So rather than having interest rates as a simple factor directly plugged into the model, it would be the change in interest rates -- and additionally, not just the change in interest rates, but the unanticipated change in interest rates given market information. Their paper goes through modeling different macroeconomic variables with vector autoregression models, and then using those to specify unanticipated changes in these underlying factors. And that's where the power comes in. It highlights that when you're applying these models, it does involve some creativity to get the most bang for the buck with your models. Their idea of incorporating unanticipated changes was really a very good one and is applied quite widely now. Now, with this setup, if one has empirical data over times 1 through capital T and wants to specify these models, one has the observations on the x_i process -- you have basically observed all the returns historically. We also, because the factors are observable, have the F matrix as a set of observed variables. So we can use those to estimate the beta_i's, and also estimate the variances of the residual terms, with simple regression methods.
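A sketch of the surprise-factor idea: fit a simple autoregression to a macro series and use the residuals -- the unanticipated changes -- as the factor. Chen, Ross, and Roll used richer vector autoregressions; an AR(1) stands in here, and the interest-rate series is simulated purely for illustration:

import numpy as np

rng = np.random.default_rng(6)
T = 400
rate = np.empty(T)
rate[0] = 0.05
for t in range(1, T):                 # toy persistent short-rate series
    rate[t] = 0.005 + 0.9 * rate[t - 1] + rng.normal(0, 0.002)

X = np.column_stack([np.ones(T - 1), rate[:-1]])
coef, *_ = np.linalg.lstsq(X, rate[1:], rcond=None)
surprise = rate[1:] - X @ coef        # unanticipated changes in rates

print(coef)               # roughly [0.005, 0.9], the AR(1) parameters
print(surprise.mean())    # ~0 by construction; this is the factor series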
So implementing these is quite feasible, and basically applies methods that we have from before. What this slide now discusses is how we estimate the underlying parameters. We need to be a little bit careful about the Gauss-Markov assumptions. You'll remember that if we have a regression model where the residual terms are uncorrelated and have constant variance, then the simple linear regression estimates are the best ones. If there are unequal variances of the residuals, and maybe even covariances, then we need to use generalized least squares. The notes go through those computations and the formulas, which are just simple extensions of the regression model theory from previous lectures. Let me go through an example. With common factor variables using fundamental or asset-specific attributes, there's the approach of-- well, it's called the BARRA Approach. This is from Barr Rosenberg. Actually, I have to say, he was one of my inspirations for going into statistical modeling in finance. He was a professor at UC Berkeley who left academics very early to basically apply these models and trade money. As an anecdote, his current situation is a little different -- I'll let you look that up. But anyway, this approach provided the BARRA Approach for factor modeling and risk analysis, which is still used extensively today. So with common factor variables using asset-specific attributes, the factor realizations are in fact unobserved, but they are estimated in the application of these models. So let's see how that goes. Oh, OK, this slide talks about the Fama-French approach. Fama, of course -- we talked about him in the last lecture. He got the Nobel Prize for his work on modeling asset price returns and market efficiency. Fama and French found that there were some very important factors affecting asset returns in equities. Basically, returns tended to vary depending upon the size of firms. If you consider small firms versus large firms, small firms tended to have returns that were more similar to each other, and large firms tended to have returns that were more similar to each other. So there's basically a big-versus-small factor operating in the market: sometimes the market prefers small stocks, sometimes it prefers large stocks. And similarly, there's another factor, which is value versus growth. Stocks that are considered good values are stocks which are cheap, basically, for what they have -- you're basically getting a stock at a discount if you're getting a good value. Value stocks can be identified by looking at the price to book equity: if that's low, then the price you're paying for the equity in the firm is low, and it's cheap. That compares with stocks for which the price relative to the book value is very, very high. Why are people willing to pay a lot for those stocks? Well, it's because the growth prospects of those stocks are high, and there's an anticipation that the current price is just reflecting a projection of how much growth potential there is. Now, the Fama-French approach is, for each of these factors, to rank-order all the stocks by that factor and divide them up into quintiles. So say this is market cap: we can basically consider a histogram, or whatever, of the market caps of all the stocks in our universe.
And then divide it up into the bottom fifth, the next fifth, and so on up to the top fifth. And the Fama-French approach says, well, let's look at an equal-weighted average of the top fifth, and basically buy that and sell the bottom fifth. So that would be the big-versus-small market factor of Fama and French. Now, if you look at their work, they actually take the bottom minus the top, because that side tends to outperform the other, so they have a factor whose more positive values are generally associated with positive returns. And that factor can be applied and used to correlate with individual asset returns as well. Now, with the BARRA industry factors -- this is just getting back to the BARRA Approach -- the simplest way to understand the BARRA industry factor models is to consider dividing stocks up into different industry groups. So we might expect that, say, oil stocks will tend to move together and have greater common variability, and that could be very different from utility stocks, which actually tend to be quite low-risk stocks. Utility companies are companies which are very highly regulated, and the profitability of those firms is basically overseen by the regulators -- they don't want the utilities to gouge consumers and make too much profit from delivering power to customers. So utilities tend to have fairly low volatility but very consistent returns, which are based on reasonable, from a regulatory standpoint, levels of profitability for those companies. Well, with an industry factor model, what we can do is associate factor loadings which basically load each asset according to which industry group it's in. So we actually know the beta values for these stocks, but we don't know the underlying factor realizations. In terms of the betas, with these factors we can get well-defined beta vectors and a B matrix for all the stocks. The problem then is, how do we specify the realizations of the underlying factors? Well, the realizations of the underlying factors are just estimated with a regression model. If we have all of our assets x_i for different times t, those have a model given by factor realizations corresponding to the k industry groups, with known beta_(i,j) values. For the estimation, we basically have a simple regression model where the realizations of the factor returns f_t are given by, essentially, the regression coefficients: we have the asset returns x_t, the known factor loadings B, and the unknown factor realizations f_t. Just plugging into the regression, if we do it very simply, we get this expression for f hat t, which is the simple linear regression estimate of those realizations. Now, this particular estimate of the factor realizations assumes that the components of x have the same variance -- this is like the linear regression estimates under the normal Gauss-Markov assumptions. But basically, the epsilon_i's will vary across the different assets: different assets will have different variabilities, different specific variances. So there's actually going to be heteroscedasticity in these models, and this particular estimator -- the vector of industry averages -- should actually be extended to accommodate that. The covariance matrix of the factors can then be estimated using these estimates of the realizations.
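A sketch of the BARRA-style industry factor step just described: the loadings B are known 0/1 industry indicators, and the factor realizations are recovered each period by the cross-sectional regression f-hat_t = (B'B)^(-1) B' x_t. Data here are simulated, with made-up group sizes and variances:

import numpy as np

rng = np.random.default_rng(7)
m, k, T = 9, 3, 250
industry = np.repeat(np.arange(k), m // k)   # 3 assets per industry
B = np.eye(k)[industry]                      # m x k indicator loadings

f_true = rng.normal(0.0, 0.03, (T, k))       # industry factor returns
x = f_true @ B.T + rng.normal(0.0, 0.01, (T, m))

# OLS cross-section each period; with indicator loadings this reduces to
# taking the industry average return in each period.
f_hat = x @ B @ np.linalg.inv(B.T @ B)       # T x k estimated realizations
print(np.abs(f_hat - f_true).mean())         # small

omega_hat = np.cov(f_hat, rowvar=False)      # estimated factor covariance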
And the residual covariance matrix can then be estimated as well. So an initial estimate of the covariance matrix sigma hat is given by this known matrix B, times our sample estimate of the factor covariance, times B transpose, plus the diagonal matrix psi hat. And a second step in this process can incorporate information about there being heteroscedasticity along the diagonal of psi to adjust the regression estimates. So we get a refinement of the estimates that does account for the non-constant variability. Now this issue of heteroscedasticity versus homoscedasticity in estimating the regression parameters-- it may seem like a nicety of the statistical theory that we just have to check, and not too big a deal. But let me highlight where this issue comes up again and again. With portfolio optimization, which we went through last time-- for mean-variance optimization, we want to consider a weighting of assets that basically weights the assets by the expected returns, pre-multiplied by the inverse of the covariance matrix. And so in portfolio allocation we want to allocate to assets with high return, but we're going to penalize those with high variance. So the impact of discounting values with high variability arises in asset allocation. And then of course it arises in statistical estimation: with signals with high noise, you want to normalize by the level of noise before you incorporate the impact of that variable on the particular model. So here are just some notes about the inefficiency of estimates due to heteroscedasticity. We can apply generalized least squares. A second bullet here is that factor realizations can be scaled to represent factor mimicking portfolios. Now with the Fama-French factors, where we have say big versus small stocks, or value versus growth stocks, it would be nice to know, well, what's the real value of trading that factor? If you were to have unit weight in trading that factor, would you make money or not? Or under what periods would you make money? And the notion of factor mimicking portfolios is important. Let's go back to the specification of the factor realizations here. f hat t, the t-th realization of the k factors, is given by essentially the regression estimate of those factors from the realizations of the asset returns. And if we're doing this in the proper way, we'd be correcting for the heteroscedasticity. Well, this realization of the factor returns is a weighted average, or a weighted sum, of the x_t. So we have f hat t equal to a matrix times x_t, where this matrix is (B'B)^(-1) B'. So our k-dimensional realization is k by 1, and each of these k factors is a weighted sum of these x's. Now, if the x's are returns on the underlying assets, then we can consider normalizing these factors, or basically normalizing this matrix, so that the row weights sum to 1, say, for a unit of capital. So if we were to invest a net total of one unit of capital in these assets, then this factor realization would give us the return on a portfolio of the assets that is perfectly correlated with the factor realization. So factor mimicking portfolios can be defined that way. And they have a good interpretation in terms of the realization of potential investments. So let's go back. The next subject is statistical factor models. This is the case where we begin the analysis with just our collection of outcomes for the process x_t.
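Before moving on, here is the factor-mimicking normalization just described, continuing the little R sketch from above. In the 0/1 industry example the rows of the weight matrix already sum to one, so the rescaling is a no-op there, but the mechanics are the same in general.

# Rows of W are the implicit portfolio weights behind each factor estimate:
# f_hat_t = W x_t, with W = (B'B)^{-1} B'
W <- solve(t(B) %*% B) %*% t(B)       # k x m weight matrix
W_mimic <- W / rowSums(W)             # rescale each row to one unit of capital
fm <- x %*% t(W_mimic)                # returns on the factor-mimicking portfolios
cor(fm[, 1], fhat[, 1])               # = 1: perfectly correlated with the factor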
So basically our time series of asset returns for m assets over T time units. And we have no clue initially what the underlying factors are, but we hypothesize that there are factors that do characterize the returns. And factor analysis and principal components analysis provide ways of uncovering those underlying factors and defining them in terms of the data themselves. So we'll first talk about factor analysis. Then we'll turn to principal components analysis. Both of these methods are efforts to model the covariance matrix. And the underlying covariance matrix for the assets x can be estimated with sample data in terms of the sample covariance matrix. So here I've just written out in matrix form how that would be computed. And so with this m by T matrix x, we take that matrix, subtract out the means-- computed by multiplying by an averaging matrix-- then take the cross products of the deviations about the means for all the m assets, individually and across each other, and divide that by capital T. Now, the setup for statistical factor models is exactly the same as before, except the only thing that we observe is x_t. So we're hypothesizing a model where alpha is basically the vector of mean returns of the individual assets, B is a matrix of factor loadings on k factors f_t, and epsilon_t is white noise with mean 0 and covariance matrix given by the diagonal. So the setup here is the same basic setup as before, but we don't have the matrix B or the vector f_t. Or, of course, alpha. Now in this setup, it's important that there is an indeterminacy in this model, because for any given specification of the matrix B and the factors f, we can actually transform those by a k by k invertible matrix H. So for a given specification of this model, if we transform the underlying factor realizations f by the matrix H, which is k by k, and we transform the factor loadings B by H inverse, we get the same model. So there's an indeterminacy in these particular variables, but there's basically flexibility in how we define the factor model. So in trying to uncover a factor model with statistical factor analysis, there is some flexibility in defining our factors. We can arbitrarily transform the factors by an invertible transformation in the k space. And I guess it's important to note what changes when we do that transformation. Well, the fitted linear function stays the same, but the covariance matrix of the underlying factors does not: if we have a covariance matrix for those underlying factors, we need to accommodate the matrix transformation H in it. So the transformation has an impact there. But one of the things we can do is consider trying to define a matrix H that diagonalizes the factor covariance. So in some settings, it's useful to consider factor models where you have uncorrelated factor components. And it's possible to define linear factor models with uncorrelated factor components by a choice of H. So with any linear factor model, in fact, we can have uncorrelated factor components if that's useful. So this first bullet highlights that point, that we can get orthonormal factors. And we can also have zero-mean factors by adjusting the data to incorporate the mean of these factors. And if we make these particular assumptions, then the model does simplify: the covariance matrix sigma_x is just the factor loadings B times its transpose, plus a diagonal matrix.
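Here is a tiny R illustration of that indeterminacy, with made-up loadings and specific variances: rotating the factors by an orthogonal H (so the factor covariance stays the identity) and mapping B to B H^(-1) leaves the implied covariance matrix sigma_x unchanged.

B <- matrix(c(0.9, 0.8, 0.3,
              0.1, 0.4, 0.9), nrow = 3)         # made-up 3 x 2 loadings
Psi <- diag(c(0.2, 0.1, 0.3))                   # made-up specific variances
Sigma_x <- B %*% t(B) + Psi                     # implied asset covariance

theta <- pi / 6
H <- matrix(c(cos(theta), sin(theta),           # an orthogonal (rotation) matrix
             -sin(theta), cos(theta)), 2, 2)
B2 <- B %*% solve(H)                            # transformed loadings B H^{-1}
max(abs(B2 %*% t(B2) + Psi - Sigma_x))          # ~1e-16: the same covariance model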
And just to reiterate, the power of this is that no matter how large m is, as m increases the B matrix just increases by k entries for every increment in m, and we have one additional diagonal entry in psi. So as we add more and more assets to our modeling, the complexity basically doesn't increase very much. With all of our statistical models, one of the questions is how do we specify the particular parameters? Maximum likelihood estimation is the first thing to go through, and with normal linear factor models we have normal distributions for all the underlying random variables. So the residuals epsilon_t are independent and identically distributed multivariate normal of dimension m, with diagonal covariance matrix psi given by the individual elements' variances. f_t, the realization of the k-dimensional factors, can have mean 0, and we can scale the factors and make them uncorrelated so that they have the identity covariance. And then x_t will be normally distributed with mean alpha and covariance matrix sigma_x given by the formulas on the previous slide. With these assumptions, we can write down the model likelihood. The model likelihood is the joint density of our data given the unknown parameters. And the standard setup for statistical factor modeling is to assume independence over time. Now we know that there can be time series dependence. We won't deal with that at this point. Let's just assume the observations are independent across time. Then we can consider the likelihood as simply the product of the conditional densities of x_t given alpha and sigma, which have this form. This form for the density function of a multivariate normal should be very familiar to you at this point. It's basically the extension of the univariate normal distribution to the m-variate case. So we have 1 over the square root of 2 pi to the m power-- there are m components. And then we divide by the square root of the determinant of the covariance matrix, the analog of the individual variance. And then we take the exponential of this term here, which for the t-th case is a quadratic form in the x's. So for this multivariate normal x, we take off its mean and look at the quadratic form of that with the inverse of its covariance matrix. So there's the log-likelihood function. It reduces to this form here. And maximum likelihood estimation methods can be applied to specify all the parameters of B and psi. And there's an EM algorithm which is applied in this case. I think I may have highlighted it before, but the EM algorithm is a very powerful estimation methodology for maximum likelihood in statistics when one has very complicated models-- well, models that are complicated by the fact that we have hidden variables. Basically, the hidden variables lead to very complex likelihood functions. The simplification underlying the EM algorithm is that if we could observe the hidden variables, then our likelihood functions are very simple and can be computed directly. And the EM algorithm alternates between estimating the hidden variables, then, treating those estimated hidden variables as known, doing the simple estimates of the parameters, then estimating the hidden variables again, and just iterating that process again and again. And it converges. And the original paper, by Dempster, Laird, and Rubin, demonstrates that this applies in many, many different application settings. It's just a very, very powerful estimation methodology that is applied here with statistical factor analysis.
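In R, the built-in factanal function fits exactly this normal linear factor model by maximum likelihood (it optimizes the likelihood directly rather than via EM, and it works with standardized variables, but the estimand is the same). A minimal sketch on simulated data of my own:

set.seed(2)
n <- 500; m <- 8; k <- 2
B <- matrix(runif(m * k, -0.5, 1), m, k)        # made-up loadings
f <- matrix(rnorm(n * k), n, k)                 # latent factors: mean 0, identity cov
x <- f %*% t(B) + matrix(rnorm(n * m, 0, 0.6), n, m)

fit <- factanal(x, factors = k, scores = "regression")
fit$loadings                  # estimated B, up to rotation
fit$uniquenesses              # diagonal of psi (for the standardized variables)
head(fit$scores)              # estimated factor realizations f_hat_t
fit$PVAL                      # likelihood ratio test: are k factors sufficient?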
I indicated that for now we could just assume independence over time of the data points in computing the likelihood. You recall our discussion a couple of lectures back about state-space models, linear state-space models. Essentially, that linear state-space model framework can be applied here as well, to incorporate time dependence in the data. So that simplifying assumption is not binding in terms of holding us up in estimating these models. Let me go back here, OK. So the maximum likelihood estimation process will give us estimates of the B matrix and the psi matrix. So applying this EM algorithm, a computer can actually get estimates of B and psi, and the underlying alpha, of course. Now from these we can estimate the factor realizations f_t. And these can be estimated by simply this regression formula: using our estimates for the factor loadings B hat and our estimates of alpha, we can actually estimate the factor realizations. So with statistical factor analysis, we use the EM algorithm to estimate the covariance matrix parameters. Then, as the next step, we can estimate the factor realizations. So as the output from factor analysis, we can work with these factor realizations. And those estimates of the realizations of the factors can then be used for risk modeling as well. So we could do a statistical factor analysis of returns in, say, the commodities markets, and identify what factors are driving returns and covariances in commodity markets. We can then get estimates of those underlying factors from the methodology. We could then use those as inputs to other models. Certain stocks may depend on significant factors in the commodity markets. And what they depend on-- well, we can use statistical modeling to identify where the dependencies are. So getting these realizations of the statistical factors is very useful, not only to understand what happened in the past with the process and how these underlying factors varied, but also because you can use them as inputs to other models. Finally, let's see, there was a lot of interest with statistical factor analysis in the interpretation of the underlying factors. Of course, in terms of using any model, one's confidence rises when you have highly interpretable results. One of the initial applications of statistical factor analysis was in measuring IQ. And how many people here have taken an IQ test? Probably everybody, or almost everybody. Well, actually, if you want to work for some hedge funds, you'll have to take some IQ tests. But basically in an IQ test there are 20, 30, 40 questions, and they're trying to measure different aspects of your ability. And statistical factor analysis has been used to try and understand what are the underlying dimensions of intelligence. And one has the flexibility of considering different transformations of any given set of estimated factors by this H matrix for transformation. And so there has been work in statistical factor analysis to find rotations of the factor loadings that make the factors more interpretable. So I just raise that as there's potential to get insight into these underlying factors if that's appropriate. In the IQ setting, the effort was actually to try and find interpretations of the different dimensions of intelligence. We previously talked about factor mimicking portfolios. The same thing applies here. One final thing: with likelihood ratio tests, one can test for whether the linear factor model is a good description of the data.
And so with likelihood ratio tests, we compare the likelihood of the data where we fit our unknown parameters, the mean vector alpha and covariance matrix sigma, without any constraints. And then we compare that to the likelihood function under the factor model with, say, k factors. And the likelihood ratio test statistic is computed by looking at twice the difference in log likelihoods. If you take an advanced course in statistics, you'll see that under many conditions this statistic is approximately a chi-squared random variable with degrees of freedom equal to the difference in the number of parameters under the two models. So that's why it's specified this way. But anyway, one can test for the dimensionality of the factor model. Before going into an example of factor modeling, I want to cover principal components analysis. Actually, principal components analysis comes up in factor modeling, but it's also a methodology that's appropriate for modeling multivariate data and considering dimensionality reduction. You're dealing with data in very many dimensions, and you're wondering, is there a simple characterization of the multivariate structure that lies in a smaller-dimensional space? And principal components analysis gives us that. The theoretical framework for principal components analysis is to consider an m-variate random variable-- so this is like a single realization of asset returns at a given time-- which has some mean alpha and covariance matrix sigma. Principal components analysis is going to exploit the eigenvalues and eigenvectors of the covariance matrix. Choongbum went through eigenvalue and singular value decompositions. So here we have the eigenvalue/eigenvector decomposition of our covariance matrix sigma, which is the sum over i of the scalar eigenvalue lambda_i times the eigenvector gamma_i times its transpose. So we actually are able to decompose our covariance matrix with eigenvalues and eigenvectors. The principal component variables are defined by taking away the mean alpha from the random vector x, and then considering the weighted average of those de-meaned x's given by the i-th eigenvector. So these are going to be called the principal component variables, where the first corresponds to gamma_1, the eigenvector with the largest eigenvalue, and the m-th, or last, corresponds to gamma_m, the eigenvector with the smallest. The properties of these principal component variables are that they have mean 0, and their covariance matrix is given by the diagonal matrix of eigenvalues. So the principal component variables are a very simple sort of affine transformation of the original variable x. We translate x to a new origin-- basically to the 0 origin-- by subtracting the means off it. And then we multiply that de-meaned x value by an orthogonal matrix, gamma prime. And what does that do? That simply rotates the coordinate axes. So what we're doing is creating a new coordinate system for our data, which hasn't changed the relative position of the data, or the random variable, at all in the space. It just uses a new coordinate system, with no change in the overall variability of what we're working with. In matrix form, we can express these principal component variables as p. Let's consider partitioning p into the first k elements p_1 and the last m minus k elements p_2. Then our original random vector x has this decomposition. And we can think of this as being approximately a linear factor model.
We can consider from principal components analysis-- essentially, if p_1, the first principal component variables, correspond to our factors, then our linear factor model would have B given by gamma_1, f given by p_1, and our epsilon vector given by gamma_2 p_2. So the principal components decomposition is almost a linear factor model. The only issue is that this gamma_2 p_2 is an m-vector, but it may not have a diagonal covariance matrix. Under the linear factor model with a given set of k factors, k less than m, we always are assuming that the residual vector has covariance matrix equal to a diagonal. With a principal components analysis, that may or may not be true. So this is like an approximate factor model, and that's why it's called principal components analysis-- it's not called principal factor analysis. Now, the empirical principal components analysis. We've gone through a description of theoretical principal components: if we have a mean vector alpha and covariance matrix sigma, how we would define these principal component variables. If we just have sample data, then this slide goes through the computations of the empirical principal components results. All we're doing is substituting in estimates of the means and covariance matrix, and computing the eigenvalue/eigenvector decomposition of that. And we get sample principal component variables: we compute the de-meaned matrix of realizations of x and pre-multiply that by gamma hat prime, the matrix of eigenvectors from the eigenvalue/eigenvector decomposition of the sample covariance matrix. This slide goes through the singular value decomposition. You don't have to go through and compute variances and covariances to derive estimates of the principal component variables; you can work simply with the singular value decomposition. I'll let you go through that on your own. There's an alternate definition of the principal component variable, though, that's very important. If we consider a linear combination of the components of x, x_1 through x_m, given by weights w, and we choose the combination which maximizes the variability of that linear combination subject to the norm of the coefficients w being equal to 1, then this is the first principal component variable. So in two dimensions, with coordinates x_1 and x_2, if we have points that look like an ellipsoidal distribution, this would correspond to having alpha_1 here, alpha_2 there, and some degree of variability. The principal components analysis says, let's shift the origin to (alpha_1, alpha_2), and then let's rotate the axes to align with the eigenvectors. Well, the first principal component variable finds the coordinate axis along which the variability is a maximum. And basically along this dimension here, this is where the variability would be the maximum. That's the first principal component variable. So principal components analysis is identifying essentially where there's the most variability in the data, without doing any change in the scaling of the data-- all we're doing is shifting and rotating. Then the second principal component variable is the direction which is orthogonal to the first and has the maximum variance. And we continue that process to define all m principal component variables.
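Here's a short R sketch of those computations on simulated two-dimensional data of my own, mirroring the ellipse picture: the principal component variables come from the eigendecomposition of the sample covariance, they are uncorrelated with variances equal to the eigenvalues, and the first eigenvector is the unit-norm maximizer of the variance.

set.seed(3)
z <- matrix(rnorm(400), 200, 2)
x <- z %*% matrix(c(1, 0, 0.9, 0.5), 2, 2)      # made-up correlated data
xc <- scale(x, center = TRUE, scale = FALSE)    # de-mean only; no rescaling

S <- cov(xc)                                    # sample covariance matrix
ev <- eigen(S)                                  # lambda_i and gamma_i
p <- xc %*% ev$vectors                          # sample principal component variables
round(cov(p), 10)                               # diagonal matrix of the eigenvalues

w1 <- ev$vectors[, 1]                           # unit-norm weights, ||w1|| = 1
var(xc %*% w1)                                  # maximal variance = ev$values[1]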
In principal components analysis, there are discussions of the total variability of the data and how well that's explained by the principal components. If we have a covariance matrix sigma, the total variance is defined as the sum of the diagonal entries-- so it's the trace of the covariance matrix. We'll call that the total variance of this multivariate x. That is equal to the sum of the eigenvalues as well. So we have a decomposition of the total variability into the variability of the different principal component variables. And the principal component variables themselves are uncorrelated. You remember the covariance matrix of the principal component variables was lambda, the diagonal matrix of eigenvalues, so the off-diagonal entries are 0. So the principal component variables are uncorrelated, have variability lambda_k, and decompose the total variability. So principal components analysis provides this very nice decomposition of the data into different dimensions, with highest to lowest information content as given by the eigenvalues. I want to go through a case study here of doing factor modeling with U.S. Treasury yields. I loaded data into R which ranged from the beginning of 2000 to the end of May 2013. And here are the yields on constant maturity U.S. Treasury securities, ranging from 3 months, 6 months, up to 20 years. So this is essentially the term structure of U.S. government debt across varying maturities. Here's a plot of the yields over that period. So starting in 2000, we can see the rather dramatic evolution of the term structure over this entire period. If we wanted to do a principal components analysis of this-- well, if we did the entire period, we'd be measuring variability of all kinds of things: things go down, up, down. What I've done in this note is just initially to look at the period from 2001 up through 2005. So we have five years of data on the early part of this period that I want to focus on and do a principal components analysis of the yields. So here's the series over that five-year period. This analysis is on the actual yield changes. So just as we might be modeling, say, asset prices over time and then doing an analysis of the changes, the returns, here we're looking at yield changes. So first, you can see the average daily values for the different yield tenors ranging from 3 months up to 20 years; those are actually all negative. That corresponds to the time series over that five-year period: the yields were all at lower levels at the end than at the beginning, on average. The daily volatility is the daily standard deviation. Those vary from 0.0384 up to 0.0698 for-- is that the three year? And this is the standard deviation of daily yield changes where 1 is 1%. So it's between roughly four and seven basis points a day of variation in the yield changes. That's something that's reasonable. When you look at the news or a newspaper and see how interest rates change from one day to the next, it's generally a few basis points from one day to the next. This next matrix is the correlation matrix of the yield changes. If you look at this closely, which you can when you download these results, you'll see that near the diagonal the values are very high, like above 90% correlation. And as you move away from the diagonal, the correlations get lower and lower.
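For readers following along in R, here is a sketch of how these summary statistics might be assembled. The file name and column layout are hypothetical, my own stand-ins for the case note's downloaded constant maturity Treasury data, so treat this as illustrative only.

# Hypothetical file: one ISO date column plus one column per tenor, in percent
yields <- read.csv("treasury_cmt_yields.csv", stringsAsFactors = FALSE)
tenors <- c("y3m", "y6m", "y1", "y2", "y3", "y5", "y7", "y10", "y20")
sub <- yields[yields$date >= "2001-01-01" & yields$date <= "2005-12-31", ]

dy <- apply(sub[, tenors], 2, diff)   # daily yield changes
round(colMeans(dy), 5)                # small negative means over this period
round(apply(dy, 2, sd), 4)            # daily vols: a few basis points each
round(cor(dy), 2)                     # high near the diagonal, decaying away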
Mathematically, that is what is happening. We can look at these things graphically, which I always like to do. Here is just a bar chart of the standard deviations of the yield changes-- the daily volatilities-- ranging from very short yields to long-tenor yields, up to 20 years. So there's variability there. Here is a pairs plot of the data. So what I've done is, for every single tenor-- this is, say, the 5 year, 7 year, 10 year, 20 year-- I've plotted the yield changes of each of those against each other. We could do this with all nine different tenors, and we'd have a very dense page of a pairs plot, so I split it up into looking just at the top and bottom block diagonals. But you can see how the correlation between these yield changes is very tight and then gets less tight as you move further away. With the long tenors-- let's see, the short tenors-- one, one more. Here are the short tenors, ranging from 3 year, 2 year, 1 year, 6 month, and so forth. So here you can see how it gets less and less correlated as you move away from a given tenor. Well, if you conduct a principal components analysis, the standard output is first a table of how the variability of the series is broken down across the different component variables. So there's the importance of components for each of the nine component variables, where it's measured in terms of the squared standard deviations of these variables relative to their sum. And the proportion of variance explained by the first component variable is 0.849. So basically 85% of the total variability is explained by the first principal component variable. Looking at the second row, second entry in, 0.0919-- that's the proportion of total variability explained by the second principal component variable. So 9%. And then for the third it's around 3%, and it just goes down closer to 0. There's a scree plot for principal components analysis, which is just a plot of the variability of the different principal component variables. So you can see whether the principal components analysis is explaining much variability in the first few components or not. Here there's a huge amount of variability explained by the first principal component variable. I've plotted here the standard deviations of the original yield changes in green, versus the standard deviations of the principal component variables in blue. So with the first few principal component variables we are modeling most of the variability. Now let's look at the interpretation of the principal component variables. There's the loadings matrix, which is the gamma matrix for the principal component variables. Looking at numbers is less informative for me than looking at graphs. Here's a plot of the loadings on the different yield changes for the first principal component variable. So the first principal component variable is a weighted average of all the yield changes, giving greatest weight to the five year. What's that? Well, that's just a measure of a level shift in the yield curve. It's like, what's the average yield change across the whole range? So that's what the first principal component variable is measuring. The second principal component variable gives positive weight to the long tenors and negative weight to the short tenors. So it's looking at the difference between the yield changes on the long tenors versus the yield changes on the short tenors.
So that's looking at how the spread in yields is changing. Then the third principal component variable has this structure. And this structure for the weights is like a double difference: it's looking at the difference between the long tenor and the medium tenor, minus the difference between the medium tenor and the short tenor. So that's giving us a measure of the curvature of the term structure and how that's changing over time. So these principal component variables are measuring the level shift for the first, the spread for the second, and the curvature for the third. With principal components analysis, many times I think people focus just on the first few principal component variables and then say they're done. The last principal component variable, and the last few, can be very, very interesting as well, because these are the linear combinations of the original variables which have the least variability. And if you look at the ninth principal component variable-- there were nine yield changes here-- it's basically looking at a weighted average of the 5 and 10 year minus the 7 year. So this is like the hedge of the 7 year yield with the 5 and 10 year. That combination of yield changes is going to have the least variability. The principal component variables have zero correlation. Here's just a pairs plot of the first three principal component variables and the ninth. And you can see that those have been transformed to have zero correlations with each other. One can plot the cumulative principal component variables over time to see how these underlying factors evolved over the time period. And you'll recall that we talked about the first being the level shift. Basically from 2001 to 2005, the overall level of interest rates went down and then went up. And this is captured by this first principal component variable accumulating from 0 down to minus 8 and back up to 0. And the scale of this change, from 0 to minus 8, is the largest of the three. The second principal component variable accumulates from 0 up to less than 6 and back down to 0. So this is a measure of the spread between long and short rates: the spread increased, and then it decreased over the period. And then the curvature varies from 0 down to minus 1.5 and back up to 0. So the change in curvature over this entire period was much, much less, which is perhaps as it should be. But these graphs indicate basically how these underlying factors evolved over the time period. In the case note I go through and fit a statistical factor analysis model to these same data and look at identifying the number of factors. I also compare the results over this five-year period with the period from 2009 to 2013. They are different, and so it really matters over what period one fits these models. And fitting these models is really just a starting point: ultimately you want to model the dynamics of these factors and their structural relationships. So we'll finish there.
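As a hedged sketch of how this case study's standard outputs can be reproduced in R, continuing with the dy matrix of yield changes from the earlier sketch (the commented numbers are the approximate values quoted above):

pca <- prcomp(dy)                   # PCA of the daily yield changes
summary(pca)                        # proportions: ~0.85, ~0.09, ~0.03, ...
screeplot(pca, type = "lines")      # the scree plot

pca$rotation[, 1:3]                 # loadings: level, spread, curvature patterns
pca$rotation[, 9]                   # last PC: roughly 5s and 10s vs. 7s "hedge"

# Cumulate the PC scores to see how each underlying factor evolved over time
matplot(apply(pca$x[, 1:3], 2, cumsum), type = "l", lty = 1,
        xlab = "day", ylab = "cumulative PC score")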
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
25_Ross_Recovery_Theorem.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PETER CARR: So, I welcome comments or questions at any point during this talk. We have an hour and a half, and I have only 50 slides, so we should be OK. So, this is joint work with Jiming Yu, who's a colleague of mine in my group at Morgan Stanley. I head up the global market modeling team at Morgan Stanley. It's a group of about 70 PhDs, mostly, spread around the world. There's about 30 of us in New York, some in London, quite a few in Budapest, and a few in Beijing. The title of this talk is Can We Recover? And it's meant as a triple entendre. So, it could refer to either the systemic risk arising from the credit crisis, or the main result in a recent paper by a professor here at MIT named Steve Ross in the Sloan School, or it could actually be the academic and practitioner reaction to this result. So, it's really about two and three. It's not about can we recover from the crisis. There's a professor at the Sloan School named Stephen Ross, and he's very well-known in academic finance. Your professor was kind enough to mention that I won Financial Engineer of the Year. And that was two years ago; I was like the 20th winner. He was the second winner. The first winner was another MIT professor, Bob Merton. So anyway, he wrote a paper a couple of years ago, and it's only now about to be published. This is typical in academic circles-- it takes a long time for a paper to come out. And this paper is coming out in the Journal of Finance. That's what JF stands for. And the Journal of Finance is the main journal for the academic finance community. The title of the paper is The Recovery Theorem, and that's also the title of theorem one in his paper. And we'll go over that theorem one. It gives a sufficient set of conditions under which what Professor Ross calls "natural probabilities" at a point in time can be determined-- mathematically, from exact knowledge of Arrow-Debreu security prices, which you probably don't know what they are yet. But less mathematically, we'll just say from market prices of derivatives. OK, so derivatives you've heard of, I'm sure-- things like options, for example, on stocks or stock indices; they could be on currencies. So, imagine that you look at Bloomberg. Bloomberg publishes a whole bunch of prices. And the idea is that you take this information, and from it you're learning what the market believes are the probabilities concerning the future. And so, if the option is on the S&P 500 stock index, then you're learning from options prices what the market believes are the likelihoods of various possible levels for the S&P 500. So, we take this information on Bloomberg and, truth be told, we use it along with some assumptions to extract these implied market probabilities. So, I want to tell you what those assumptions are. And the actual output of this analysis is a probability transition matrix. Or, if you do it in continuous time, in a continuous state space, you'd call it a transition probability density function. So the key word there is "transition."
And what transition means is you're getting not only the probabilities going from, say, the current S&P level to any one of several levels, but even the probabilities of going from some level other than the one we're presently at today to that range of levels. You could say, for example, the market believes that given that we're here now with S&P at, say, 1,500, the probability of more than doubling is one half, for example-- which would be really high, but you know, I'm just picking numbers randomly here. And you can even say that if S&P were to drop instantaneously to half its level, the probability of more than doubling from there is, say, one-third. So, you can answer questions like that. That's the output of this type of thinking. So there'll be three probability measures that we can be thinking about. And we'll call them P, Q, and R. And I'd like to tell you what each of them means. So P stands for physical probability measure-- the P is for physical. And think of that as the actual objective reality of future states for, say, the S&P 500. So let's say God knows that, for example, the probability that S&P is up by the end of the year is one half. And we, unfortunately, not being God, don't know that. But let's say the philosophy is that there is some sort of true probability of S&P being up at the end of the year. And let's say I used a half. Maybe it's 60%. If it is 60%, then the probability of S&P being down at the end of the year is 40%. And the point is P is meant to indicate the frequencies with which the S&P 500, in my example, takes on various values. Now, there's another probability measure that people in derivatives spend a lot of time working with. And that's called the risk-neutral probability measure, and it's often denoted by the letter Q. So we'll denote it by Q. And the concept of a risk-neutral probability measure was also actually proposed by Steve Ross many years ago. And it's called risk neutral because when you're working with it, if you think about how fast prices appreciate over time, then they grow randomly. But on average, under this risk-neutral measure Q, they grow at the same rate as your bank balance would grow. So your bank balance, let's say, nowadays is growing at best at the rate of 1%. And when you look at how fast, historically, stocks have grown, it's actually much higher, on average, than 1%. It's more like about 9%. So we would call the difference between 9% and 1%-- we call that 8% differential the risk premium. And let me just pretend there are no dividends, to keep life simple when I say this. So now, this risk-neutral measure is kind of a fictitious probability measure, in the sense that it's not describing the actual probabilities or frequencies of transitions; it's more a device, or a tool, or a trick that's handy. And one of its properties that causes it to earn the name risk-neutral probability measure is that when you look at how fast, say, S&P grows on average under this risk-neutral probability measure Q, it would be growing nowadays at 1%-- the same as your bank balance is growing at. So the word risk-neutral is meant to indicate that the growth rate under this measure is consistent with investors in the economy being risk-neutral, meaning that they require no premium for bearing risk. Now there's a third probability measure that we're going to be talking about today that actually you won't find any literature on. And we're going to call it R. It seems like a natural letter to pick, having already gone through P and Q.
And you can think of the R as standing for recovered probability measure. And it's going to be the probability measure that we get from market prices, as I was talking about earlier. And the operational meaning of this R measure is that it's capturing the market's beliefs regarding the future. But we allow for the possibility that the market could be wrong. So if we're applying this to, say, houses and housing prices in, say, 2005, it may well be that if we looked at Bloomberg and got prices of mortgage-backed securities, we would extract an R probability measure that says housing prices are going to continue on their incessant upward trajectory-- that we're going to keep growing at the rate of, say, 15% a year each year for the next 10 years, or something like that. So, that could be what the market's beliefs were back in 2005. And we know now that those beliefs were wrong, if that was what the market was implying. So, I want to allow for at least the theoretical possibility that the market could be wrong. And that's why I'm drawing a distinction, let's say, between the R probability measure that captures the market's beliefs and the P probability measure that captures physical reality. So now, there are a lot of people in finance who simply cannot accept the possibility that the market could be wrong. And those people-- the sort of true believers in market efficiency-- are free to set R to P every time they see an R. But I want to allow for the possibility that what we recover is not physical probabilities, but simply the market's beliefs. And anyway, it's kind of semantics. It's good semantics if the probability measure we recover is the one Ross said we should get. R stands for Ross. So Ross calls the probability measure that we recover-- he calls them natural probability measures. And, let's say, that suggests that the risk-neutral probability measures are unnatural, which I think is fair actually. Because when you hear the word probability, you tend to think about frequencies with which events occur. And the risk-neutral probability measures do not give you the frequencies with which events occur. What the risk-neutral probability measures give you is instead prices of so-called Arrow-Debreu securities. So, let me give you a sense of what that means. Say I tell you that the risk-neutral probability of S&P 500 being up at the end of the year is 40%. Then how should you interpret that? Well, you should simply interpret it as this. Imagine that you can agree now to buy a security that pays $1 just if S&P 500 is up at the end of the year. And usually when you and I buy things, we buy them in a spot market, so we pay now for things. But sometimes your credit is good, and you can actually agree now to pay later. So, we're going to be thinking that you're agreeing now to pay later some fixed amount in return for the security that's going to pay $1 just if S&P 500 is up at the end of the year. And if I tell you that the risk-neutral probability of S&P 500 being up by the end of the year is 40%, what that means financially is that you agree now to pay $0.40 at the end of the year for the security. So, you can imagine there'd be another security that pays $1 just if S&P 500 is down by the end of the year. And the only possible price that that security could have in an arbitrage-free world would be $0.60. Because if you were to buy both securities, then you pay a total of $0.40 plus $0.60-- so you're agreeing now to pay $1 at the end of the year.
And then, having both securities, either S&P is up or S&P is down, so you collect $1 from one of them and not the other. If, for example, the one paying if S&P is up cost $0.40, while the one paying if S&P is down only cost $0.50, then there would be an arbitrage, in which we would buy both securities, agree now to pay $0.90, and then get $1 for sure at the end of the period. So we'd be up $0.10 by the end of the year. Question-- AUDIENCE: These are similar to digital options? PETER CARR: Yes. It's more than similar-- they are digital options. Yeah. So, that's right. That's another term, which I'll actually use on the next slide. So, that's exactly right. Digital options is just too good a term, so economists, in order to obfuscate and look smart, call them Arrow-Debreu securities. So, continuing with the obfuscation, I want to tell you about a world with a representative agent. Economists are fond of trying to formally model the market. You read the newspaper, and every day you'll read something like, "the market thought that stocks were no longer a good investment, so there was a sell-off." Market is a nice, short word to capture what people are thinking. And so economists, rather than say the market, will say there's a world with a representative agent. So this representative agent is a fictitious investor who has all the mathematical properties that we give an investor, such as a utility function, and an endowment, and so on. And what makes this particular investor a representative agent is that this agent finds that current prices are such that it's optimal to hold exactly what's available, in the amount that is available. So suppose what's on offer is, let's say, some Google shares, and some Apple shares, and some IBM shares. And if we take the total market cap of Google, the total market cap of Apple, and the total market cap of IBM-- let's say Apple's biggest. I don't actually know whether Google's bigger or IBM, but let's just say Apple's biggest, then Google, then IBM. Well, this investor would actually find that it's optimal for him to have most of his money in Apple, the second most of his money in Google, and the third most of his money in IBM. That's the representative agent-- he's acting in the way the whole economy is acting. Well, I've been working on Wall Street now since 1996, and I have yet to hear a trader tell me about a representative agent. Anyway, so although I understand what the words mean, and even the math, I wanted to present this material in a way that, let's say, at least quantitative traders could understand. So I tried to get away from representative agents and present these ideas in the language that at least quants on Wall Street are familiar with. So, I won't be talking about a representative agent, and I will be talking instead about something that's probably not too familiar to you, but at least quants have heard of. And that would be something called the numeraire portfolio. And it also goes by other names. Another name is the growth optimal portfolio. And it even has a third name, which is the natural numeraire. These are three different phrases that all describe the same mathematical object. And this mathematical object is a portfolio-- more precisely, it's the value of a portfolio-- that has some nice properties. So the name growth optimal portfolio indicates one of its properties.
This portfolio has a very nice property, which is that in the long run-- meaning over an infinite horizon-- the growth rate of this portfolio is, first of all, random. But second, if you take the mean of that random growth rate, that mean is actually the largest possible among all portfolios. So, starting with Kelly in 1956, this particular portfolio with the largest mean growth rate over an infinite horizon has received a lot of attention. It's actually quite humorous, some of this attention that it's received. So, Kelly was a physicist who worked at Bell Labs. And he was actually a colleague of Shannon's at Bell Labs. So Shannon did his seminal work at Bell Labs, but actually came here after that. And his ideas really caught on-- and especially, I'd say, started the field of information science, we'll call it. But Kelly was applying these ideas to finance. And certain financial economists were less than enthused about the application of information theory to finance. In particular, there was a financial economist here named Paul Samuelson who championed, I guess, the opposition to this so-called Kelly criterion. And so, I'll just tell you a short story. AUDIENCE: Excuse me. PETER CARR: Yeah. AUDIENCE: If I could just interject-- PETER CARR: Yeah, sure. AUDIENCE: We had mentioned in an earlier class the book Fortune's Formula. And this book goes into a lot of background and storytelling about this whole era and these exchanges. PETER CARR: That's true. It's a fantastic book. I read it. I loved it. Especially if you're at MIT, you should definitely read this book. It talks about a lot of MIT professors, some of whom are still here, like Bob Merton. It's a quick, easy read. You don't even have to have a background in finance to really enjoy it. And you can read about the story I'm going to tell you now in that book. So the story is, Samuelson grew a little tired, I guess, of trying to explain to these dumb information theorists that this Kelly criterion was not so great. So he published an article in a journal called the Journal of Banking and Finance-- that's actually a finance journal-- where he explained why it wasn't necessarily such a good idea to hold this portfolio. And in this article, every word he used was of one syllable, except the very last word of the article-- and I can't even tell this story in words of one syllable, so just ignore my multi-syllabic words. But anyway, at the end he says, I have managed to write an article with all words of just one syllable, except for this last word. And the last word in his article was "syllable" itself, which is of course multi-syllabic. So anyway, it was kind of insane. So, let's move on. This talk has six parts, and we have an hour to go, so let's say we'll try to spend 10 minutes on each. AUDIENCE: [INAUDIBLE] PETER CARR: Yes. Well, that's a good question. So, it does have risk, first of all. It does have a lot of risk. It's not the riskiest, though. Some risk does not carry with it expected return, and that's why it's not the riskiest-- but it's risky. So Samuelson's objections were precisely what you're getting at, that this is a fairly risky strategy. So, I'm glad you brought that up. OK. So there are six parts to the talk. I'm going to go over what Arrow-Debreu security prices are-- again, they're digital options prices-- and their connection to market beliefs. I'll talk about this Ross recovery theorem.
So in Ross's paper, which you can get on SSRN, he does everything in a setting that's called finite state Markov chains. And so that's mathematically simpler than what we use in practice. And I totally agree that when you try to introduce something, you do it in the simplest mathematical setting. So now that he's done that, I wanted to do it in a more familiar setting, which is a diffusion setting. A diffusion has an uncountably infinite number of states. And I still want to keep things as simple as possible while going beyond finite state Markov chains, so I work in a univariate diffusion setting. So there's only one source of uncertainty, which is the same as in Ross. And our technique for getting these results is based on something called change of numeraire. So numeraire is a technical term, actually, that describes an asset whose value is always positive. Now, there are securities whose values can have either sign. Swaps are a classical example. A swap is a security which at inception has zero value, actually. And then, the moment after inception, the world changes, and the swap value either becomes positive or becomes negative. So a swap would not be eligible to be a numeraire, because its value can take either sign. On the other hand, if you take a stock, its price is always positive-- well, that's debatable actually-- so let's not do stocks. Let's do a treasury bond. A treasury bond-- a US Treasury bond-- its price is always positive. The reason I want to shy away from stocks is because if we take Lehman Brothers stock, for example, its price was positive, and then it became zero. And actually, because Lehman's price became zero, Lehman's share could not be a numeraire. So when I say that the numeraire value has to be positive, I mean strictly positive. And so anyway, there's this literature about how to change numeraire-- how to go from one asset with positive value to another asset with positive value. And it's useful for understanding how this Ross recovery works. So, we apply it when we have a so-called time-homogeneous diffusion-- and I'll tell you what that means-- over a bounded state space. Bounded state space means that the set of values the diffusion can take is in some finite interval. So if you're thinking about the uncertainty being, for example, the S&P 500, then the natural lower bound for the S&P 500 would be zero. And you have to accept that there's a finite upper bound in order to apply our results. Now you know, personally, I have no problem saying the S&P 500 is bounded above by 20 trillion. OK, but some economists have actually said this is ridiculous, and challenged my work, and stuff like that, over that assumption. So, because of those challenges, I have actually been trying to extend our work to an unbounded state space, where, let's say, the largest possible value for the S&P 500 would be infinity. And I've found, actually, that it's not that easy. Sometimes I can make it work, and sometimes I cannot. So, when we get there, I'll explain some examples that work and some examples that don't. So this last section, the sixth section, is kind of incomplete. Basically, I've got examples that fail and examples that succeed, but I don't have a general theory. So there'll be different assumptions in different parts of the talk. But within a section, there's only one set of assumptions operating. AUDIENCE: Excuse me. PETER CARR: Yeah. AUDIENCE: [INAUDIBLE] the value of anything is [INAUDIBLE].
[INTERPOSING VOICES] PETER CARR: That's been my response too. So the universe is bounded. And it's growing, but it's bounded. So, I agree. You know, I'm on your side on this. I'm just telling you what I've been told. Yeah. So, I'm working on it anyway, just so they can stop objecting. But, anyway-- AUDIENCE: Actually, I have some comments on the issue of the numeraire. You'll tell me how connected this is-- but with the Kelly criterion, one of the origins of that is, if you have a gambling opportunity where it's favorable, how much of your bankroll should you bet on that gamble? And basically, the Kelly criterion tells you what proportion of your bankroll you should invest at all times. You should never bet everything, because if you do bet everything, you eventually lose everything, and you're done. So, there's the issue of the numeraire portfolio never being able to go down to zero, in the sense that you can never go bankrupt. And so, there are assumptions of being able to always rebalance your portfolio-- PETER CARR: So, just to give you a flavor of what this numeraire portfolio is-- you're betting a constant fraction of your wealth on every security. So let's just keep it simple: there are only two securities. One is risky, and the other is riskless. And so you might be putting 40% of your wealth in the risky one, and 60% then in the riskless one. And that's when you start. So you have $100, and you put $40 in the risky one and $60 in the riskless one. And then time moves forward, and let's say the price of the risky one changes. Then when you revalue using the new price, it's unlikely that 40% of your wealth is in the risky one. In fact, if the price of the risky one went up, you'll have more than 40% of your wealth in that risky one. So you need to sell some of that risky one, and the money you get, you put into the riskless one. And so, every time the price changes, you need to trade, theoretically, in order to maintain a constant fraction of 40% of your wealth invested in this risky asset. Now, we assume zero transaction costs when we do this analysis. Because there are positive transaction costs in reality, one should take that into account, and there is literature on how to do that. So, I won't be formally entertaining transaction costs in this talk. There's work here at MIT, actually, on doing that. For the question of how you should invest, it feels like it's a complication that won't change anything qualitative-- it would definitely change how frequently you trade, but it's unclear how it would change your initial investment across bets. So, let's begin with part one. So we have the digital options, also called binary options-- that's another term. And they trade, actually, in FX markets-- so, foreign exchange. And they pay one unit of some currency, say dollars, if an event comes true. So it might be that you're looking at dollar/euro, and if by the end of the year dollar/euro exceeds 2, then you get $1; otherwise, you get $0. So there would be a price in the FX markets. And it would be a spot price, typically-- meaning you have to pay now for it. Let's let A, for Arrow, be the price today of such a security. And the subscripts on A are j given i. So, the idea is that you can think of yourself as being in a finite-state setting. There are various discrete levels of, say, dollar/euro that are possible today, and there are also various discrete levels for dollar/euro at the end of the year. And i indicates the state we're in. So maybe dollar/euro is $2 per euro right now.
And j indicates the state we can go to. Maybe we can go to $3 per euro. So in my example, A_(3|2) would be the price of an Arrow-Debreu security given that the current dollar/euro exchange rate is $2 per euro, and it pays $1 just if dollar/euro transitions from $2 per euro to $3 per euro. So the idea is we have discrete states, and let's say these are the values that are possible at the end of the year. In the example I just went through, you're getting $1 just if it's $3 per euro at the end of the year. So the height of that vertical line is one. Now, I'll just comment that this is a slightly exotic option-- let's call it exotic; it's slightly exotic. So in contrast with exotics, there's this term "vanilla." OK, and it actually indicates a flavor of ice cream. So, we have this terminology, which you get used to after a while, and then you can't understand, when you talk to a man on the street, why they don't understand what a vanilla option is. So a vanilla option is a payoff that looks like this-- a hockey stick payoff. And that's the payoff from a call option. And it turns out that there is a portfolio involving options at three different strikes that can perfectly replicate the payoff of this Arrow-Debreu security. So, here is the payoff from a single call option struck at two. And I'll just say that if I had changed the strike to, say, three, then it would look like that. Now, you can combine options in your portfolio. So you could, for example, buy a call struck at two, and then you can furthermore sell two calls struck at three. If you sell, on top of that, two calls struck at three, you end up creating a portfolio payoff that goes like this. And so it can go negative in value. But if you not only buy one call struck at two and sell two calls struck at three, but furthermore buy one call struck at four, then you end up with this payoff, which is called a butterfly spread payoff, because the picture is meant to remind you of a butterfly. And notice that if the only possible values for the FX rate were $1 per euro, or $2, or $3, or $4, or $5-- if that were the world-- then when you form that portfolio, the only positive payoff you can get from it is $1, just if dollar/euro is at 3. So you can synthesize an Arrow-Debreu security using a butterfly spread. This was pointed out many years ago. So even if the FX market were, let's say, not directly giving us the prices of digital options, we could extract the implicit price of a digital from vanilla options. And what you would learn from vanilla options is what the market is charging for the digital, given that, let's say, we're presently at $2 per euro. What you would not learn from these options prices is what the price of the security would be should today's exchange rate change to some other value. However, you can make assumptions as to what the options prices would be were today's exchange rate different. So, that's commonly done in practice. A common assumption, for example, is that the probability of transitioning from two to three-- so, moving up by half-- is the same as if you were at any other level. So for example, if you were at four, then the probability of going to six would be whatever the probability is of going from two to three, because if you're at four, going to six is moving up by half of four. OK, so that's called sticky delta, and it's a common assumption.
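Just to check that replication numerically, here is a short R verification on integer terminal states with unit-spaced strikes, using the states and strikes from the example above:

call_payoff <- function(s, k) pmax(s - k, 0)

s <- 0:6                                  # possible terminal FX rates, $ per euro
butterfly <- call_payoff(s, 2) - 2 * call_payoff(s, 3) + call_payoff(s, 4)
rbind(s, butterfly)                       # pays exactly $1 only in the state s = 3

With non-unit strike spacing h, the same long-short-long combination would need to be divided by h to keep a unit payoff at the middle strike.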
So if you make that assumption, then you can take the information at just today's level. Let's say you know all the digitals from two, and you make that assumption -- that the probability of a given percentage change is invariant to the starting level. And then you can, from that, figure out what the probability of going from four-- a different level than we're at today-- is to all these different levels. So you can go from a vector of information that the market is giving you to a matrix. And that matrix is called a transition matrix. And so, we're going to, in this talk, assume that somebody's made such an assumption. And so, you actually know this matrix. So you actually know, as a starting point, what the prices are of these Arrow-Debreu securities or binary options starting from any level and going to any level. I think in order to get through my whole talk, I'm going to skip these slides. Because they're just being very precise about what some terms mean that aren't going to be that important for the overall story. So OK, let's go to this slide. So we think of there being just a single source of uncertainty X, which could be dollar/euro. And we imagine that we have this matrix of Arrow-Debreu security prices. We know every number in this matrix. And we ask, what does the market believe about transitions from any place to any place? What does the market believe is the frequency of these transitions? Now, suppose that the number that's indicating the price of the Arrow-Debreu security going from two to three-- suppose that number is, say, 0.1. Now what does it mean? It just means that you pay $0.10 today for a security paying $1 just if you go from two to three. That's all it means. Now you can ask, what is the frequency with which you go from two to three? It need not be 10%. There are at least two reasons why the $0.10 price could differ from the probability of going from where you are to where you get paid. One such reason is simply time value of money. So if you were to buy all these Arrow-Debreu securities, one paying off in every state, you'll find that the total cost is less than 1, even though the payoff from the portfolio is one for sure. And that's simply because of the time value of money. So when you put $1 in the bank today, you actually get more than $1 back when you pull it out at the end of the year. And if you do the inverse problem-- how much do you have to put in the bank today in order to have $1 at the end of the year?-- it might be $0.95. So that's called time value of money. And so, just the fact that you have to pay now for the Arrow-Debreu security, while you only get paid off at the end of the year, causes this price of $0.10 to be lower. So that's just discounting for time. Interest rates are positive. So that's one effect. Now there's another effect, which is called risk aversion. So risk aversion is the thought that even if the interest rate were 0, to abstract away from the effect I just described, it still may be the case that a $0.10 price paid for an Arrow-Debreu security transitioning from two to three is different from the real-world probability of such a transition. Because, for example, it may be quite desirable to get money in that state, in which case $0.10 is over the real-world probability. Or it could be the opposite -- maybe it's not desirable to get money in that state, in which case $0.10 is under the real-world probability.
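Here is a toy sketch of that vector-to-matrix step, assuming a geometric grid of levels so that a fixed percentage move is a fixed index shift; the grid, today's level, and the digital prices are all made up, and mass that would fall off the edge of the grid is simply dropped:

    import numpy as np

    # geometric grid of levels: a fixed percentage move is a fixed index shift
    levels = 2.0 * 1.5 ** np.arange(-2, 3)            # hypothetical states around today's 2.0
    n, i0 = len(levels), 2                            # i0: index of today's level
    A_row = np.array([0.05, 0.20, 0.40, 0.20, 0.10])  # made-up digitals observed from level 2.0

    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = j - (i - i0)          # same percentage move as the observed transition
            if 0 <= k < n:
                A[i, j] = A_row[k]    # sticky delta: copy the price of that percentage move
    print(A)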
So to give you a more concrete example-- something that is maybe a little closer to home-- let's say this is the S&P 500, and I know the values are very different than the numbers I've indicated here-- but let's just forget about the actual numbers. So the point is, let's suppose that it's equally likely, in terms of true probabilities, to go from two to three as it is to go from two to one. So we have two Arrow-Debreu securities. One struck at one. The other struck at three. And I'm telling you that it's equally likely that you go up by one as it is that you go down by one. Now you can ask the question, does it necessarily mean that the prices of these securities that pay $1 are the same? And the answer is no, not necessarily. And actually, the sort of standard thinking in financial academic circles is that for the S&P 500, it would cost more to buy this Arrow-Debreu security than it would cost to buy that one, even though everyone agrees that it's equally likely to get paid from each of them. And the reason that it's thought to cost more to buy this one than it is to buy that one is because this one has an insurance value. So the thinking is that on average, people are long the stocks in the stock market, and that means that they're really upset when the stocks fall. And so they really like this one that ends up paying should the stock market fall from two to one, whereas with this one-- while it's nice to get money, let's say you're already fairly wealthy from the fact that you own stocks and the stock market went up. So you'll pay a positive amount for this security, but not as much as you pay for this one. So that's called risk aversion. So what we want to do is go from the prices that are contaminated, let's say, by time value of money effects and by risk aversion effects. And we want to cleanse them of that contamination and try to extract what the market believes are the frequencies of the future states. So I'll tell you that this was thought to be impossible before the Ross paper, and in fact, without making assumptions, it is impossible. So all Ross did is make some assumptions that are thought to be fairly mild by some, including me. And so he, in essence, showed the power of some assumptions. That's one way of thinking about it. So again, let's denote by R the recovered probability measure, which will tell us the market's beliefs about the frequencies of future states. And we don't know R when we start. What we do know is these Arrow-Debreu security prices, I'm assuming. And we'll denote those by A for Arrow. So what Ross's paper does is it says, you know A. And if you're willing to make the following assumptions, then you'll know R. So what are the assumptions? Well, before I tell you the assumptions, I have to tell you some terminology so that you understand the assumptions. So he'll work with a pricing matrix A, which we've actually been going through. So that's the Arrow-Debreu security prices indexed by starting state and final state, which we'll call x for the starting state and y for the final state. Then there'll be the desired output from this analysis, which he calls the natural probability transition matrix. So these are the market's beliefs for every starting value x and for every final value y. And then there'll be something called the pricing kernel, which is literally the ratio of these Arrow-Debreu security prices to these output natural probabilities.
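The time-value effect alone is easy to strip out -- a small sketch with made-up numbers: summing the spot Arrow-Debreu prices gives the discount factor, and dividing by it gives the forward prices, which sum to one but are still distorted by risk aversion:

    import numpy as np

    A_row = np.array([0.05, 0.20, 0.40, 0.20, 0.10])  # made-up spot AD prices from today's state
    df = A_row.sum()       # cost of receiving $1 for sure = the discount factor (here 0.95)
    Q = A_row / df         # forward AD prices: they sum to one, so a probability measure
    print(df, Q, Q.sum())  # risk aversion is still embedded in Q -- that's the hard part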
So, if you want to get an understanding of what this pricing kernel is, you can think of it as an attempt to capture the effects from time value of money and from risk aversion. So think of it as a normalization. You start with A, and A is actually affected by three things. It's affected by the unknown real-world probabilities-- or at least the market's beliefs of them. A is also affected by a second thing, which is time value of money. And A is affected by a third thing, which is risk aversion. So if we take A and divide by P, then we're normalizing for the first effect, the frequencies. And so we're left with just the combined effect from time value of money and from risk aversion. And so, let's say, if the interest rate were zero and people were risk neutral, then we would actually expect A to equal P. And so this ratio would be just constant. So Ross talks about a world with a representative investor. And essentially, this is an assumption-- this equation you're seeing here. It's an assumption on the form that a function of two variables takes. So phi, first of all, is a positive function. So phi is positive -- phi cannot take negative values -- because both A and P are positive. And phi is a function of two variables-- x and y. And what this assumption is doing is it's saying, well, let's put structure on this function phi, because it'll help us to find it if we put the structure on. So this is the first key assumption, actually-- that the function of two variables x and y actually has the form on the right. For a moment, just ignore the delta. And then you can see that what you have on the right, if you think of delta as one, is a function of y, and then you have the same function of x. So it's written in a convoluted way with this U prime, and c, and all that stuff. But if delta's one, then you have a fraction whose numerator is a function of y and whose denominator is the same function, but of x. In essence, what that does is it reduces the dimensionality of the thing we're searching for by a lot. So we started by searching for a function phi of two variables. And we, by this assumption, reduced the search to a function of one variable, which is, say, the function in the numerator, which is the same as the function in the denominator. So, now let's bring back delta. And delta's a scalar here, and it's a positive scalar. And so we need to search for that as well. So in the end, we reduce the search to a function of one variable and a scalar delta. So the economic meaning of, first of all, the function of one variable is-- it's called marginal utility. And it's meant to indicate how much happiness you get from each additional unit of consumption. So U prime as a function of c is thought to typically look like that. So it's positive, meaning every unit of consumption makes you happy. And it's actually declining, meaning the first unit of consumption makes you really happy. Then the next unit of consumption still brings some happiness, but not as much, and so on. So that's the kind of function we're looking for: U prime as a function of c. He won't actually find U prime as a function of c. He'll find the composition of U prime with a function c of y. Keep that in mind. Then, there's that delta. And that's, again, a positive scalar. And it's meant to capture time value of money. And so, y is the state at the end of the period.
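In symbols, the representative-investor assumption on the kernel being described here can be written as follows (my transcription of the slide's form, using the talk's letters):

    \phi(x, y) \;=\; \frac{A(x, y)}{P(x, y)} \;=\; \delta \, \frac{U'(c(y))}{U'(c(x))},

with \delta a positive scalar capturing time value of money and U' a positive, decreasing marginal utility function. An unrestricted \phi has on the order of n^2 degrees of freedom over n states; the right-hand side has only the n values of U'(c(\cdot)) plus the scalar \delta.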
And x is the state at the beginning of the period. And so, that's why delta's associated with the numerator, not the denominator. So delta would be a number like 0.9. And that indicates how much discount you give to, let's say, happiness received in the future, rather than now. Now, here's a quote from Ross's paper that is his Theorem 1. That's called the recovery theorem. And the only thing is, I changed the letters to conform with the letters I'm using, rather than the ones he used. And that's because his choice of letters is completely unnatural to me and most people. So I don't even want to tell you what he used. So anyway, I tried to choose letters that make sense. So I used A for Arrow-Debreu. So anyway, he says, you have a world with a representative agent. So that's actually this restriction that we talked about on the last slide. And then he says, if the pricing matrix-- which is the Arrow-Debreu security prices-- is positive-- which means that all entries in it are strictly above zero-- or irreducible-- which means that some entries are zeros, with the rest being positive, and there's some structure, which we need not get into, on where the zeros are-- then there exists a unique solution to the problem of finding P, where P is actually the market's beliefs. And I've been calling that R often. So anyway, I slipped a bit there and called it P. So anyway, that's the market's beliefs about the frequencies of future states. He'll also get as an output the delta, which is the positive scalar telling you the market's time value of money. And finally, this pricing kernel phi, which is the ratio of A to P. So, what you're supposed to realize, even though he didn't say it, is that as a result-- well, OK. So he did say it actually. You're finding P. I think that's the main thing. He's actually saying, if you make these assumptions, surprisingly, there's only one possible set of market beliefs that is consistent with the data and the assumptions made. To give you a sense of what the importance of this result is-- so prior to his paper, people had been interested in trying to infer from market prices what the market believes. But they always thought that you had to supply some parameters that capture market risk aversion. So for example, a common approach is to assume that you have a representative investor, and that they have a particular type of utility function called constant relative risk aversion. And there's a parameter in that utility function. And you had to specify the numerical value that parameter takes before you could learn the market's beliefs from prices. And no one ever felt very comfortable specifying that parameter. So what Ross essentially did is he managed to do the identification non-parametrically, where you don't have to supply any parameters. And so you essentially just have to buy his assumptions. You don't have to do any work to actually go from market prices to market's beliefs. OK, so let's skip these remarks. Yeah. AUDIENCE: Can you elaborate on the fact that risk aversion does enter in? PETER CARR: Yeah. So the exact statement is, you don't have to supply a parameter that describes the amount of the market's risk aversion. Rather, you have to accept this assumption-- and I'll show you-- this assumption about the structure of phi.
OK, so if you just accept that this function of two variables doesn't have the full amount of degrees of freedom that an arbitrary function of two variables has, it has a reduced number of degrees of freedom implicit on the right-hand side. So remembering that x is actually just a vector of finite length and so is y, think of the left-hand side as having n squared degrees of freedom. And on the right-hand side, you're looking for the numerator function, which is just a vector of length n. And the denominator function is the same function, so the same vector. And then there's also this delta. So let's say on the left-hand side, you're describing something that, without restriction, is of order n squared. So let's say n is 10, so it has 100 degrees of freedom. And on the right-hand side, you're describing a vector of length 10 along with a scalar-- so 11 degrees of freedom. So before you place any restriction, it's 100 degrees of freedom; after you make your restriction, it's 11. You have to accept that. And if you do, then he'll tell you the 11 entries. That's it. So you don't have to supply anything. So I haven't told you how we'll find them. That's probably what you're asking-- how the hell will you get to 11? OK, so I haven't shown you that. Yes? AUDIENCE: Just really quickly, does the c change as a function of time and spot price? PETER CARR: c is not a function of time, to answer your question. And then, the argument of c could be a price. It's allowed to be a price. OK? So, that's how you should think of it. So there's a lot of time homogeneity in everything he does here. So he'll never let anything depend on time, actually, to answer your question. So, I still haven't shown you how he did it. He uses the Perron-Frobenius theorem. I don't actually have slides on how you actually calculate the 11 entries. So I think I just have to refer you to the paper. But he relies on something called the Perron-Frobenius theorem. And I'm going to show you how we-- my co-author and I-- actually calculate the analog of that 11-dimensional unknown. So we're going to work in a continuous setting, where instead of looking for a vector and a scalar, we're going to look for a function of one variable and a scalar. So you'll get a sense of how to do it from ours. And essentially, if you discretize what we do, you'll get what he did. Let's forget these remarks, and let's forget these. And so now, we'll get into some theory about changing numeraire. So this is a backdrop to how my co-author and I proceed. So again, a numeraire is a portfolio whose value is always strictly positive. And there is a well-developed theory in derivatives pricing about how to change the numeraire. We're going to use that theory to understand what Ross did. So we start with an economy with a so-called money market account. And so that's a theoretical construct that's pretty familiar to most of us, and it's a bank account. So we're going to be working now in continuous time. So imagine that time, which is continuous, is on this axis. And then we're sitting here today, and we put some money into the bank. And being poor, we only put $1 in. So then we ask, looking forward, how will this money in our bank change? Well, they do still pay a positive interest rate, and it's awfully small, but it's positive. And so it'll go up. And they change the rate, actually. So now, maybe it's 0.5%, but next week, Chase might decide to give you 1%, in which case it goes up faster.
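Before the continuous setting, here is a sketch of the discrete Perron-Frobenius computation just alluded to (my reconstruction, not from the slides; the pricing matrix is made up). Under the kernel assumption, A z = delta z for z(x) = 1/U'(c(x)), so the Perron eigenpair of A delivers delta and the kernel, and hence P:

    import numpy as np

    def ross_recover(A):
        # Perron-Frobenius: a strictly positive matrix has a unique eigenvalue of
        # largest modulus, which is real and positive, with a strictly positive
        # eigenvector (unique up to scale).
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(np.abs(eigvals))
        delta = eigvals[k].real            # recovered time-discount factor
        z = eigvecs[:, k].real
        z = z * np.sign(z[0])              # orient the eigenvector to be positive
        # With z[x] = 1 / U'(c(x)): P[x, y] = A[x, y] * z[y] / (delta * z[x]),
        # and A z = delta z guarantees every row of P sums to one.
        P = A * np.outer(1.0 / z, z) / delta
        return P, delta, z

    A = np.array([[0.30, 0.50, 0.15],      # made-up strictly positive pricing matrix
                  [0.20, 0.50, 0.25],
                  [0.10, 0.45, 0.40]])
    P, delta, z = ross_recover(A)
    print(delta)             # the market's discount factor
    print(P.sum(axis=1))     # rows of the recovered transition matrix sum to 1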
And then the week after, they might give you 2%, and it goes up faster. Then they might go back to 0.5%. So that's one possible path for your money market account balance. And we don't know the future. We know how much we're getting over this first little bit of time. But they could actually decide to pay less over the second period, and then the third, or something like that. OK, so it's increasing and it's random. So that's the money market account balance. It's considered as an increasing, random process. And actually, there's nothing in the math that requires it to be increasing. If some really cheap bank-- like Bank of America tried this, actually-- charged a negative rate, then it would actually go down with the negative rate. But it wouldn't go negative. So it still counts as a numeraire. So anyway, that's allowed, as an aside. OK, so we've got this money market account. So the growth rate is called r, and that's just real-valued. And then we also have risky assets. So we'll have a total of n risky assets. And then we're going to say there's no arbitrage between the n risky assets and the one money market account. The idea is that when we look at Bloomberg prices for these n plus 1 assets, we're able to extract the Arrow-Debreu security prices. That's the idea. What I'm assuming is that what we're extracting is consistent with the idea that the uncertainty that's driving everything here is a diffusion, meaning that the uncertainty has sample paths that are continuous, but they're allowed to be fairly jagged. So diffusions actually have continuous but non-differentiable sample paths. And we're going to assume that. So this is a common assumption. This basically got its start here at MIT. Diffusions were first used in a finance context back in 1965, when both Samuelson and McKean were here. So McKean is a probabilist. He's now at NYU, where I teach, and he's still active. And diffusions are widely used. They really got a big boost in 1973, when Black, Scholes, and Merton, who were all here, used a diffusion to describe the price of a stock underlying an option. And since then, they've just been used extensively in finance. So Merton, who's here, really, I'd say, pioneered the use of them in finance. So there's this uncertainty X, which is probably mysterious to you-- hence the name X. The idea is that you get to choose what it is. So this is theory. And it's not trying to be overly specific, so that you can apply it in different contexts. But you'd like to know at least some examples, I'm sure. So one example would be X is the level of the S&P 500. A different example would be X is actually an interest rate-- so let's say the benchmark 30-year yield. X could instead be a shorter-term interest rate. Something called OIS-- overnight index swap-- is a possible choice for X. When I apply Ross's stuff, that's how I choose X, as a short-term interest rate. In general, the theory says the short rate is some function of X. And when I actually apply it, the function is the identity map. The mathematics says that if there's no arbitrage-- as we're assuming-- then there exists this so-called risk-neutral probability measure that I talked about earlier and denoted by Q. It's related, but not equal, to the Arrow-Debreu security prices.
So imagine that instead of buying these Arrow-Debreu securities in a spot market, you bought them in a forward market, where you actually pay when they mature. Then those Arrow-Debreu security prices in the forward market would be Q. So Q and A are really close. The measure A need not integrate to one, and that's just due to the time value of money-- that's because you're paying in the spot market. If you're actually paying in the forward market, then you don't have to worry about time value of money. And so then, the measure Q does integrate to one. So that's why we call it a probability measure. Under this probability measure Q, the expected return on all assets is the risk-free rate. So that's what that actually says, although you're probably not seeing that this is literally the expected return-- well, more precisely, it's the expected price change. What that means is, the expected price change is the risk-free rate times the price. That's what that says. So if you divide both sides by the spot price, when it's positive, then you'll get that the expected return is equal to the risk-free rate. And we're doing things in continuous time here. So we're working with diffusions. And you may or may not have been introduced to diffusions at this stage in your mathematical career. But mathematically, one way to describe a diffusion is via the infinitesimal generator. So this is a differential operator that's first order in time, second order in space. And let's just say this is formally how mathematicians think about this type of thing. What I've drawn here is a single sample path of a diffusion. There's definitely the possibility of other sample paths-- there are actually an infinite number of paths. But they're all continuous and nowhere differentiable. I want to just kind of give you a flavor of how you change numeraires. So we started with the numeraire being the money market account-- this guy. And the idea is we're going to switch to a different numeraire. What we're mainly interested in figuring out is, what are the drifts of assets when we measure their values in a different numeraire? So I've kind of given you a sense of what this is about. So you could hold IBM, and every time you get a gain, you could put that gain in your local bank-- Chase-- and see how fast your bank balance grows as you're putting all your gains on IBM in the bank. And you'll get a certain growth rate from that strategy. Now, you could try a different strategy, where you take your gains from IBM and you actually ship them off to a British bank, which is denominated in pounds, and see how fast that bank balance grows. And there's no reason that the two bank balances-- the American one and the British one-- need to grow at the same rate. Because they're denominated in different currencies. So we're basically interested to know, given that we know how fast, let's say, the American bank balance would grow, how fast would the British bank balance grow? And what affects the growth rate of the British bank balance is the covariance, actually, between the dollar/pound exchange rate and IBM. So remember, we're investing in IBM, and we're putting gains in either an American bank or a British bank. So IBM's stock price is in dollars. And so there are no issues with putting IBM's gains in an American bank.
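For reference, one standard way to write the infinitesimal generator the speaker mentions -- first order in time, second order in space -- for a diffusion dX_t = b(X_t, t) dt + a(X_t, t) dW_t is (my notation, chosen to match the coefficient letters used later in the talk):

    (\mathcal{L} f)(x, t) \;=\; \frac{\partial f}{\partial t} + b(x, t)\,\frac{\partial f}{\partial x} + \tfrac{1}{2}\, a^2(x, t)\, \frac{\partial^2 f}{\partial x^2}.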
But there's actually a subtle effect that happens when you put IBM's gains in a British bank. The subtle effect is that there's this random exchange rate, dollars per pound. And suppose that there's some correlation, for whatever reason, between dollars per pound and IBM. So suppose the correlation takes the following form-- every time IBM goes up, the dollar gets weaker against the pound. So in other words, what happens is IBM goes up, you go, hooray, I'm rich. I got all these dollars. I'm going to go put them in a British bank account. But suppose, unluckily for you, every time IBM goes up, the dollar weakens against the pound. And so, you cannot buy as many pounds as a result. So contrast that with the opposite situation, where when IBM goes up, the dollar strengthens as opposed to weakens. Then you can buy lots and lots of pounds with your IBM gains. So the correlation between the dollar/pound exchange rate and IBM affects how fast your British bank balance would grow. And that's actually the key point. So this would be well-known to anybody-- especially an FX client. So what we're actually going to do is find a numeraire such that the growth rate of the balance in that numeraire is actually the real-world drift of the underlying. So the idea is, let's say that I told you at the beginning of this talk that historically, stocks grow at 9% on average. Our starting point here in this part of the talk is this risk-neutral measure Q, which, by definition, has the property that stocks would grow only at 1%. So what we're actually going to do is go find some numeraire, which will be correlated with the stocks, such that when we put our stock gains in that numeraire, we end up growing at 9%, rather than 1%. That's the way we think about things. And the key is to find that numeraire that has that property. I'm going to go fast now-- there's a paper by John Long where he shows that that numeraire that converts the risk-free growth rate into the real-world growth rate always exists. And he gave it a name: he called it the numeraire portfolio. It has another name-- the growth optimal portfolio-- which is what the earlier comment about the Kelly criterion was getting at. So there's a reference if you're interested in following up on this material. So the theory says that there always exists this numeraire, called John Long's numeraire portfolio, such that if you park your gains in this numeraire, you end up growing at the real-world drift. And so, let's say, all we've got to do to find that real-world drift is go find this special numeraire. So this part of the talk is about making some assumptions that lead to an identification of that particular numeraire-- John Long's numeraire. We're going to continue to work with diffusions. And now we're going to also impose time homogeneity, like Ross was doing. So when I was just talking about numeraires, I was allowing time inhomogeneity. But now we're going to go time homogeneous. I haven't really been introducing the notation, but a(x, t) is the diffusion coefficient of the state variable x. And now it's just being assumed to be a function of x only. So b^Q(x, t) was the drift coefficient of x. And now it's a function of x only. r(x, t) was the function linking the short interest rate to the state variable x. And now, it's a function of x only. And finally, sigma_L(x, t) was the volatility of John Long's numeraire portfolio. And again, that's a function of x only.
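To make the covariance point concrete in this one-factor, time-homogeneous setting: the standard change-of-numeraire result (my reconstruction -- the talk states it in words) is that switching from the money market account to Long's numeraire portfolio shifts the drift of X by the covariance term a(x) sigma_L(x):

    b^{\mathbb{P}}(x) \;=\; b^{\mathbb{Q}}(x) + a(x)\,\sigma_L(x),

so once the risk-neutral drift b^Q and diffusion coefficient a are extracted from market prices, the real-world drift b^P is pinned down as soon as sigma_L is known.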
So anyway, another assumption that we're going to impose now in order to determine uniquely what this numeraire portfolio value is is to require that the diffusion that's driving everything live in a bounded interval. So essentially, the sample paths all have to be bounded below by some constant, which could be negative, and have to be bounded above by some constant, which again could be negative. We make all those assumptions, and we move on. And so in the end, what have we been assuming? So we're assuming that there's a single source of uncertainty X. And that it's a time-homogeneous diffusion. So that's this middle equation here. And so that says changes in X have a predictable part, which is b^Q(X) dt. And they have an unpredictable part, which is a of X dW. So W there is standard Brownian motion. And since I'm big on mnemonics, you might ask why does W stand for standard Brownian motion? And that's because W actually stands for Wiener process-- Norbert Wiener being an MIT mathematician. And the W is a standard notation for this kind of thing. As an aside, when Bob Merton was here working out all this stuff for the first time in the late '60s, he knew the standard notation for standard Brownian motion was W. But it turns out in finance, the standard notation for wealth is also W. And he wanted to work on stochastic wealth dynamics. And so he had to choose should I use the letter W for wealth, or should I use the letter W for Wiener process? And he chose W for wealth, which meant he had to pick a different letter for Wiener process. And so he actually chose the letter Z. And you'll have to ask him why he chose that letter, because it doesn't stand for anything as far as I know, except that actually the sample paths of a Wiener process look very jagged, so if you turn your head, you might be able to see a Z. So another assumption is that we're going to restrict the possible dynamics of the numeraire portfolio's value. So we're going to let L denote the value of this numeraire portfolio. And the mnemonic here is that John Long invented this concept, so we're calling it L for Long. Now it's unfortunate that the inventor of this concept was named Long, actually. Because in finance, the word "long" indicates that for a security with a non-negative payoff, if you're long, then you're going to be receiving that payoff. As you pay money now, you're going to receive that payoff. It's the opposite of short, where if you're short a security with a non-negative payoff, then actually you get money now and you have to deliver that payoff later. So as it happens, this numeraire portfolio has multiple positions in it. And the signs of the positions are allowed to be real-- so positives and negatives. So it's kind of a misnomer. I say Long's numeraire portfolio, and everyone thinks the positions in them are all positive. It's not true-- so they're real-valued. The kind of problem here is that we've put the structure on the value L, John Long's numeraire portfolio, namely that L is a continuous process, but it's not quite a diffusion in itself. The only thing you can say is that the pair X and L are a bivariate diffusion. If you bring this L over to the side, you can see the coefficients for dL depend on L and X-- and same thing with the volatility part. So anyway, we place the structure. And the idea is that we know, from looking at Bloomberg, what the risk-neutral drift of X is-- that's b^Q(X). We know that function. We know what the diffusion coefficient of X is. That's the function A of X. 
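In symbols, the assumed dynamics under Q described in this passage are (my transcription, using the talk's coefficient letters):

    dX_t = b^{\mathbb{Q}}(X_t)\, dt + a(X_t)\, dW_t, \qquad \frac{dL_t}{L_t} = r(X_t)\, dt + \sigma_L(X_t)\, dW_t,

so X alone is a time-homogeneous diffusion, while the pair (X, L) is a bivariate diffusion; the drift, the diffusion coefficient, the short rate, and the numeraire portfolio's volatility are all functions of X only.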
We know what the risk-neutral drift of L is-- that's that function r of X. But we don't know the volatility of John Long's numeraire portfolio. That's the function sigma_L of X. And if only we could find it, we would actually know how to determine the real-world drift. And remember, I was saying with IBM, you could put your gains in an American bank account, and there was a certain growth rate there. And then if instead you were putting those gains in a British bank account, you'd achieve a different growth rate. And I was stressing that the correlation of dollar/pound with IBM was important for determining that growth rate. And I stand by that. When you're in a one-factor world, that correlation can only be one. And so that's what's happening here. We're in a one-factor world, and that correlation is one. And the other thing that affects the growth rate of your British bank account balance is actually the volatility of the exchange rate. So what actually matters is the covariance between the exchange rate and IBM. That covariance depends on both the correlation and the volatility of the FX rate. So you can think of the FX rate here as John Long's numeraire portfolio. And so that sigma_L is sort of the key. It's like we've set things up so we know the correlation, but we still don't know the covariance. And that's what's actually relevant. So as soon as we get the sigma_L, we'll know the covariance. So we'll be in shape. So we've got to find that volatility function sigma_L. And now I know many of you have classes, so I'm going to have to start moving. AUDIENCE: Now, Peter, people will have access to these slides afterwards. And so, I'm just seeing you've got another 15 slides left. PETER CARR: Yes, well actually, you'll be glad to know that five of those are disclaimers. If I could move along-- AUDIENCE: But the point is to what-- PETER CARR: The key is towards the end. Yes, absolutely. We're very close. OK. So I'll be done in two minutes. So basically, where we are now is, we're going to make one more assumption: that the value of John Long's numeraire portfolio is a function of X and t. OK then, let's say we've made all our assumptions. And where it goes is that the assumptions imply that this value function splits into an unknown positive function of x and an unknown positive function of time. And when you further analyze, you find that the unknown function of time is an exponential function of time. And the unknown function of x solves an ordinary differential equation of this kind. So this is called a Sturm-Liouville problem. And it turns out that Sturm and Liouville were the only mathematicians I've mentioned in this talk who were not at MIT. And they actually solved this problem. And one of the things they show is that when you're searching for functions pi and scalars lambda that solve this problem, there's only one solution that delivers you a positive function pi. And so this is how you get uniqueness. Remember, I was saying back with the 11-- we were searching for a 10-vector and a scalar. Now the 10-vector becomes a function, and that function is pi, and the scalar's lambda. So the point is that the math implies there's a unique solution to the problem. So we learn the volatility of the numeraire portfolio in the end. And then we learn the drifts of everything you want to know under the market's beliefs. So that's the gist of it. So then there's been work on trying to extend to unbounded intervals.
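For the record, the separation and the resulting eigenvalue problem take roughly this form (my reconstruction, not a quote from the slides): writing L_t = e^{\lambda t}\,\pi(X_t) and requiring that L drift at the short rate under Q gives

    \tfrac{1}{2}\, a^2(x)\, \pi''(x) + b^{\mathbb{Q}}(x)\, \pi'(x) - r(x)\, \pi(x) \;=\; -\lambda\, \pi(x)

on the bounded interval, with appropriate boundary conditions; Sturm-Liouville theory then delivers a unique eigenpair with pi strictly positive, and Ito's lemma gives the volatility as \sigma_L(x) = a(x)\, \pi'(x) / \pi(x).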
And basically, in the famous Black-Scholes model, this effort fails, whereas in the less famous but still important Cox-Ingersoll-Ross model, this effort succeeds. So the sort of punchline is that when it comes to unbounded state space, the theory's open. So if there's a grad student in the room who wants a good dissertation problem, this is it. OK. So that's all I wanted to say today. Thanks.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
6_Regression_Analysis.txt
PROFESSOR: Today's topic is regression analysis. And we're going to cover today the mathematical and statistical foundations of regression, focusing particularly on linear regression. This methodology is perhaps the most powerful method in statistical modeling. And the foundations of it, I think, are very, very important to understand and master, and they'll help you in any kind of statistical modeling exercise you might entertain during or after this course. And its popularity in finance is very, very high, but it's also a very popular methodology in all other disciplines that do applied statistics. So let's begin with setting up the multiple linear regression problem. So we begin with a data set that consists of observations on a number of cases. So we have n cases indexed by i. And there's a single variable, a dependent variable or response variable, which is the variable of focus. And we'll denote that y sub i. And together with that, for each of the cases, there are explanatory variables that we might observe. So the y_i's, the dependent variables, could be returns on stocks. The explanatory variables could be underlying characteristics of those stocks over a given period. The dependent variable could be the change in value of an index, the S&P 500 index or the yield rate, and the explanatory variables can be various macroeconomic factors or other factors that might be used to explain how the response variable changes and takes on its value. Let's go through various goals of regression analysis. OK, first it can be to extract or exploit the relationship between the dependent variable and the independent variables. And examples of this are prediction. Indeed, in finance that's where I've used regression analysis most. We want to predict what's going to happen and take actions to take advantage of that. One can also use regression analysis to talk about causal inference. What factors are really driving a dependent variable? And so one can actually test hypotheses about what are the true causal factors underlying the relationships between the variables. Another application is for simple approximation. As mathematicians, you're all very familiar with how smooth functions-- smooth in the sense of being differentiable and bounded-- can be approximated well by a Taylor series, if you have a function of a single variable or even a multivariable function. So one can use regression analysis to actually approximate functions nicely. And one can also use regression analysis to uncover functional relationships and validate functional relationships amongst the variables. So let's set up the general linear model from a mathematical standpoint to begin with. In this lecture, we're going to start off with discussing ordinary least squares, which is a purely mathematical criterion for how you specify regression models. And then we're going to turn to the Gauss-Markov theorem, which incorporates some statistical modeling principles-- essentially weak principles. And then we will turn to formal models with normal linear regression models, and then consider extensions of those to broader classes.
Now we're in the mathematical context. And a linear model is basically attempting to model the conditional distribution of the response variable y_i given the independent variables x_i. And the conditional distribution of the response variable is modeled simply as a linear function of the independent variables. So the x_i's, x_(i,1) through x_(i,p), are the key explanatory variables that relate to the response variables, possibly. And beta_1, beta_2, up through beta_p are the regression parameters, which would be used in defining that linear relationship. So this relationship has residuals, epsilon_i-- basically, there's uncertainty in the data, whether due to measurement error, modeling error, or underlying stochastic processes that are driving the error. This epsilon_i is a residual error variable that will indicate how this linear relationship varies across the different n cases. So OK, how broad are the models? Well, the models really are very broad. First of all, polynomial approximation is indicated here. It corresponds, essentially, to a truncated Taylor series approximation to a functional form. With variables that exhibit cyclical behavior, Fourier series can be applied in a linear regression context. How many people in here are familiar with Fourier series? Almost everybody. So Fourier series basically provide a set of basis functions that allow you to closely approximate most functions. And certainly with bounded functions that possibly have a cyclical structure to them, it provides a complete description. So we could apply Fourier series here. Finally, time series regressions, where the cases i, one through n, are really indexes of different time points, can be applied. And so the independent variables can be variables that are observable at a given time point or known at a given time. So those can include lags of the response variables. So we'll see, actually, when we talk about time series, that there are autoregressive time series models that can be specified. And those are very broadly applied in finance. All right, so let's go through what the steps are for fitting a regression model. First, one wants to propose a model-- identify the particular response variable that we're interested in. And critical here is specifying the scale of that response variable. Choongbum was discussing problems of modeling stock prices. Suppose, say, y is the stock price. Well, it may be that it's more appropriate to consider modeling it on a logarithmic scale than on a linear scale. Who can tell me why that would be a good idea? AUDIENCE: Because the changes might become more percent changes in price rather than absolute changes in price. PROFESSOR: Very good, yeah. So price changes basically on the percentage scale, which log changes would be, may be much better predicted by knowing factors than the absolute price level. OK, and so we have to have a collection of independent variables and decide which to include in the model. And it's important to think about how general this setup is. I mean, the independent variables can be functions, lagged values of the response variable. They can be different functional forms of other independent variables. So the fact that we're talking about a linear regression model here is not so limiting in terms of the linearity. We can really capture a lot of nonlinear behavior in this framework. So then third, we need to address the assumptions about the distribution of the residuals, epsilon, over the cases.
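In symbols, the model being described is

    y_i \;=\; \beta_1 x_{i,1} + \beta_2 x_{i,2} + \cdots + \beta_p x_{i,p} + \epsilon_i, \qquad i = 1, \ldots, n,

or, stacking the n cases, y = X\beta + \epsilon, where X is the n-by-p matrix of values of the explanatory variables.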
So that has to be specified. Once we've set up the model in terms of identifying the response and the explanatory variables, and the assumptions underlying the distribution of the residuals, we need to specify a criterion for judging different estimators. So given a particular setup, what we want to do is be able to define a methodology for specifying the regression parameters, so that we can then use this regression model for prediction or whatever our purpose is. So the second thing we want to do is define a criterion for how we might judge different estimators of the regression parameters. We're going to go through several of those, and you'll see those-- least squares is the first one, but there are actually more general ones. In fact, the last section of this lecture, on generalized estimators, will cover those as well. Third, we need to characterize the best estimator and apply it to the given data. So once we choose a criterion for how good an estimate of regression parameters is, then we have to have a technology for solving for that. And then fourth, we need to check our assumptions. Now, it's very often the case that at this fourth step, where you're checking the assumptions that you've made, you'll discover features of your data, or of the process that it's modeling, that make you want to expand upon your assumptions or change your assumptions. And so checking the assumptions is a critical part of any modeling process. And then, if necessary, modify the model and assumptions and repeat this process. What I can tell you is that this sort of protocol for how you fit models is what I've applied many, many times. And if you are lucky in a particular problem area, the very simple models will work well with small changes in assumptions. But when you get challenging problems, then this item five, of modifying the model and/or assumptions, is critical. And in statistical modeling, my philosophy is, you really want to, as much as possible, tailor the model to the process you're modeling. You don't want to fit a square peg in a round hole and just apply, say, simple linear regression to everything. You want to apply it when the assumptions are valid. If the assumptions aren't valid, maybe you can change the specification of the problem so a linear model is still applicable in a changed framework. But if not, then you'll want to extend to other kinds of models. But what we'll be doing-- or what you will be doing if you do that-- is basically applying all the same principles that are developed in the linear modeling framework. OK, now let's see. I wanted to make some comments here about specifying assumptions for the residual distribution. What kind of assumptions might we make? OK, would anyone like to suggest some assumptions you might make in a linear regression model for the residuals? Yes? What's your name, by the way? AUDIENCE: My name is Will. PROFESSOR: Will, OK. Will what? [? AUDIENCE: Ossler. ?] PROFESSOR: [? Ossler, ?] great. OK, thank you, Will. AUDIENCE: It might be-- or we might want to say that the residual might be normally distributed, and it might not depend too much on what value of the input variable we'd use. PROFESSOR: OK. Anyone else? OK. Well, that certainly is an excellent place to start, in terms of starting with a distribution that's familiar. Familiar is always good.
Although it's not something that should be necessary, we know from some of Choongbum's lectures that Gaussian and normal distributions arise in many settings where we're taking basically sums of independent random variables. And so it may be that these residuals are like that. Anyway, a slightly simpler or weaker condition is to use what are called in statistics the Gauss-Markov assumptions. And these are assumptions where we're only concerned with the means, or averages, statistically, and the variances of the residuals. And so we assume that there's zero mean. So on average, they're not adding a bias up or down to the dependent variable. And those have a constant variance. So the level of uncertainty in our model doesn't depend on the case. And so indeed, if errors on the percentage scale are more appropriate, then one could look at, say, a time series of prices that you're trying to model. And it may be that on the log scale, that constant variance looks much more appropriate than on the original scale. And then a third attribute of the Gauss-Markov assumptions is that the residuals are uncorrelated. Now, uncorrelated does not mean independent or statistically independent. So this is a somewhat weaker condition than independence of the residuals. But in the Gauss-Markov setting, we're just setting up basically a reduced set of assumptions that we might apply to fit the model. If we extend upon that, we can then consider normal linear regression models, which Will just suggested. And in this case, those could be assumed to be independent and identically distributed-- IID is the notation for that-- Gaussian or normal with mean 0 and variance sigma squared. We can extend upon that to consider generalized Gauss-Markov assumptions, where we maintain still the zero mean for the residuals, but we might have a covariance matrix which does not correspond to independent and identically distributed random variables. Now, let's see. In the discussion of probability theory, we really haven't talked yet about matrix-valued random variables, right? But how many people in the class have covered matrix-valued or vector-valued random variables before? OK, just a handful. Well, with a vector-valued random variable, we think of the values of these n cases for the dependent variable as an n-vector of random variables. And so we can generalize the variance of individual random variables to the variance-covariance matrix of the collection. And so you have a covariance matrix characterizing the variance of the n-vector, where the (i, j) element gives us the value of the covariance. All right, let me put the screen up and just write that on the board so that you're familiar with that. All right, so we have y_1, y_2, down to y_n, our n values of our response variable. And we can basically talk about the expectation of that being equal to mu_1, mu_2, down to mu_n. And the covariance matrix of y_1, y_2, down to y_n is equal to a matrix with the variance of y_1 in the (1, 1) element, the variance of y_2 in the (2, 2) element, and the variance of y_n in the nth row and nth column. And in the (i, j) element, we have the covariance between y_i and y_j. So we're going to use matrices to represent covariances. And that's something which I want everyone to get very familiar with, because we're going to assume that we are comfortable with those, and apply matrix algebra with these kinds of constructs.
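Written out, the board's construction is

    E\begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} \mu_1 \\ \vdots \\ \mu_n \end{pmatrix}, \qquad \mathrm{Cov}(y) = \Sigma, \quad \Sigma_{ii} = \mathrm{Var}(y_i), \quad \Sigma_{ij} = \mathrm{Cov}(y_i, y_j),

and the simple Gauss-Markov assumptions on the residuals amount to E[\epsilon_i] = 0, \mathrm{Var}(\epsilon_i) = \sigma^2, and \mathrm{Cov}(\epsilon_i, \epsilon_j) = 0 for i \neq j -- that is, \mathrm{Cov}(\epsilon) = \sigma^2 I_n.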
So the generalized Gauss-Markov assumptions allow a general covariance matrix for the residuals-- the residuals can have nonzero covariances with each other. They can be correlated. Now, who can come up with an example of why the residuals might be correlated in a regression model? Dan? OK. That's a really good example, because it's nonlinear. If you imagine sort of a simple nonlinear curve and you try to fit a straight line to it, then the residuals from that linear fit are going to be consistently above or below the line, depending on where you are in the nonlinearity and how it might be fitting. So that's one example where that could arise. Any other possibilities? Well, next week we'll be talking about some time series models. And there can be time dependence amongst variables, where there are some underlying factors, maybe, that are driving the process. And those ongoing factors can persist in making the linear relationship over- or under-gauge the dependent variable. So that can happen as well. All right, yes? AUDIENCE: The Gauss-Markov is just the diagonal case? PROFESSOR: Yes, the Gauss-Markov is simply the diagonal case. And explicitly, if we replace the y's here by the residuals, epsilon_1 through epsilon_n, then that diagonal matrix with a constant diagonal is the simple Gauss-Markov assumption, yeah. Now, I'm sure it comes as no surprise that Gaussian distributions don't always fit everything. And so one needs to get clever with extending the models to other cases. And there are, you know, Laplace distributions, Pareto distributions, contaminated normal distributions, which can be used to fit regression models. And these general cases really extend the applicability of regression models to many interesting settings. So let's turn to specifying the estimator criterion, in two. So how do we judge what's a good estimate of the regression parameters? Well, we're going to cover least squares, maximum likelihood, and robust methods, which are contamination-resistant. Other methods exist that we will mention but not really get into in the lectures: Bayes methods and accommodating incomplete or missing data. Essentially, as your approach to modeling a problem gets more and more realistic, you start adding more and more complexity as it's needed. And certainly-- well, robust methods are where you assume most of the data arrives under normal conditions, but once in a while there may be some problem with the data. And you don't want your methodology just to break down if there happen to be some outliers in the data or contamination. Bayes methodologies are the technology for incorporating subjective beliefs into statistical models. And I think it's fair to say that probably all statistical modeling is essentially subjective. And so if you're going to be good at statistical modeling, you want to be sure that you're effectively incorporating subjective information in that. And so Bayes methodologies are very, very useful, and indeed pretty much required to engage in appropriate modeling. And then finally, accommodating incomplete or missing data. The world is always sort of cruel, in terms of-- you often are missing what you think is critical information to do your analysis. And so how do you deal with situations where you have some holes in your data? Statistical models provide good methods and tools for dealing with that situation. OK. Then let's see. On case analyses for checking assumptions, let me go through this.
Basically, when you fit a regression model, you check assumptions by looking at the residuals, which are basically estimates of the epsilons, the deviations of the dependent variable from their predictions. And what one wants to do is analyze these to determine whether our assumptions are appropriate. OK, under the Gauss-Markov assumptions, the question would be, do these appear to have constant variance? And it may be that their variance depends on time, if the i is indexing time. Residuals might depend on the other variables as well, and one wants to determine that that isn't the case. There are also influence diagnostics, identifying cases which are highly influential. It turns out that when you are building a regression model with data, you treat all the cases as if they're equally important. Well, it may be that certain cases are really critical for estimating certain factors. And it may be that much of the inference about how important a certain factor is is determined by a very small number of points. So even though you have a massive data set that you're using to fit a model, it could be that some of the structure is driven by a very small number of cases. So influence diagnostics give you a way of analyzing that. In the problem set for this lecture, you'll be deriving some influence diagnostics for linear regression models and seeing how they're mathematically defined. And I'll be distributing a case study which illustrates fitting linear regression models for asset prices. And you can see how those play out with some practical examples. OK, finally, there's outlier detection. With outliers, it's interesting-- the exceptions in data are often the most interesting. It's important in modeling to understand whether certain cases are unusual. And sometimes their degree of idiosyncrasy can be explained away, so that one essentially discards those outliers. But other times, those idiosyncrasies lead to extensions of the model. And so outlier detection can be very important for validating a model. OK, so with that introduction to regression, linear regression, let's talk about ordinary least squares. Ah. OK, the least squares criterion is, for a given regression parameter vector beta-- which is considered to be a column vector, so I'm taking the transpose of a row vector-- to basically take the sum of squared deviations of the actual value of the response variable from its linear prediction. So y_i minus y hat i-- we're just plugging in for y hat i the linear function of the independent variables, and then squaring that. And the ordinary least squares estimate, beta hat, minimizes this function. So in order to solve for this, we're going to use matrices. And so we're going to take the y vector, the vector of n values of the dependent variable, or the response variable, and X, the matrix of values of the independent variables. It's important in this setup to keep straight that cases go by rows and columns go by values of the independent variables. Boy, this thing is ultra sensitive. Excuse me. Do I turn off the touchpad here? OK. So we can now define our fitted value, y hat, to be equal to the matrix X times beta. And with matrix multiplication, that results in y hat 1 through y hat n. And Q of beta can basically be written as y minus X beta, transpose, times y minus X beta. So this term here is an n-vector-- minus the product of the X matrix times beta, which is another n-vector. And we're just taking the cross product of that.
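In symbols:

    Q(\beta) \;=\; \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} x_{i,j}\,\beta_j \Big)^2 \;=\; (y - X\beta)^\top (y - X\beta),

and the ordinary least squares estimate \hat{\beta} is the minimizer of Q.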
And the ordinary least squares estimate for beta solves: the derivative of this criterion equals 0. Now, that's in fact true, but who can tell me why that's true? Say again? AUDIENCE: Is that minimum? PROFESSOR: OK. So your name? AUDIENCE: Seth. PROFESSOR: Seth? Seth. Very good, Seth. Thanks, Seth. So if we want to find a minimum of Q, then that minimum, if it's a smooth function, will occur where the slope equals 0. Now, how do we know whether it's a minimum or not? It could be a maximum. AUDIENCE: [INAUDIBLE]? PROFESSOR: OK, right. So in fact, Q of beta is a convex function of beta. And so its second derivative is positive. And if you basically think about it-- this is the first derivative of Q with respect to beta equaling 0. If you were to solve for the second derivative of Q with respect to beta-- well, beta is a p-vector, so the second derivative is actually a second derivative matrix. And you can solve for it: it will be X transpose X, which is a positive definite or semi-definite matrix. So it basically has a positive second derivative there. So anyway, this ordinary least squares estimate will solve dQ of beta by d beta equals 0. What is dQ of beta by d beta_j? Well, you just take the derivative of this sum. So we're taking the sum of all these elements. And if you take the derivative-- well, OK, the derivative is a linear operator. So the derivative of a sum is the sum of the derivatives. So we take the summation out and we take the derivative of each term, so we get minus 2 x_(i,j) times the thing in square brackets, y_i minus the linear prediction. And what is that? Well, in matrix notation, if we let this sort of bold X sub square-bracket j denote the j-th column of the independent variables, then this is minus 2 times, basically, the j-th column of X, transpose, times y minus X beta. So this j-th equation for ordinary least squares has that representation in matrix notation. Now if we put that all together, we basically can define this derivative of Q with respect to the different regression parameters as minus twice the j-th columns stacked, times y minus X beta-- which is simply minus 2 X transpose, times y minus X beta. And this has to equal 0. And if we just simplify, taking out the two, we get this set of equations that must be satisfied by the ordinary least squares estimate, beta hat. And that's called the normal equations in books on regression modeling. So let's consider how we solve that. Well, we can re-express that by multiplying through the X transpose on each of the terms. And then beta hat basically solves this equation. And if X transpose X inverse exists, we get beta hat is equal to X transpose X inverse, X transpose y. So with matrix algebra, we can actually solve this. And matrix algebra is going to be very important to this lecture and other lectures. So if you're a bit rusty on this, do brush up. This particular solution for beta hat assumes that X transpose X inverse exists. Who can tell me what assumptions we need to make for X transpose X to have an inverse? I'll call on you in a second if no one else does. Somebody just said something. Someone else. No? All right. OK, Will. AUDIENCE: So X transpose X inverse needs to have full rank, which means that each of the submatrices needs to have [INAUDIBLE] smaller dimension. PROFESSOR: OK, so Will said, basically, the matrix X needs to have full rank. And so if X has full rank, then-- well, let's see.
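A minimal numerical sketch of the normal equations, with made-up data; solving the linear system rather than forming the explicit inverse is the numerically preferred route:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 100, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept + 2 regressors
    beta = np.array([1.0, 2.0, -0.5])
    y = X @ beta + rng.normal(scale=0.3, size=n)    # synthetic response

    # normal equations X'X beta_hat = X'y: solve the system directly
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    print(beta_hat)      # close to [1.0, 2.0, -0.5]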
This particular solution for beta hat assumes that X transpose X inverse exists. Who can tell me what assumptions we need to make for X transpose X to have an inverse? I'll call you in a second if no one else does. Somebody just said something. Someone else. No? All right. OK, Will. AUDIENCE: So X transpose X needs to have full rank, which means that each of the submatrices needs to have [INAUDIBLE] smaller dimension. PROFESSOR: OK, so Will said, basically, the matrix X needs to have full rank. And if X has full rank, then the singular value decomposition, which was in the very first class, exists, and you have p singular values that are all non-zero. And X transpose X can be expressed, from the singular value decomposition, as one of the orthogonal matrices times the square of the singular values times that same matrix transposed, if you recall that definition. So that basically provides a solution for X transpose X inverse from the singular value decomposition of X. But what's required is that you have full rank in X. And what that means is that you can't have independent variables that are explained by other independent variables--different columns of X can't depend linearly on other columns of X. Otherwise, you would have reduced rank. So now, if X doesn't have full rank, then our least squares estimate of beta might be non-unique. And in fact, if you are really interested in just predicting values of the dependent variable, then having non-unique least squares estimates isn't as much of a problem, because you still get predictions out. But for now, we want to assume that there's full column rank in the independent variables. All right. Now, if we plug in the solution for the least squares estimate, we get fitted values for the response variable, which are simply the matrix X times beta hat. And this expression for the fitted values is X times X transpose X inverse X transpose y, which we can represent as H y. This H matrix in linear models and statistics is called the hat matrix. It's a projection matrix that takes the vector of values of the response variable into the fitted values. So this hat matrix is quite important; the problem set is going to go into some of its properties. Does anyone want to make any comments about this hat matrix? It's actually a very special type of matrix. Does anyone want to point out what that special type is? It's a projection matrix, OK. Yeah. And in linear algebra, projection matrices have some very special properties. It's actually an orthogonal projection matrix, and if you're interested in that feature, you should look into it--there's really a very rich set of properties associated with this hat matrix. It's an orthogonal projection, and--let's see. What's it projecting? It's projecting from n-space into what? Go ahead. What's your name? AUDIENCE: Ethan. PROFESSOR: Ethan, OK. AUDIENCE: Into space [INAUDIBLE] PROFESSOR: Basically, yeah. It's projecting into the column space of X. So that's what linear regression is doing. In understanding linear regression, you can think of, how do we get estimates of this p-vector? That's all very good and useful, and we'll do a lot of that. But you can also think of it as, what's happening in the n-dimensional space? You are representing this n-dimensional vector y by its projection onto the column space. Now, the residuals are the difference between the response value and the fitted value. And this can be expressed as y minus y hat, or (I_n minus H) times y. And it turns out that I_n minus H is also a projection matrix, and it's projecting the data onto the space orthogonal to the column space of X.
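[Editor's note: an illustrative check, not from the lecture, of the projection properties of the hat matrix; the data are made up.]

```python
# Editorial sketch: H = X (X'X)^{-1} X' is symmetric and idempotent -- an
# orthogonal projection onto the column space of X -- and the residual
# (I - H) y is orthogonal to that space.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
H = X @ np.linalg.solve(X.T @ X, X.T)

print(np.allclose(H, H.T))          # symmetric
print(np.allclose(H @ H, H))        # idempotent: H^2 = H
print(round(float(np.trace(H))))    # trace(H) = rank(X) = p
resid = (np.eye(n) - H) @ y
print(np.allclose(X.T @ resid, 0))  # residuals orthogonal to the columns of X
```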
And to show that that's true, consider the normal equations, which are X transpose (y minus X beta hat) equaling 0--that is, X transpose epsilon hat equals 0. So from the normal equations, we can see what they mean: the residual vector epsilon hat is orthogonal to each of the columns of X. You can take any column of X, take its dot product with the residual vector, and get 0 coming out. So that's a feature of the residuals as they relate to the independent variables. OK, all right. So at this point, we really haven't talked about any statistical properties to specify the betas. All we've done is introduce the least squares criterion and ask, what value of the beta vector minimizes that criterion? Let's turn to the Gauss-Markov theorem and start introducing some statistical properties, probability properties. So with our data, y and X--yes? Yes. AUDIENCE: [INAUDIBLE]? PROFESSOR: That epsilon-- AUDIENCE: [INAUDIBLE]? PROFESSOR: OK. Let me go back to that. It's that the columns of X and the column vector of the residuals are orthogonal to each other. So we're not doing a projection onto a null space. This is just a statement that those column vectors are orthogonal to each other. And just to recap, epsilon hat is the projection of y onto the space orthogonal to the column space, and y hat is the projection of y onto the column space of X. These projections are all orthogonal projections, so the projected value epsilon hat must be orthogonal to the column space of X. OK? All right. So for the Gauss-Markov theorem, we have data y and X again. And now we're going to think of the observed data, little y_1 through y_n, as an observation of the random vector capital Y, composed of random variables Y_1 up to Y_n. And the expectation of this vector, conditional on the values of the independent variables and the regression parameters, is given by X beta--so the dependent variable vector has expectation given by the product of the independent variables matrix times the regression parameters. And the covariance matrix of Y given X and beta is sigma squared times the identity matrix, the n-dimensional identity matrix, which has 1's along the diagonal and 0's off the diagonal. So the variances of the Y's are the diagonal entries--those are all the same, sigma squared--and the covariance between any two is equal to 0, conditionally. OK, now the Gauss-Markov theorem. This is a terrific result in linear models theory, terrific in terms of its mathematical content. For a math class, it's really a nice theorem to introduce you to, to highlight the power of results that can arise from applying the theory. And to set this theorem up, we want to think about trying to estimate some function of the regression parameters. Our problem with ordinary least squares was, how do we specify the regression parameters beta_1 through beta_p? Let's consider a general target of interest, which is a linear combination of the betas. So we want to estimate a parameter theta which is some linear combination of the regression parameters.
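[Editor's note: an editorial restatement in symbols of the setup just described; the transcript does not name the coefficients of the linear combination, so c here is our notation.]

```latex
E[\,Y \mid X, \beta\,] = X\beta,
\qquad
\mathrm{Cov}(\,Y \mid X, \beta\,) = \sigma^{2} I_{n},
\qquad
\theta = c^{\top}\beta = \sum_{j=1}^{p} c_{j}\beta_{j},
\quad
\hat{\theta} = c^{\top}\hat{\beta}.
```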
And because that linear combination of the regression parameters corresponds to the expectation of the response variable at a given row of the independent variables matrix, this is just a generalization of trying to estimate the means of the regression model at different points in the space, or other quantities that might arise. So this is really a very general kind of thing to want to estimate; it certainly is appropriate for predictions. And if we consider the plug-in estimate--just plugging in beta hat 1 through beta hat p, solved by least squares--well, it turns out that that is an unbiased estimator of the parameter theta. So if we're trying to estimate this combination of the unknown parameters and you plug in the least squares estimate, you're going to get an estimator that's unbiased. Who can tell me what unbiased is? It's probably going to be a new concept for some people here. Anyone? OK, well, it's a basic property of estimators in statistics where the expectation of the statistic is the true parameter. So, on average, probabilistically, it doesn't over- or underestimate the value. That's what unbiased means. Now, it's also a linear estimator of theta, in the sense that this theta hat is a particular linear combination of the dependent variables. So with our original response variables y_1 through y_n, this theta hat is simply a linear combination of all the y's. And why is that true? Well, we know that beta hat, from the normal equations, is X transpose X inverse X transpose y--a linear transform of the y vector. And if we take a linear combination of its components, that's also another linear combination of the y vector. So this is a linear function of the response variables. Now, the Gauss-Markov theorem says that, if the Gauss-Markov assumptions apply, then the estimator theta hat has the smallest variance amongst all linear unbiased estimators of theta. So it actually is the optimal one, as long as this is our criterion. And this is really a very powerful result.
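[Editor's note: a Monte Carlo illustration of the BLUE property, not from the lecture. The competitor here is weighted least squares with arbitrary weights, which is still linear and unbiased when the errors are truly homoskedastic; all numbers are made up.]

```python
# Editorial sketch: OLS has the smallest variance among linear unbiased
# estimators. Compare against WLS with arbitrary (wrong) weights.
import numpy as np

rng = np.random.default_rng(4)
n, trials = 40, 20000
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
beta = np.array([1.0, 2.0])
W = np.diag(rng.uniform(0.2, 5.0, size=n))     # arbitrary positive weights

A_ols = np.linalg.solve(X.T @ X, X.T)          # OLS: beta_hat = A_ols @ y
A_wls = np.linalg.solve(X.T @ W @ X, X.T @ W)  # a competing linear unbiased estimator

est_ols, est_wls = [], []
for _ in range(trials):
    y = X @ beta + rng.normal(size=n)          # homoskedastic errors
    est_ols.append(A_ols @ y)
    est_wls.append(A_wls @ y)

print(np.var(np.array(est_ols), axis=0))  # componentwise smaller than ...
print(np.var(np.array(est_wls), axis=0))  # ... the competitor's variances
```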
And the proof is very easy. Actually, these notes are going to be distributed, so I'm going to go through this very, very quickly and come back to it later if we have more time. The argument for the proof is that you consider another linear estimator, theta tilde, which is also unbiased--a competitor to the least squares value--and then look at the difference between that estimator and theta hat. That difference can be characterized as a vector f transpose y, and it must have expectation 0. So if theta tilde is unbiased, then this expression is equal to zero, which means that f, which defines the difference between the two estimators, has to be orthogonal to the column space of X. And with this result, one then uses this orthogonality of f and d to evaluate the variance of theta tilde. In this proof, I should put some asterisks on a few lines: this expression here is actually very important. We're looking at the decomposition of the variance of b transpose y as the variance of the sum of two random variables; the page before defined d and f such that this is true. Now, when you consider the variance of a sum, it's not just the sum of the variances--it's the sum of the variances plus twice the sum of the covariances. So when you are calculating variances of sums of random variables, you have to really keep track of the covariance terms. In this case, the argument shows that the covariance terms are, in fact, 0, and you get the result popping out. In an econometrics class, they'll talk about BLUE estimates of regression--best linear unbiased estimator--or the BLUE property of the least squares estimates. That's where that comes from. All right, so let's now generalize from Gauss-Markov to allow for unequal variances and possibly nonzero covariances between the components. In this case, the regression model has the same linear setup. The expectation of the residual vector is still 0, but the covariance matrix of the residual vector is sigma squared, a single parameter, times, let's say, capital Sigma. And we'll assume here that this capital Sigma matrix is a known n by n positive definite matrix specifying relative variances and correlations between the observations. OK. Well, in order to solve for regression estimates under these generalized Gauss-Markov assumptions, we can transform the data (Y, X) to Y star equals Sigma to the minus 1/2 times Y, and X star equals Sigma to the minus 1/2 times X. And this then becomes a linear regression model in terms of Y star and X star--we're basically multiplying the regression model through by Sigma to the minus 1/2. And epsilon star actually has a covariance matrix equal to sigma squared times the identity. So if we just take a linear transformation of the original data, we get a representation of the regression model that satisfies the original Gauss-Markov assumptions. What we had to do was a linear transformation that makes the response variables all have constant variance and be uncorrelated. So with that, the least squares estimate of beta is the ordinary least squares estimate in terms of Y star and X star. Plugging that in, we have (X star transpose X star) inverse, X star transpose Y star, and if you multiply through, that's how the formula changes. This formula, characterizing the least squares estimate under the generalized set of assumptions, highlights what you need to do to apply that theorem. With response values that have very large variances, you basically want to discount those by the Sigma inverse. That's part of the way in which generalized least squares works.
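[Editor's note: a sketch, not from the lecture, of generalized least squares via the whitening transform; the Sigma matrix and data are made up.]

```python
# Editorial sketch: with Cov(eps) = sigma^2 * Sigma known, whiten by
# Sigma^{-1/2} and run OLS; equivalently beta_hat = (X'S^{-1}X)^{-1} X'S^{-1} y.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
n, p = 25, 2
X = rng.normal(size=(n, p))
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)   # an arbitrary positive definite matrix
y = rng.normal(size=n)

S_inv_half = np.linalg.inv(sqrtm(Sigma)).real
X_star, y_star = S_inv_half @ X, S_inv_half @ y

beta_whitened, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
beta_gls = np.linalg.solve(X.T @ np.linalg.inv(Sigma) @ X,
                           X.T @ np.linalg.inv(Sigma) @ y)
print(np.allclose(beta_whitened, beta_gls))  # True: the two routes agree
```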
All right. So now let's turn to distribution theory for normal regression models. Let's assume that the residuals are normal with mean 0 and variance sigma squared. Conditioning on the values of the independent variables, the Y's, the response variables, are going to be independent over the index i. They're not going to be identically distributed, because they have different means, mu_i for the response variable Y_i, but they will have constant variance. What we can do is condition on X, beta, and sigma squared, and then represent this model in terms of the distribution of the epsilons. If we're conditioning on X and beta, this X beta is a known constant--we've conditioned on it--and the remaining uncertainty is in the residual vector, which is assumed to consist of independent and identically distributed normal random variables. Now, this is the first time you'll see this notation, capital N sub little n, for a random vector. It's a multivariate normal random variable: an n-vector where each component is normally distributed, with mean given by the corresponding entry of a mean vector, and with a given covariance matrix. In terms of independent and identically distributed values, the probability structure here is totally well-defined. Anyone here who's taken a beginning probability class knows what the density function is for this multivariate normal distribution, because it's the product of the density functions for the independent components--they're all independent random variables. So this multivariate normal random vector has a density function which you can write down, given your first probability class. OK, here I'm just defining the mu vector for the means of the cases of the data. And the covariance matrix Sigma is this diagonal matrix--Sigma_(i,j) is equal to sigma squared times the Kronecker delta for the (i,j) element. Now what we want to do is, under the assumption of normally distributed residuals, solve for the distribution of the least squares estimators. We want to know, basically, what kind of distribution they have. Because what we want to be able to do is determine whether estimates are particularly large or not. Maybe there's no structure at all and the regression parameters are 0, so that there's no dependence on a given factor, and we need to be able to judge how significant that is. So we need to know the distribution of our least squares estimate. What we're going to do is apply moment generating functions to derive the joint distribution of Y and the joint distribution of beta hat. Choongbum introduced the moment generating function for individual, single-variate random variables. For n-variate random variables, we can define the moment generating function of the Y vector to be the expectation of e to the t transpose Y. So t is an argument of the moment generating function; it's another n-vector. And it's equal to the expectation of e to the t_1 Y_1 plus t_2 Y_2, up to t_n Y_n. So this is a very simple definition. Because of independence, the expectation of this exponential of a sum is the product of the expectations of the exponentials, and so this moment generating function is simply the product of the moment generating functions for Y_1 up through Y_n. I don't know if it was in the first problem set or in the first lecture, but e to the t_i mu_i plus a half t_i squared sigma squared is the moment generating function for a single univariate normal random variable with mean mu_i and variance sigma squared. And if we have n of these, we take their product, and the moment generating function for Y is simply e to the t transpose mu plus 1/2 t transpose Sigma t. So for this multivariate normal distribution, this is its moment generating function.
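[Editor's note: an editorial reconstruction in LaTeX of the derivation just described verbally, under the i.i.d. normal assumption:]

```latex
\begin{aligned}
M_Y(t) &= E\left[e^{t^\top Y}\right]
        = \prod_{i=1}^{n} E\left[e^{t_i Y_i}\right]
        = \prod_{i=1}^{n} e^{\,t_i \mu_i + \frac{1}{2} t_i^2 \sigma^2} \\
       &= e^{\,t^\top \mu + \frac{1}{2} t^\top \Sigma t},
        \qquad \Sigma = \sigma^2 I_n .
\end{aligned}
```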
And this is exactly the moment generating function of a multivariate normal with mean mu and covariance matrix Sigma. So a fact that we're going to use is that if we're working with multivariate normal random variables, this is the structure of their moment generating functions. And if we solve for the moment generating function of some other item of interest and recognize that it has the same form, we can conclude that it's also a multivariate normal random variable. So let's do that. Let's solve for the moment generating function of the least squares estimate, beta hat. Now rather than dealing with an n-vector, we're dealing with a p-vector, the beta hats. And this is simply the definition of the moment generating function. If we plug in the functional form of the ordinary least squares estimate and how it depends on the underlying Y, then we have A, the matrix such that beta hat equals A times Y. And then we can say that this moment generating function for beta hat is equal to the expectation of e to the t transpose Y, where little t is A transpose tau. Well, we know what this is. This is the moment generating function of Y evaluated at the vector little t. So we just need to plug in little t, that expression A transpose tau. Let's do that. You do that, it turns out to be e to the t transpose mu plus the quadratic term, and we go through a number of calculations. At the end of the day, we get that the moment generating function is just e to the tau transpose beta plus 1/2 tau transpose this matrix tau. And that is the moment generating function of a multivariate normal. So these few lines, which you can go through after class, solve for the moment generating function of beta hat. And because we can recognize this as the MGF of a multivariate normal, we know that beta hat is multivariate normal, with mean the true beta and covariance matrix given by the object in square brackets there. OK, so this is essentially the conclusion of that previous analysis. The marginal distribution of each of the beta hats is given by a univariate normal distribution with mean beta_j and variance equal to the corresponding diagonal entry of that covariance matrix. Now at this point, saying that is like an assertion, but one can actually prove it very easily, given this sequence of argument. Can anyone tell me why this is true? Let me tell you. If you plug into the moment generating function a value tau where only the j-th entry is non-zero, then you have the moment generating function of the j-th component of beta hat. And that's a Gaussian moment generating function, so the marginal distribution of the j-th component is normal. You get that almost for free from this multivariate analysis. So there's no hand waving going on in having that result--it follows directly from the moment generating functions.
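[Editor's note: a simulation, not from the lecture, of the sampling distribution of beta hat under normal errors, compared with the covariance matrix sigma squared times (X'X) inverse that the MGF argument yields; all numbers are made up.]

```python
# Editorial sketch: simulate beta_hat = A y many times and compare the
# empirical mean and covariance with the theory.
import numpy as np

rng = np.random.default_rng(6)
n, sigma, trials = 30, 0.5, 20000
X = np.column_stack([np.ones(n), np.linspace(-1, 1, n)])
beta = np.array([0.3, 1.7])
A = np.linalg.solve(X.T @ X, X.T)   # beta_hat = A @ y

draws = np.array([A @ (X @ beta + rng.normal(scale=sigma, size=n))
                  for _ in range(trials)])
print(draws.mean(axis=0))                   # ~ beta (unbiased)
print(np.cov(draws.T))                      # ~ theoretical covariance:
print(sigma**2 * np.linalg.inv(X.T @ X))
```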
OK, let's now turn to another topic--related, but it's the QR decomposition of X. With our independent variables matrix X, we want to express it as a product of an orthonormal matrix Q, which is n by p, and an upper triangular matrix R. It turns out that any n by p matrix can be expressed in this form, and we'll quickly show how, by conducting a Gram-Schmidt orthonormalization of the matrix X. If we define R, the upper triangular matrix, to have 0's below the diagonal and possibly nonzero values on and above the diagonal, we can solve for Q and R through this Gram-Schmidt process. The first column of X is equal to the first column of Q times the top left element of R. Taking the dot product of that first column with itself gives this expression for r_(1,1) squared, so we can solve for r_(1,1) as the square root of the dot product. And Q_[1], the first column of Q, is simply the first column of X divided by that square root. So this first column of the Q matrix and the first element of R can be solved for right away. Then let's solve for the second column of Q and the second column of the R matrix. Well, X_[2], the second column of the X matrix, is the first column of Q times r_(1,2), plus the second column of Q times r_(2,2). If we multiply this expression by Q_[1] transpose, we get this expression for r_(1,2)--so we have just solved for r_(1,2). And Q_[2] is solved for by the arguments given here. So we successively orthogonalize columns of X against the previous columns through this Gram-Schmidt process, and it can be repeated through all the columns. Now with this QR decomposition, we get a really nice form for the least squares estimate. It simplifies to the inverse of R times Q transpose y. And this means that you can solve for least squares estimates by calculating the QR decomposition, which is a very simple linear algebra operation, and then just doing a couple of matrix products--well, you do have to invert R, but since R is upper triangular, that's just a back-substitution. And the covariance matrix of beta hat is equal to sigma squared times X transpose X inverse. In terms of the covariance matrix, what is implicit here, but you should make explicit in your study, is that if you consider taking a matrix, R inverse Q transpose, times y, the only thing that's random there is the y vector, OK? The covariance of a matrix times a random vector is that matrix, times the covariance of the vector, times the transpose of the matrix. So if you take a matrix transformation of a random vector, the covariance of the transformation has that form. That's where this covariance matrix is coming into play. And from the MGF, the moment generating function, for the least squares estimate, this comes out of the moment generating function definition as well. And if we take X transpose X and plug in the QR decomposition, only the R's survive, because Q transpose Q is the identity--X transpose X equals R transpose R. Now, this also gives us a very nice form for the hat matrix, which turns out to just be Q times Q transpose. So that's a very simple form.
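[Editor's note: a sketch, not from the lecture, of least squares via the QR decomposition; the data are made up.]

```python
# Editorial sketch: with X = QR, beta_hat = R^{-1} Q'y via back-substitution,
# and the hat matrix is H = Q Q'.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(7)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

Q, R = np.linalg.qr(X)                       # "reduced" QR: Q is n x p
beta_qr = solve_triangular(R, Q.T @ y)       # back-substitution, no explicit inverse
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
print(np.allclose(beta_qr, beta_ne))         # True
print(np.allclose(X @ np.linalg.inv(X.T @ X) @ X.T, Q @ Q.T))  # H = QQ'
```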
So now, with the distribution theory, this next section is going to prove what's really a fundamental result about normal linear regression models. I'm going to go through it somewhat quickly, just to cover the main ideas of the theorem; the details, I think, are very straightforward, and the course notes that will be posted online go through the various steps of the analysis. There's an important theorem here, which says that for any m by n matrix A, the transformed vector Z equals A Y is also a multivariate normal random vector, and its mean and covariance matrix, mu_Z and Sigma_Z, have simple expressions in terms of the matrix A and the underlying mean and covariance of Y. Earlier, we actually applied this theorem with A corresponding to the matrix that generates the least squares estimates--A equal to X transpose X inverse, X transpose--when we went through the distribution of beta hat. And with any other matrix A, we can go through the same analysis and get the distribution. So if we do that here, we can prove this important theorem, which says that with least squares estimates of normal linear regression models, our least squares estimate beta hat and our residual vector epsilon hat are independent random variables. So when we construct these statistics, they are statistically independent of each other. The distribution of beta hat is multivariate normal. The sum of the squared residuals is, in fact, a multiple of a chi-squared random variable. Now, who in here can tell me what a chi-squared random variable is? Anyone? AUDIENCE: [INAUDIBLE]? PROFESSOR: Yes, that's right. A chi-squared random variable with one degree of freedom is a squared normal (0,1) random variable. A chi-squared with two degrees of freedom is the sum of the squares of two independent normal (0,1) random variables. And the sum of the n squared residuals here is a chi-squared random variable with n minus p degrees of freedom, scaled by sigma squared. And for each component j, if we take the difference between the least squares estimate beta hat j and beta_j, and divide through by the estimate of its standard deviation, then that will, in fact, have a t distribution on n minus p degrees of freedom. And a t distribution in probability theory is the ratio of a standard normal random variable to the square root of an independent chi-squared random variable divided by its degrees of freedom. So these properties characterize our regression parameter estimates and the t statistics for those estimates. Now, in the course notes, there's a moderately long proof. All the details are given, and I'll be happy to go through any of them with people during office hours. Let me just push on--we have maybe two minutes left in the class--and talk about maximum likelihood estimation. In fitting models in statistics, maximum likelihood estimation comes up again and again. And with normal linear regression models, it turns out that the ordinary least squares estimates are, in fact, the maximum likelihood estimates. With maximum likelihood, we define the likelihood function, which is the density function for the data given the unknown parameters, and we maximize it. Here, this density function is simply the density function for a multivariate normal random variable. The maximum likelihood estimates are the values of the underlying parameters that maximize the density function--the values that make the data that was observed the most likely. And if you plug in the density function--we have these independent random variables Y_i whose densities multiply to give the joint density--the likelihood function turns out to be a function of the least squares criterion. So if you fit models by least squares, you're at least consistent with applying the maximum likelihood principle, if you had a normal linear regression model. And it's useful to know when your statistical estimation algorithms are consistent with certain principles like maximum likelihood estimation. So let me, I guess, finish there. And next time, I will talk a little bit about generalized M-estimators. Those provide a class of estimators that can be used for finding robust estimates, and also quantile estimates, of regression parameters, which are very interesting.
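[Editor's note: a sketch, not from the lecture, assembling the regression t statistics from the pieces in the theorem above; the data are made up.]

```python
# Editorial sketch: s^2 = ||resid||^2 / (n - p) estimates sigma^2, and
# t_j = beta_hat_j / (s * sqrt([(X'X)^{-1}]_{jj})) tests H0: beta_j = 0,
# with a t distribution on n - p degrees of freedom.
import numpy as np

rng = np.random.default_rng(8)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.0, 2.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)            # unbiased estimate of sigma^2
se = np.sqrt(s2 * np.diag(XtX_inv))     # standard errors
print(beta_hat / se)                    # t statistics; middle one should be small
```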
MIT 18.S096 Topics in Mathematics with Applications in Finance
Lecture 13: Commodity Models
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So what I would like to do today is just touch upon what kind of problems the quantitative analysts are solving in the commodity world--problems that are somewhat different from the other markets. I'm sure you have a whole year of lectures here from people in different markets, and you will judge for yourself that the models we're building are somewhat, if not completely, different. So this is my goal today: for you to have some taste of what kind of models we're looking at. So let's start. Let's start with the following abstract from a Dow Jones dispatch, which announced in 2009 that Trafigura--one of the biggest commodity and energy traders--was potentially on track to post its best results ever in fiscal 2009, on lower oil prices and contango markets. Remember, in 2008, just the year before, oil prices shot to nearly $150 per barrel, and a lot of people blamed the traders for the high oil prices. And yet in 2009, the next year, when the prices dropped to $30 per barrel--the lowest level that we remember--Trafigura was about to make its biggest profit ever. So it sounds like a contradiction: they're making money on low oil prices and contango. And contango is something I have to explain to you. I assume that everybody here knows what a futures contract is. If not, just let me know and I will explain--it's a very simple concept, so don't be shy. So I would like to show you how Trafigura could make a record profit in a year when prices were at a record low, but in contango. This is the graph of oil futures prices on January 15, 2009. A futures contract is simply a contract that allows you to buy today--let's say, in January--a barrel of oil for delivery at some future time. That's all. You always know what the price will be by looking in the Wall Street Journal, Section C. If you want delivery in August, the price will be around, what, $55? If you want delivery in February of next year--so February 2010--the price will be $60. If you want delivery now, the price will be $35. So if you want to buy now, that's basically the spot price. For your knowledge, there is no spot price as such. Whatever you see on CNBC, for example, when they give you the spot price of oil, it's the price of the first, most nearby futures contract--so this one. So as you can see, this is the curve. And recall that the curve is said to be in contango if the prices are monotone increasing; if they are decreasing, it's called backwardation. That's all--just useful terms for you to know. So at that time, January 15th, the prices indeed were monotone increasing. This is February of 2009, this is February of 2010, and we can see the prices going from $35 to $60. So let's now see how Trafigura made money. Can you guess, by the way? Yes. AUDIENCE: They borrowed money, bought the spot and then sold the futures contract?
PROFESSOR: Exactly. So that's precisely what they've done. What is needed for that, though? There's one little thing that's required. AUDIENCE: Low interest rates and ability to buy the spot? PROFESSOR: The price gives you the ability to buy, yes. But you need a little bit more. You need-- AUDIENCE: [INAUDIBLE] PROFESSOR: I'm sorry? AUDIENCE: [INAUDIBLE] PROFESSOR: No, no, you lock in. You bought at $35. AUDIENCE: Storage. PROFESSOR: Storage--you need the storage. You need to be able to wait one year, because you already sold at $60 for next year, in February. So you locked in a massive profit. This is your strategy: just borrow $35, buy one barrel, store it, and immediately sell--short means just go and sell--one barrel for next year at $60. So you made $25 just on the commodity; in February next year, you will get this $25. You'll pay the interest, which will be maybe, let's say, 10%, so $3.50. So you made a total profit of $21.50. If, like Trafigura, you have 50 or 60 million barrels of storage, you can easily calculate how much money they made, without any risk, in this particular year. So whenever you hear that traders are benefiting from high prices--it's not so. Actually, if the same situation existed at high prices, the interest cost would be substantially higher. It could cut their profit by 50% if, let's say, the same spread existed with prices going from $125 or $135 to $160--you would have an enormous interest payment. So in reality, low prices help, but what you really want is the contango; you don't really care whether the price is low or high. So to summarize--just one little thing here--the strategy works like a charm. The only thing you need is storage.
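[Editor's note: the cash-and-carry arithmetic of the Trafigura example above as a few lines of Python; the numbers are the ones used in the lecture, with the 10% rate being the professor's rough figure.]

```python
# Cash-and-carry: buy spot, store, sell the one-year future, pay financing.
spot = 35.0          # buy one barrel now (nearby futures price, $/bbl)
forward = 60.0       # simultaneously sell one barrel forward, Feb 2010
rate = 0.10          # borrowing rate for one year

gross = forward - spot       # $25 locked in on the commodity leg
interest = rate * spot       # $3.50 to finance the purchase
print(gross - interest)      # 21.50, riskless -- given the storage
```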
OK, let me now invert the question. Let me ask you this: let's say we have the same curve. We are on January 1st, 2009, and we are asked to get storage. Your boss calls you and says, I need oil storage from August to December. In August, the price is, what, $55; in December, $58. Go and get it--here's your credit card. Well, storage you usually get at auctions: a lot of people come to the auction and bid. So now you're going with the boss's credit card, and you have the following dilemma. If you bid too little, you will never get it. If you bid too much, you'll have the winner's curse--namely, you will never be able to recover the money that you paid for that storage. So before you bid, you have to have a plan, a strategy: how do you recover the money that you pay for the storage through some foolproof, riskless activity--some strategy like Trafigura's? Remember, they have the riskless strategy: they lock in the profit, they can go on vacation for a year, then come back and collect it. Whenever you bid something, you have to have a plan in mind for how to recover this money, and even a little more, to get some profit. So my question to you: you need storage from August to December. How much would you bid? These are the prices; if you need more, they're all here. AUDIENCE: Do the same thing? Do the same thing they did before? PROFESSOR: So how much would you bid for the storage? You don't have the storage, so you have to get it first--when you get it, then you do what we did before, because before, we assumed we already had it. Right now, you don't. But if you win it, you have to have a strategy for how to recover the money. Yes? AUDIENCE: Depends on how much profit you want to make. PROFESSOR: Well, not quite, because remember, you are competing against other people. If you become too greedy, they will outbid you. So, bid one penny? What is the highest you can bid? AUDIENCE: $3? PROFESSOR: Uh-huh. You say $3. All right, give me the strategy that recovers that $3. AUDIENCE: But we do the same process. We borrow money, buy-- PROFESSOR: You don't have to borrow, because it's a futures contract. You just go long the August--buy August using the futures contract--and sell December using the futures contract. You don't even need to borrow money. You will need to borrow, maybe, when you get to August--when you have to pay the $55, then you'll borrow. So you bid $3, you recover $3. No profit. So most likely, you will bid $2.99, right? That's probably the highest you can go--you have to give yourself at least a penny of profit. AUDIENCE: The interest. PROFESSOR: I'm sorry? AUDIENCE: Also the interest. PROFESSOR: Let's forget about interest for a moment, just for simplicity. In reality, of course, you never should forget about it, but in order not to make our discussion too complicated: no interest. So you basically bid $2.99, it seems to me. You get a penny of profit, and you have a strategy if you get the storage. You know what to do: you immediately buy August using futures contracts and sell December, locking in $3. You pay $2.99, and if everybody else is not as smart as you are--or maybe they want a bigger profit--you win the storage and lock it in. OK, everybody agrees, right? This is actually a standard strategy; people use it all the time. That's what I would call what the trader, the business guy, would do. It was a very common strategy, let's say, in the '90s. What the quant will do--and that's where the added value of the quant to the organization is--is something completely different. On January 1st, the quant will sell something called an August-December spread option. You've heard the word option, right? An option is characterized by its payout at expiration. So we have an expiration and we have the payout. This is not your typical option, not like an IBM option--this is something different. The payout is determined at the expiration, which is, let's say, July 31st, right before the beginning of August. You look again at the Wall Street Journal at the December and August prices on July 31st. If the difference is positive--this little plus sign means if it's positive--then you pay the owner of the option this difference. If it's negative, you pay zero. You're more familiar with options where one of these legs is just a fixed strike. Here, there's no strike; it's simply the difference between the December and August contracts. But it's a two-dimensional object, so it's a little bit more complicated to value.
But there is a whole methodology developed for these, the same way as for regular options--for the spread options. So the quant will sell this spread option on January 1st. Why is it better? Well, first of all, we have to discuss how much I will get for this option. This is the formula that is used to compute its value--I show it to you just so you're confident that I'm not trying to deceive you. And when I substituted all the necessary parameters, I got that the value of the option is $4.47. So I immediately, on January 1st, got $4.47. Everybody else is getting ready to bid $3; I have $4.47 in my pocket. I can bid, let's say, $4.20. If I know that everybody else is bidding around $3, I will win the storage. Plus, my profit margin is not a penny anymore--my profit margin is $0.27. I can even bid $4.10 and increase my profit margin, if I really want to be greedy. But clearly, I can have a bigger margin. You can guess, by the way, without even looking at the formula, why the value of this option, with this payout, is bigger than $3. It's always bigger than $3. AUDIENCE: Discount? PROFESSOR: No--forget, again, about interest rates. It's not the discount. On January 1st, what is the intrinsic value? At zero volatility, the value of the option is exactly the difference between December and August on January 1st, which is $3. So $3 is the intrinsic value of that option. And we all know that the value of an option is greater than its intrinsic value if there is volatility. Because there is volatility, the value is greater than $3. And actually, it's substantially greater than $3, because the volatility in the energy markets is very high--much, much higher than what you see in interest rates, or FX, or equity indices. Yes? AUDIENCE: So you're getting money from taking on more risk, basically? PROFESSOR: We'll get to that. You are asking exactly the correct question. Because yes, I am proud: I got $4.47, I bid $4.20, and of course I won. I got my storage and I bring it back home. Now let's see what kind of risk I brought home. I have $0.27 in my pocket--that's my profit. But now let's assume that on July 31st, the December price goes to $80 and the August price goes to $55. I sold this option. How much do I owe on July 31st, when the option is exercised--how much do I owe to the owner of the option if, on July 31st, December is $80 and August is $55? Yes, $25. I have only $0.27 in my pocket, right? So what do I do? That's my risk, as you're telling me: all of a sudden, I owe an astronomical amount of money to the person to whom I sold this option. So what do I do? Do I run to Venezuela, or what? Can you guess? Remember, I have storage, and on July 31st, the August price is $55 and the December price is $80. You already told me what to do in this situation--that's what the traders would do. I immediately buy August at $55 and immediately sell December, because I have the storage. Now I can extract this $25 using my physical asset. So this is the beauty of physical, or real, options: by doing certain things, I can extract the payout of the option. So I'm completely protected.
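[Editor's note: the lecture does not show the formula itself. A standard choice for a zero-strike spread option on two futures is Margrabe's exchange-option formula, sketched below as an editorial illustration; the volatilities and correlation are made up, the futures are assumed lognormal, and discounting is ignored as in the lecture. With these made-up inputs the value comes out near $4.7, the same ballpark as the $4.47 quoted.]

```python
# Margrabe's formula for the payoff max(F_dec - F_aug, 0) at expiry.
from math import log, sqrt
from statistics import NormalDist

def margrabe(F1, F2, vol1, vol2, rho, T):
    """Value of max(F1 - F2, 0) at time T; lognormal futures, no discounting."""
    N = NormalDist().cdf
    sigma = sqrt(vol1**2 + vol2**2 - 2 * rho * vol1 * vol2)  # spread volatility
    d1 = (log(F1 / F2) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return F1 * N(d1) - F2 * N(d2)

# December vs. August futures from the lecture's curve; ~7 months to expiry.
print(margrabe(F1=58.0, F2=55.0, vol1=0.35, vol2=0.40, rho=0.9, T=7/12))
```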
So it seems to me that my storage and that spread option are the same thing, because I'm fully hedged with my storage. Is that clear? Do you understand how I managed to escape a terrible predicament? Yes? AUDIENCE: What happens when the value of what you have in storage falls greater than what you received from the auction? PROFESSOR: But it's an auction, right? So I received $4.47. Everybody bids $3 or $2.99, and I bid $4.20. There's no value of the storage except what people are bidding for it. If my bid is the highest, that's it--I receive it. So is that your question, or are you asking me something else? AUDIENCE: I want to visualize the payoff. It's when the-- PROFESSOR: Oh, you're talking about what happens if, vice versa, December goes not to $80 but to $20, and August still remains at $55. I do nothing, because the payout of the option is equal to zero--the difference is negative. I owe nothing to the owner of the option, and I do nothing with my storage. It's a fully hedged proposition. So the storage and this option are one and the same. OK? Any questions? So conceptually, we're on the same page, right? OK, that was, of course, a caricature of the real situation. Reality is somewhat more complicated, as always. In reality, let's say I go and bid for storage for two years. There are many spread options I can sell. In my caricature example, I sold August-December. I could have sold August-November, August-October, or May-November, June-September, and so on. Over two years there are 24 months, so you can understand how many such options I can sell--options to put oil into the tank today and extract it, say, three or six months later, with some profit. So there are a lot of these options, and I can sell a lot of them against the storage. First of all, I have to determine: what is the most valuable portfolio of options I can sell against the storage? So our strategy starts shaping up--we want to optimize something. And what are we optimizing? The value of the portfolio that we can sell against the storage. Value is the cash I get from selling the options--remember the cash I got from selling the option? I don't want just $4.47; I want to maximize it, to bring it to the highest possible level based on the information--price information, volatility information--that exists at this particular moment. I want to determine what kind of portfolio of options to sell. Now, this is optimization. What are the constraints? The constraints require some technical, contractual, legal, and environmental understanding of what's going on. The simplest constraints I can give you are that you cannot put gas in the ground, or oil in the tank, as quickly as you want. There are certain constraints on the injection rate, and certain constraints on the withdrawal rate--you cannot instantaneously extract gas from the ground or oil from the tank. And remember that whenever an option expires, you have to do something to extract the value: whenever you owe the option holder $15 or $25, you have to put the oil in the tank, wait six months, and extract it.
Your option portfolio should be sold against the storage in such a way that under no circumstances you are in the situation where an option is in the money--so you have to do something to extract the value--but you cannot. That is: you have to inject, and there's no space, because some other options expired before, you already put oil in the tank, and the tank is full; now a new option expires and tells you to put more into the tank, and there's no space. Or the opposite situation, when you need to sell from the tank and the tank is empty, because other options from the portfolio have completely depleted the oil from it. Questions? So let's make an attempt to write this optimization problem, just for fun, so that you have an idea of what's going on. First of all, let's see what we're trying to optimize, starting from the end. F here is the price of the futures contract with expiration in, let's say, the month of June. So this particular term of the sum tells me how much I will have to pay in June if I buy this amount of oil--this is the volume--using my futures contract, given that right now it's January. This is cash out of my pocket; I think that's clear. The same thing here: this is just straightforward buying and selling--buying into the storage and selling from the storage. Agreed? A little bit more complicated is this term. This is the option I told you about before--the option that injects oil into the tank in month i and extracts it in month j, which is later. So you inject in June, extract in November. This is the value of the option, and this is the volume associated with it. I sold this option; therefore, I have a positive cash flow--this is the $4.47 in my pocket. This is another option that I sold, of the opposite kind: first you extract, then you inject. That's the typical situation when you receive your storage with oil already in the tank, and the curve is not in contango but in backwardation--it's more profitable for you to sell now, because prices are high, and replace the oil later. So it's a symmetric situation to the options we already discussed; it just works when the curve goes in the opposite direction. You try to make money no matter how the prices look. So you try to optimize this portfolio. OK? Now, let's talk about constraints. To define the constraints, I introduce a Boolean variable, which is 1 or 0, and which tells me whether, when I come to the exercise of an option--like on July 31st in our first example--the option is in the money or not. Do I have to do something or not? Do I have to inject into the storage? Is the difference between the prices plus $15 or minus $15? If it's minus $15, I don't do anything, so the variable is 0. If it's plus $15, I have to do something, because the owner of the option asks me for $15. The same goes for the opposite option--first withdrawal, then injection.
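[Editor's note: an editorial toy version, not the lecture's actual formulation. This is the purely intrinsic storage problem--trade only the current futures curve, choosing monthly injections and withdrawals under rate and capacity limits--which is a linear program. The curve, rates, and capacities below are made up; the full problem in the lecture, with the omega Booleans, is a much harder nonlinear and integer program.]

```python
# Toy intrinsic storage valuation as an LP. Variables per month: inject
# u_t >= 0, withdraw w_t >= 0. Maximize sum_t F_t * (w_t - u_t) subject to
# rate limits, 0 <= inventory <= cap, and ending empty.
import numpy as np
from scipy.optimize import linprog

F = np.array([35, 42, 47, 50, 53, 55, 56, 57, 58, 59, 59.5, 60.0])  # made-up curve
T = len(F)
inj_max, wdr_max, cap = 1.0, 1.0, 4.0   # per-month rates and tank size

# x = [u_1..u_T, w_1..w_T]; linprog minimizes, so the objective is F.u - F.w.
c = np.concatenate([F, -F])

# Inventory I_t = cumsum(u - w) must satisfy 0 <= I_t <= cap each month.
L = np.tril(np.ones((T, T)))            # cumulative-sum operator
A_ub = np.vstack([np.hstack([L, -L]),   # I_t <= cap
                  np.hstack([-L, L])])  # -I_t <= 0 (no negative inventory)
b_ub = np.concatenate([cap * np.ones(T), np.zeros(T)])

A_eq = np.hstack([np.ones((1, T)), -np.ones((1, T))])   # end empty: I_T = 0
b_eq = [0.0]

bounds = [(0, inj_max)] * T + [(0, wdr_max)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)   # intrinsic value of the storage over this curve
```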
So the injection constraints will be quite simple. At each expiration, I look, and if this Boolean is 1, it means I have to do something. And what I have to do is inject, in month i, this volume x_(i,j), which is withdrawn later, in month j. If some option exercised before requires a withdrawal now, that will offset my injection. So if, simultaneously, I have an injection because one option is in the money, and a withdrawal because an option exercised before is in the money, they cancel each other--that's why the signs here are plus and minus. The same applies to the other options, and the same to pure futures contracts. And the total should be less than the injection rate. The same consideration gives the withdrawal rate constraint. Now, the maximum capacity constraint tells me that if I start with a given inventory in my tank at time 0, this expression is how much I will have in my tank by any month i. You have to believe me--it's not a trivial thing. And the same goes for the minimum capacity constraints. What you really have to understand is that the unknowns are the volumes x, y, z, v--and unfortunately, omega is also unknown. Omega is the Boolean control variable, and it depends on the prices: the spread can be positive, negative, or 0. And this becomes a very ugly, non-linear problem very quickly. Very big, too--you have a lot of variables. For two years, you have 24 months, so on the order of 24 times 23 pairs of months--a big number of variables, plus an equally big number of constraints, and the constraints are non-linear. So the problem is pretty hard. I leave it up to you to decide how to solve it--that's why you take optimization courses here. I can suggest several approaches people take. There's approximation, where the problem is approximated by, let's say, linear programming or quadratic programming. You can do it through Monte Carlo simulations. Or there is an interesting approach through stochastic control. I recommend a paper by Carmona and Ludkovski on exactly this--how to make the decision to inject or withdraw oil based on the stochastic control, stochastic optimization methodology. It is quite an interesting paper. Either of these approaches can be used, and they are used; they sometimes give you different results. Any questions so far? AUDIENCE: Is the stochastic control solution an optimal solution or exact solution? Is it giving the solution that the Monte-Carlo simulation's approximating? PROFESSOR: Let me put it this way. None of these solutions is better, or prettier, or whatever, because of the parameters that go into the problem. You can have the most precise methodology, but if your parameters are only 50% accurate--and in reality, all those parameters necessary for Monte Carlo simulation or stochastic control, we really don't know what they are; we can only guess through some implied market parameters, volatility, and so on--then if you're wrong, you're wrong, even if your method is absolutely precise. I like the stochastic control approach, because there's really very nice mathematics there. We personally don't do that. I'm not going to tell you what we're using, but we do something different, and our methodology is chosen to be robust. We chose it because we want the methodology to be extremely robust.
We don't want the situation where small changes in the parameters change the value substantially, which usually happens when you overparametrize. People see the richness of the behavior of the prices--later, in the second half, I will discuss some of the models people use--and they want to introduce a lot of parameters to capture this richness. By that, they sacrifice stability and robustness. We prefer a different approach, where we maybe sacrifice some of the value but gain robustness and stability, and this is the most important thing. Most importantly, everything that we use in the model can be verified by outside regulators and controllers, which in this day and age is extremely important. Because these days, to have a model that is calibrated--even the word calibrated causes antennas to go up. So we try not to calibrate anything. Every little brick that goes into the model can be traded in the market and can be verified. This is very, very important in this day and age. Yes? AUDIENCE: [INAUDIBLE]. What do you mean by calibration [INAUDIBLE]? PROFESSOR: We'll get to that, but the very quick answer is this: you have a lot of parameters in the model--I'll show some of the models people use. These parameters are not observable; you don't see them in the market. They're there because you chose this particular model. Black-Scholes uses one parameter. If you use Black-Scholes with a jump-diffusion term and some others, you introduce another ten parameters. These parameters are not observable, so you have to somehow calibrate the model to some market data. And calibration becomes pretty interesting, because it's usually done through some kind of least squares approach: you look at the observable data and adjust your parameters, using least squares, to match it. Least squares is a notoriously ill-behaved problem--you may have many solutions, and the objective can be very flat, so you can stop too early. It's non-linear optimization, which by itself is very difficult. So calibration can be very, very unstable. Any more? OK. Now I have to tell you the secret: there is no Santa Claus. Remember, I was telling you that I go and sell the option and get $4.47. In reality, there is no option market. Nobody will buy this option from you--the market for this option is non-existent. Now what do we do in this situation? That's where the whole beauty of Black-Scholes comes into play. Black-Scholes didn't get the Nobel Prize for integrating some very simple payoff function that you can do in your first year of school. They got the Nobel Prize--or the appreciation for what they've done--because they showed that, along with this value, there is a strategy: by doing something every day, or every half a day, you can replicate this value. So if you paid $5 for the option, they tell you what to do every day in order to get this $5 back at the end. That's their main achievement. Or, if I sold an option and received $5, I will use this $5 to replicate whatever payout I owe at the end. Say I sold it to you, and at the end I owe you $50. They showed how, through dynamic hedging, using the $5 that I received up front, I am able to meet my obligation to you. So they showed how to replicate the payout of the option.
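[Editor's note: an editorial sketch of the replication idea for a plain call under Black-Scholes dynamics--not the spread-option hedge itself. Sell the option, hold delta shares, rebalance daily; the final portfolio approximately equals the payoff owed. The strike, volatility, and price path are made up, and rates are set to zero.]

```python
# Sell a call for its Black-Scholes value, delta-hedge daily; the hedge
# portfolio approximately replicates the payoff.
import numpy as np
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf
rng = np.random.default_rng(9)

def call_price_delta(S, K, vol, T):
    d1 = (log(S / K) + 0.5 * vol**2 * T) / (vol * sqrt(T))
    return S * N(d1) - K * N(d1 - vol * sqrt(T)), N(d1)

S, K, vol, T, steps = 100.0, 100.0, 0.3, 0.5, 126
dt = T / steps
premium, delta = call_price_delta(S, K, vol, T)
cash = premium - delta * S                 # premium received, buy delta shares

for i in range(1, steps):
    S *= np.exp(-0.5 * vol**2 * dt + vol * sqrt(dt) * rng.standard_normal())
    _, new_delta = call_price_delta(S, K, vol, T - i * dt)
    cash -= (new_delta - delta) * S        # self-financing rebalance
    delta = new_delta

S *= np.exp(-0.5 * vol**2 * dt + vol * sqrt(dt) * rng.standard_normal())
print(cash + delta * S, max(S - K, 0.0))   # hedge portfolio ~ option payoff
```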
So in reality, I don't have to have the market. If I sold somebody an option for $4.47, I don't need anybody else to buy it. What I will do is simply use the dynamic hedging strategy that Black-Scholes prescribes, adapted to spread options, which at the end will produce for me the $4.47. That's it. But here again, this is another task that the quants have to do. Not only will they find the value; they will produce, every day, the strategy that allows you to extract this $4.47. We already showed what to do with the storage itself. But now, at the next level of complexity, you have to tell the trader what to do every day in order to actually collect the value from selling the option. So you tell the trader: I'm selling this portfolio against the storage for $100,000. But there's nobody to sell it to! And you say, don't worry--I will do it through dynamic hedging, and at the end you will get your $100,000. So he believes you, he does it, and he gets his $100,000. Then, on the next level, you have to work with the guy who operates the storage, telling him when to inject oil and when to withdraw, to get all of this magic which we discussed before. So I think this is a very logical point for me to stop. And in the next half, we'll discuss how to model a power plant. So we're going to the next topic. So far, we have covered how to model a physical asset, namely storage. The same approach can be used to model practically everything that we're interested in: tankers, power plants, refineries, power lines, pipelines--everything can be modeled using this methodology. Of course, the nature of the beast will be different, and you have to understand the nuances and, as I said, the constraints. Modeling a tanker, for example, requires understanding all the routes, the constraints in the ports, and all those things. But the conceptual, philosophical approach is the same: you have to find the optionality, and that's the additional value. I obviously don't have time to go over the whole thing, but again, for you to have a taste of this, let's decide how to model a power plant. Assume that you are the manager of a merchant power plant. Merchant means that you decide when to run it or not run it, and you run it to maximize the profit of the power plant. So how do you decide? Let's say you decide once a day, in the morning, whether to turn it on or not. Sounds complicated; in reality, it's very simple. You wake up in the morning and look, first of all, at the newspaper to find out the price of electricity today. The price of electricity you know for each hour--sometimes for each 15 minutes--or maybe as a daily price; it's determined for the whole day. So that's the price: if you sell, that's how much money you will get. On the other hand, to produce one megawatt-hour of power, you have to do something. You have to turn on your turbines, which means you have to bring some fuel and put it into the turbines to make them move and produce electricity. The fuel can be anything--let's say it's natural gas.
Now you have to determine the cost. You know how much money you'll get for one megawatt-hour of power, right? But how much will it cost you to produce it? To determine how much it costs you in terms of fuel to produce this one megawatt-hour, you have to know the efficiency of the plant, because the efficiency tells you how many units of fuel-- let's say how many MMBtus, millions of British thermal units of natural gas, that's how it's measured-- you have to burn to produce one megawatt-hour. This measure of efficiency for the power plant is called the heat rate. So the heat rate is exactly this coefficient: if I say that the heat rate is 7, it means that I need 7 MMBtus of natural gas to produce one megawatt-hour of power-- that's all. So in our case, there's nothing to be concerned about. It's simply some constant that is given, some constant between 7 and 20. Twenty is a very inefficient plant, very rarely run. Seven is right now more or less the standard-- the constant corresponding to a new natural gas power plant, and right now the majority of new plants are natural gas plants. And there are some other costs associated with producing one megawatt-hour-- air conditioning, labor costs, and so on and so forth. Typically, they are not the biggest component. So if, let's say, the heat rate is seven, what is the price of natural gas right now? Do you remember? AUDIENCE: $3.20. PROFESSOR: That's-- I wish, but it's not. It's around $4 right now, let's say, per MMBtu. So you need $28 of fuel. The other costs are probably around $3, so you need $31 to produce one megawatt-hour. AUDIENCE: I was thinking of gasoline. PROFESSOR: Oh, no, not that gas-- natural gas. Sorry about that, but even gasoline is right now around $5. AUDIENCE: Really? PROFESSOR: Yes. Or maybe you're filling your tank someplace special, which you should share with us. [LAUGHS] So is that clear? So you wake up. You look at the price of power. Let's say it's $50. You know your plant has, let's say, heat rate 8. You look at the price of natural gas. It's $4, variable cost $3. Will we run the plant? What will be your profit? So $4 times 8, plus 3-- it will cost you $35 to produce one megawatt-hour. The price is $50. You made $15 profit. You turn it on. It runs. If, on the other hand, the price of power is $30-- you look in the newspaper-- you go back to sleep, and nothing happens. You get zero for that day. So your payoff is the maximum of this spread and zero. Agreed? Now, before I go to the next slide, together we'll have to determine-- if I want to buy a power plant, how much am I willing to pay for it? Well, I know that every day I'm getting this payoff, right? But I don't know what it will be-- each of these prices is a random number. So I have to construct the distribution of the power price and the fuel price, and maybe the correlation between them. It's a two-dimensional distribution-- so correlations-- and I find the expected value. And that's how much I'm willing to pay for the power plant. So now we come to an interesting question. Now the real work starts. I know this is the payoff that I have to integrate, but I don't know with respect to which distribution. So I now have to construct a model of the power prices and the fuel prices, and then take this two-dimensional distribution and find the expectation.
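A minimal sketch of the two steps just described, with hypothetical numbers throughout: the daily dispatch payoff max(power − heat rate × gas − variable cost, 0), and the plant value per day as the expectation of that payoff over an assumed joint distribution of power and gas prices. Correlated lognormals are used here purely as a placeholder; the rest of the lecture argues they are the wrong distribution for power.

```python
import numpy as np

def daily_margin(power, gas, heat_rate=8.0, variable_cost=3.0):
    """Per-MWh profit for one day: run the plant only if the spread is positive."""
    return np.maximum(power - heat_rate * gas - variable_cost, 0.0)

print(daily_margin(50.0, 4.0))  # 50 - 35 = 15 -> run the plant
print(daily_margin(30.0, 4.0))  # negative spread -> stay off, get 0

# Value per day = E[payoff] under a joint (power, gas) distribution.
rng = np.random.default_rng(1)
rho, vol_p, vol_g = 0.6, 0.8, 0.4
cov = [[vol_p**2, rho * vol_p * vol_g],
       [rho * vol_p * vol_g, vol_g**2]]
z = rng.multivariate_normal([np.log(45.0), np.log(4.0)], cov, size=200_000)
power, gas = np.exp(z[:, 0]), np.exp(z[:, 1])
print("expected daily margin per MWh:", daily_margin(power, gas).mean())
```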
And that's how much I'm willing to pay for this power plant-- agreed? So let's start building these components. That's what, again, quants will do, using the experience from other markets, and so on-- you already heard the other lectures. First we'll look at the distribution of power prices, because I don't yet know how to model their evolution. Suppose I simply use Brownian motion-- let's say that's my first idea. Well, let's see. Brownian motion gives us a distribution which, as you remember, at any point in time is normally distributed. If I assume that the price is driven by Brownian motion, then the terminal distribution of the price is normal. Now, if I look at equity-- this is the S&P 500-- we all know that the S&P has fat tails, right? But as we'll see in a moment, by commodity standards this is not a fat tail. This is very close to the normal distribution. For the equity guys, for the guys who trade stocks, it's an enormously fat tail. But from the commodity point of view, this is just a perfect normal distribution. And this is the distribution that we deal with-- the tails here are such that the normal distribution is out of the window immediately. So Brownian motion is out of the window-- we cannot construct this distribution of the terminal prices of power using it. That's one thing: we have very fat tails. And these are the parameters that specify the distribution. If I look at the equity index, the kurtosis is very close to the kurtosis of the normal distribution, which is exactly 3. But if I look at Nord Pool, which is the power market in the Scandinavian countries, the kurtosis is 26. If I look at the one-hour price, it's 76. So it is, as you can see, as far from normal as possible. We've seen it in the picture, and these are the numbers corresponding to that. Moreover, look at the behavior of the prices. This is the price in Texas, for example-- power prices in Texas. What immediately jumps out? Look at the prices and say, wow, how is this different from what you see in the equity world, the stock market, for example? What is it that immediately jumps out at you? Go ahead. There's no right or wrong answer. AUDIENCE: Spikes. PROFESSOR: Spikes-- that's the key word. Like no other market, we have spikes here. That's a major, major issue for us from the modeling point of view. Take any standard model, say Brownian motion-- it will never exhibit spikes. Not only that, the volatility of the prices, as we already expect, is huge. It's in the hundreds. S&P volatility right now is 10 percent. This is 100, 200, sometimes 1,000 percent. All the intuition you have about the behavior of prices, the behavior of random variables, mostly evolved under this 10, 15, 20 percent volatility assumption. Variables behave in a completely different way when they have volatility of 100 or 200 or 300 percent. So that's another thing that will be challenging us: spikes and high volatility. And it's the same thing everywhere-- I just wanted to show a different region of the country, and it's exactly the same. So it's a common thing.
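A quick sketch of the kurtosis comparison, on synthetic data rather than the Nord Pool or Texas series shown on the slides: a normal sample sits at excess kurtosis near 0 (raw kurtosis 3), while a heavy-tailed sample standing in for power prices lands far above it.

```python
import numpy as np
from scipy.stats import kurtosis, t as student_t

rng = np.random.default_rng(2)
normal_sample = rng.standard_normal(100_000)
# Student-t with low degrees of freedom as an illustrative heavy-tailed stand-in.
heavy_sample = student_t.rvs(df=4.5, size=100_000, random_state=rng)

print("normal excess kurtosis:", kurtosis(normal_sample))   # ~ 0, i.e. raw 3
print("heavy-tailed excess kurtosis:", kurtosis(heavy_sample))  # far above 0
```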
So if you start to summarize what we're trying to capture: mean reversion and spikes-- which are more or less the same thing, since we know that if the price goes far away, it will come back-- high kurtosis, regime switching, which I will talk about in a moment, and non-stationarity. That goes without saying; it's true for most markets. And there's another phenomenon that we have to capture: power and its fuel, natural gas, exhibit a very particular structure of correlation. Correlation is not a number. It depends on the heat rate in the market-- remember, heat rate is the efficiency of the power plants that are running in the market. Depending on the efficiency, you may have very high correlation or low correlation. We'll discuss that at the end of this talk, which is why it's here. Right now, I'm showing you something that we observe, and our models, preferably, should capture that as well. So the requirements for our model are pretty intense and pretty difficult. I don't think you've ever seen the requirement that the correlation should have some particular structure depending on some parameters, and so on. OK, any questions? So typically, the first thing people do is take the models that they know and try to apply them. Let's go very quickly through these models-- you've seen them already in the previous lectures, right? Let's start with the straightforward geometric Brownian motion, which is out of the window right away, right? I mean, you agree-- no spikes, no high kurtosis, no correlation structure, nothing. So it's clear that if we want to start with geometric Brownian motion, we have to modify it a little bit. What is a spike? A spike means that things go up and then get pulled back. So there should be some mean reversion, right? Mean reversion is good. But unfortunately-- first of all, it has to be pretty strong: remember, if the price goes from $30 to $1,000 and back to $30, the mean reversion should be extremely strong. Second of all, what pushes it to the $1,000 level? You need a jump. That's why people introduce jumps. So we have mean reversion and jumps. Still, it becomes pretty clear that this will not work, for the following reason. The price goes from $30 to $1,000, and then it comes back-- because it's a spike-- within three or four days. So the mean reversion, the force pulling it back, should be extremely strong. But if it's so strong that it moves you back from $1,000 to $30, imagine what it does when you're at the level of $30. The price would be completely flat-- just nothing, you could not move, right? So people observed that, and figured that maybe the mean reversion should be different at the level of $1,000 than at the normal level of $30. So now you introduce so-called regime switching. It means that all the parameters change: you introduce a high price level and a low price level, and the parameters change between them. And now we're back to the example that we discussed in the first half. You end up with a model with 10 or 12 parameters, which is absolutely impossible to manage. I could probably give you another hour discussing why this approach in general is not good.
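A minimal sketch of the dilemma just described, under made-up parameters: a mean-reverting process with rare jumps, where a reversion speed strong enough to kill a spike in a few days leaves almost nothing but flat noise at the normal price level.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, kappa, mean_level, sigma = 365, 0.5, 30.0, 2.0
jump_prob, jump_size = 0.01, 970.0   # rare jumps up into $1,000 territory

p = np.empty(n_days)
p[0] = mean_level
for t in range(1, n_days):
    jump = jump_size if rng.random() < jump_prob else 0.0
    # Reversion strong enough to pull $1,000 back toward $30 within days...
    p[t] = (p[t-1] + kappa * (mean_level - p[t-1])
            + sigma * rng.standard_normal() + jump)

# ...but at the $30 level the same force flattens the process completely.
print("max price:", round(p.max(), 1))
print("std on non-spike days:", round(p[p < 100].std(), 2))
```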
And not just in commodities, but everywhere else. I'm not a big fan of making models extremely complicated, because the introduction of one complexity leads to the introduction of another complexity, and you just cannot manage the thing. It's impossible to manage. Theoretically, it looks fantastic-- unmanageable. So we have to figure out something else. All these methodologies-- and I put them there, the methodologies people actually use: stochastic volatility, regime switching, multiple jumps, and so on. They're all used, but they're not what I want to suggest. These are methodologies that typically come from the fixed-income world, or foreign exchange, or equities, and so on. I want to introduce something completely different. Any questions so far? I want to introduce a methodology which is more suitable and more understandable from the commodity point of view. Because actually, what drives the price of a commodity? The price of a commodity is driven by only two things. Can you guess which? AUDIENCE: Supply and demand. PROFESSOR: Exactly-- supply and demand. That's all it is, and that's what we'll try to use. We have a hard time modeling commodity prices using the standard methodologies from different markets, which rely only on the prices themselves. Maybe if I introduce some fundamental modeling as well, I can do it without losing the most important property of the model, which is that it matches the market data. So, A: I want to model supply and demand. But I also would like to match all the market data that I have. Maybe I introduce a completely different complexity, maybe not-- let's see whether I can succeed or not. Before we go into the depths of this, let's discuss how power prices are formed. Power prices are formed in the following way. Let's say that the market consists of two generators, Generator One and Generator Two. Every day, the prices of power are formed at an auction. Each generator submits a bid, which says the following: I can generate 50 megawatts for a given hour-- if you ask me to generate only 50, I will generate them at $20. Why? Because I will run my most efficient power plant, which is the cheapest one to run. If you want more, if you want me to do 100, then you'll have to pay me $25, because I will have to bring in less efficient power plants, and so on: 200 at $30, and if you want me to generate 600, you will have to pay me $50. So that's my bid, the so-called bid stack. That's what I'm sending to the auction. The other guy sends something similar. And there is an auctioneer, an organization called the independent system operator, ISO. They collect all these bids, and they know what the demand will be tomorrow. Knowing this demand, they first sort all the bids in the most optimal way-- they put it all together by sorting. So the auctioneer combined these two bids and created this graph, basically the price as a function of how much they need to generate. Based on the demand, this will be the price-- the clearing price of the market. And then, because the auctioneer knows all the bids, the auctioneer will send the dispatch to this generator and that generator: you will generate 60, you will generate 600, or whatever.
The final price is basically the highest price that is necessary to meet the demand. So if the demand is 600, this will be the price; if it's 800, this will be the price. So the price is clearly a function of demand for any given day. Well, if this is the case, then if I do a scatter plot of demand versus price, I should see something similar. Let's see. When we take a particular market and do the scatter graph of demand versus price, this is the graph that we expect to see. It's a little bit fat. Why is it fat? Why is it not a straight line, not the line as before? What is random here? Remember, I told you that each generator will bid approximately how much it will cost them to generate on a particular day. That cost depends on the fuel price, and that is a random number. So what you see here is a lot of these curves, moving randomly because the fuel price affects the cost of running. But conceptually, philosophically, this is exactly what we discussed: these guys bid this curve, and depending on the demand, the price is simply the value on that curve. Are you with me? So far so good? So that's what we're trying to model. If I can model, for every day, the bid curve that the auctioneer, the independent system operator, sees, then I know the price on that day. Moreover-- I don't want to get too complicated, but in reality, I don't even have to know the curve precisely. I just have to capture its distribution, because remember, for the value of the option, you don't need to know where the price will be at expiration. You just need to know how the price will be distributed. If you capture the distribution correctly, then through dynamic hedging and so on-- that's what Black-Scholes tells you-- you can value the option correctly. So you can value the power plant correctly. But that's beside the point. The point is that right now I'm trying to model, somehow, the randomness of this bid curve. So to summarize: the power price is a function of what? Of demand, clearly-- we already know that. But also of the fuel prices, because the fuel prices determine the cost of generation and, therefore, how much each generator will bid into the market. If natural gas goes through the roof, the price of generating one megawatt-hour will be very high, so the generator will be bidding very high prices. And we also have to model outages, because the market has only a finite fleet of power plants, and if a couple of them go down, the price of power can be affected dramatically. In 1997, in Indiana, the price went from $40 to $7,000 because a tornado hit a nuclear plant. You take a big chunk of the generation out of the stack, and all of a sudden you have to run absolutely anything, including some very expensive diesels, and so on. So the price of power, obviously, goes very high. OK, so these are the three things that we try to model. Before I even get to modeling them, let me again outline our modeling approach. Let's say there are no outages. If I know the fuel price, then I know how much it will cost each of the generators in the market to generate-- because I know everybody in the market.
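A minimal sketch of the merit-order auction just described, using the lecturer's illustrative bid numbers and interpreting each bid as an incremental block of capacity at a price (one possible reading of the cumulative bids above).

```python
def clearing_price(bids, demand):
    """bids: (megawatts, price) blocks; clear at the marginal block's price."""
    served = 0.0
    for mw, price in sorted(bids, key=lambda b: b[1]):  # merit order
        served += mw
        if served >= demand:
            return price
    raise ValueError("demand exceeds total offered capacity")

gen_one = [(50, 20.0), (50, 25.0), (100, 30.0), (400, 50.0)]
gen_two = [(100, 22.0), (200, 35.0)]
print(clearing_price(gen_one + gen_two, demand=600))  # marginal unit sets price
```

The dispatch that the ISO sends back to each generator falls out of the same sort: every block cheaper than the marginal one runs.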
I know how much it will cost for each of them to generate. So if I know the fuel price, I can generate the so-called generation stack-- the cost of generation for each of the generators, for each of the participants in the market. The outages simply allow me to take some of these participants out of the picture. So on any given day, if I know the outages and I know the fuel prices, I can construct the cost of generation for everybody. And that's close to the bid stack: it's what they will bid-- their cost, maybe plus some profit. I don't know what their profit margins are. But what I do know are the market prices. I know the options, how they trade, and so on. So the bid stack will, in general, follow the generation stack. It will be more or less the same thing, maybe with some added profit requirements. But I can back out these profit requirements from the market prices. That's where my supply-demand approach and the market come together. I will adjust the generation stack, maybe moving it up and down, in such a way that I match the prices and I match the option prices-- that is, the volatility of the market. So now the circle is completely closed, if I can succeed in doing that. Questions? This is the key. Well, let's see if I can do it. If I manage to do that, then if I know how to model the evolution of fuel prices, how the outages are modeled, and how to model the demand, I can determine how the power prices will move in time. The power price is a function of the bid stack and demand, as you remember. So if I know the evolution of the bid stack and the evolution of the demand, I can determine the evolution of the power price-- a completely different approach. So let's start with the evolution of the fuel, with building the fuel model. I told you that there is natural gas; in reality, we have to model all the fuels, because each of them has a forward curve and volatilities. It looks like a lot, but in reality it's not, because we have a pretty good idea of how these things behave. Unlike the power prices, with the spikes and all this crazy behavior, these are all storable commodities. By the way, power prices jump exactly because power is not storable: because it's non-storable, you cannot smooth out the changes in demand-- the reaction is immediate. Storable commodities behave much more regularly. They're much closer, let's say, to equity prices, so we can use some standard models. From the modeling point of view, they are not particularly difficult: natural gas, heating oil, fuel oil, coal-- we have a pretty good understanding of how these things are modeled. Outages-- well, reliability theory provides us with a very well-developed mechanism and apparatus for modeling these things. Usually we do it through some kind of Poisson process, or a version of a Poisson process. It is very well understood; there's a lot of literature on it. We can model that very easily. Where do we get the parameters for these Poisson processes? The government provides us with data. Sometimes we get it directly from the power plants-- everybody keeps track of the frequency of outages and so on and so forth. Demand-- how will we model demand?
Well, we'll typically model demand through temperature. Why temperature? Because for temperature we have a lot of data, and it's statistically a very stable thing. There are many different approaches to modeling temperature-- it's up to you; there's a lot of literature, so I'm not going to go into detail. So we choose something, we model temperature, and from that, we model the demand. And it works pretty well. So now we have modeled the evolution of temperature, the evolution of demand, the evolution of fuel, the evolution of the outages. Now we can construct the generation stack. Remember, the generation stack was this curve-- a function of demand, but also of the outages, the variable costs, and the vector of fuel prices. And then there are these alpha parameters, which we choose to match the futures, the forward curve for power prices, and all the other market parameters we need. So we're matching the market, and we're matching the supply-and-demand formation. Now, it is very clear why, with this approach, I can capture spikes without any effort. And the reason is very simple. Remember the stack-- my bid stack or generation stack, whatever. For high generation volumes, it becomes more and more expensive to generate. After a certain point, you've exhausted all your cheap plants and you have to go to very, very expensive plants, like diesel plants, which run maybe once or twice a year. Very expensive-- but they are the ones that determine the clearing price in the market. Now, let's see. Suppose we are somewhere here. This is demand. Demand is driven by temperature, and temperature is typically a normal thing with mean reversion-- if it goes up, it typically reverts to some mean. So this is the distribution of demand, around here. If demand moves left or right, the prices don't change much. That's your normal regime. But let's assume demand is somewhere here. If you are to the left, the prices are very small. But the moment you move a little bit to the right-- your temperature moves a little into the high region-- you immediately spike into the $1,000 territory. But remember, temperature is mean-reverting, so within five or six days it comes back, and your price goes from the high levels back to the normal level. Those are your spikes. You get them completely for free. And I'll show you a couple of graphs right now. One is the actual graph; the other is simulated using this approach for the same market. As you can see, this is the actual price for that particular period, and these are the prices that we generate-- just a plain Monte Carlo simulation. As you can see, this could easily be that: from the point of view of the distribution of the spikes and so on, it's exactly the same thing. This is what we were after. Moreover, I'll tell you even more. If I knew the path of the fuel prices-- there are two graphs here, and you cannot distinguish between them, because I substituted exactly the right fuel prices, the actual historical prices, and therefore my price and the actual price were the same. In reality, of course, I don't know the path. But as I explained to you, I don't need to know it. I just need the distribution of fuel prices to be correctly captured.
Once I have correctly captured the distribution of the fuel prices, then, according to this, I have correctly captured the distribution of the power prices. Moreover, as you can see, we get very nice behavior. From the parameters' point of view, this is the model. Look at the kurtosis-- these are summers. The kurtosis of the model and the kurtosis of the empirical data are very, very close, so the distributions are very close. The skewness and so on-- it captures the distribution very, very well. So this approach works very nicely. It's completely different from what you're used to, but it's really the one that works. Moreover, a final benefit-- a bonus point. This is the simulated correlation structure, remember? And this is the actual one-- very close. And the beauty of it is that it's not an input. It's an output. I never use the correlation as an input in my model. I got the distribution of power, got the distribution of fuel, natural gas, put it all together, computed the correlation, and that's what I got. And this means that this is really a correct approach-- I don't need to go looking for a distribution that has this property. This property comes for free, as a result of this completely alternative way of modeling. Now, what is the negative side of this? The negative side is that it's extremely difficult to build and maintain, because you have to maintain the information about every power plant-- what was built, what was retired, what will be built, and so on. Because you have to look at the power prices 10, 20 years from now, you have to know what's going to be there, what kind of stack to forecast. That's a lot of information to keep. You have to have a big organization to work on it, to maintain it, to build the model-- because each region in this country has a different market. It's a massive undertaking. It takes years. It's not something you can get out of a can. It's expensive. So I think that will be a good point for me to stop. If you have questions, please let me know. Questions? The silence is music to my ears. Thank you.
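Pulling the pieces together, here is a minimal end-to-end sketch of the structural approach. All dynamics and parameters are invented for illustration; a production model would carry the real plant-by-plant stack, Poisson outages, and calibrated fuel curves. Mean-reverting temperature drives demand, noisy fuel prices shift a convex stack, and spikes appear whenever demand wanders into the steep tail of the stack.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days = 365

def stack_price(demand, gas_price):
    # Convex stack: base load near heat rate 7, then a steep expensive tail
    # once the cheap capacity is exhausted.
    return gas_price * (7.0 + np.exp((demand - 950.0) / 15.0))

temp, prices = 60.0, np.empty(n_days)
for t in range(n_days):
    temp += 0.3 * (60.0 - temp) + 5.0 * rng.standard_normal()  # mean-reverting
    demand = 600.0 + 20.0 * abs(temp - 60.0)   # heating and cooling load
    gas = 4.0 * np.exp(0.05 * rng.standard_normal())
    prices[t] = stack_price(demand, gas)

print("median price:", round(np.median(prices), 1))
print("max price (spike):", round(prices.max(), 1))
```

Most days the price sits near fuel cost times the heat rate; a hot spell pushes demand into the exponential tail and the spike, and the subsequent decay, come out of the model for free, just as described above.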
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
18_Itō_Calculus.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's begin. Today we're going to continue the discussion of Ito calculus. I briefly introduced you to Ito's lemma last time, but let's begin by reviewing it and stating it in a slightly more general form. Last time we did the quadratic variation of the Brownian process. We defined Brownian motion, and then showed that it has quadratic variation, which can be written in this form: dB squared is equal to dt. And then we used that to show the simple form of Ito's lemma, which says that if f is a function of the Brownian motion, then df is equal to f prime times dB_t plus 1/2 f double prime times dt. This additional term is the characteristic feature of Ito calculus. In classical calculus we only have the first term, but here we have this additional term. And if you remember, this happened exactly because of the quadratic variation. Let's review it, and let's do it in a slightly more general form. Now we have a function f depending on two variables, t and x, and we want to evaluate the differential of the function f(t, B_t)-- in the second coordinate, we're planning to put the Brownian motion. Again, let's do the same analysis. Can we describe df in terms of these derivatives? To do that, let me start from the Taylor expansion. f at the point (t plus delta t, x plus delta x), by the Taylor expansion in two variables, is f(t, x), plus partial f over partial t at (t, x) times delta t, plus partial f over partial x at (t, x) times delta x. Those are the first-order terms. Then we have the second-order terms, then the third-order terms, and so on. That's just Taylor expansion. If you look at it: we have a function f, and we want to look at the difference in f when we change the first variable a little bit and the second variable a little bit. We start from f(t, x). In the first-order terms, you take the partial derivative with respect to t-- del f over del t-- and multiply by the difference in t. For the second term, you take the partial derivative with respect to the second variable-- partial f over partial x-- and multiply by delta x. That much is enough for classical calculus. But, as we have seen before, here we also have to look at the second-order terms. So let's first write down what they are. That's exactly what happens in the Taylor expansion, if you remember-- and if you don't remember, just believe me: it's 1 over 2 times the second derivatives. Let's write it in terms of-- yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Oh, yeah, you're right. Thank you. Is it good now? Let's write all these deltas as differentials-- I'll just write it like that, and not write down the t and x. What we have is f, plus del f over del t dt, plus del f over del x dx, plus the second-order terms. Now, which terms are important? First of all, the first-order terms are important. But then, if you want to plug in x equals B_t-- if you're now interested in f(t, B_t), or more precisely in f(t plus dt, B_t plus dB_t)-- then some second-order terms are important too. If you subtract f(t, B_t), what you get is these two terms-- del f over del t dt, plus del f over del x-- I'm just writing this as differentiation in the second variable-- times dB_t-- and then the second-order terms.
Instead of writing them all down: dt squared is insignificant, and dt times dB_t is also insignificant. The only thing that matters is this one, dB_t squared, which, as you saw, is equal to dt. So from the second-order terms, the surviving piece is 1 over 2 times the second partial derivative of f with respect to x, times dt. That's it. If you rearrange, what we get is partial f over partial t, plus 1/2 of this second derivative, times dt-- and that is the additional term. If you ask me why those terms are not important while this term is important, I can't really say it rigorously. But if you think about dB_t squared equals dt, then dB_t is kind of like the square root of dt. It's not good notation, but if you think that way, those two terms are significantly smaller than dt, because you're taking a higher power of it: dt squared is a lot smaller than dt, and dt to the 3/2 is a lot smaller than dt. But this one survives, because it's equal to dt. That's just the high-level description. So that's a slightly more sophisticated form of Ito's lemma. Let me write it down here and fix it now: for f(t, B_t), df is equal to partial f over partial t plus 1/2 times the second partial of f with respect to x, all times dt, plus partial f over partial x times dB_t. Any questions? Just remember: compared to the classical calculus terms, we're only adding this one term. Yes? AUDIENCE: Why do we have x there? PROFESSOR: Because the second variable is supposed to be x. I don't want to write down a partial derivative with respect to a Brownian motion, because it doesn't look good. It just means: take the partial derivative with respect to the second variable. So view this as a function f(t, x), evaluate the derivatives, and then plug in x equal to B_t at the end. Other questions? Now consider a stochastic process X_t such that dX is equal to mu times dt plus sigma times dB_t. This is almost like a Brownian motion, but you have this additional term, which is called a drift term. Basically, this happens if X_t is equal to mu times t plus sigma times B_t, where mu and sigma are constants. From now on, what we're going to study are stochastic processes of this type, whose differential can be written in terms of a drift term and a Brownian motion term. We want a slightly more general form of Ito's lemma, where we have f(t, X_t). That will be the main object of study. So let me finally state the strongest form of Ito's lemma that we're going to use. f is some smooth function, and X_t is a stochastic process satisfying dX_t equals mu dt plus sigma dB_t, where B_t is a Brownian motion. Then df(t, X_t) can be expressed as-- it just gets more and more complicated, but it's all based on one simple principle, really. It all happens because of the quadratic variation. Now I'll show you why this form deviates from the previous form when we replace B by X. Remember, all the other terms didn't matter; the only second-order term that mattered was the second partial of f times dX squared. To prove this, note that df is partial f over partial t dt, plus partial f over partial x dX_t, plus 1/2 times the second derivative times dX_t squared. Exactly the same as before, but where we previously had dB, I'm replacing it with dX. Now, dX_t can be written as mu dt plus sigma dB_t. If you plug that in, what you get in the first-order term is partial f over partial x times (mu dt plus sigma dB_t). And in the second-order term you get 1/2 of the second partial times (mu dt plus sigma dB_t) squared. Out of those three terms we get mu squared dt squared, plus 2 mu sigma dt dB_t, plus sigma squared dB_t squared. Only the last one survives, just as before-- the other ones disappear. And then you just collect the terms.
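Collecting the terms gives the statement on the board. Since the board is not visible in the transcript, here is the standard form written out for reference:

```latex
% Ito's lemma for f(t, X_t) with dX_t = mu dt + sigma dB_t,
% using dB_t^2 = dt and discarding the dt^2 and dt dB_t terms:
\[
df(t, X_t) =
\left( \frac{\partial f}{\partial t}
     + \mu \frac{\partial f}{\partial x}
     + \frac{1}{2}\,\sigma^{2}\frac{\partial^{2} f}{\partial x^{2}} \right) dt
\;+\; \sigma \frac{\partial f}{\partial x}\, dB_t .
\]
```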
So, for the dt terms: there's one dt here, there's mu times partial f over partial x here, and the dB_t squared one becomes a dt-- it's 1/2 sigma squared times the second partial of f, times dt. And there's only one dB_t term: sigma-- I made a mistake, sigma-- times partial f over partial x, times dB_t. This is the form you'll use the most, because you want to evaluate some function that depends on time and on a stochastic process, and you want to understand its differential, df. The X will have been written in terms of a Brownian motion and a drift term, and then that's the Ito lemma for you. If you see this for the first time, it just looks too complicated-- you don't understand where all the terms are coming from. But in reality, what it's really doing is just taking this Taylor expansion. Remember the two classical terms, and remember that there's one more term here. You can derive it if you want to-- really try to know where it all comes from. It all started from this one fact, quadratic variation, because that made one of the second-order terms survive, and because of that, you get these kinds of complicated terms. Questions? Let's do some examples. That's too much-- sorry, I'm going to use it a lot, so let me keep it on the board. Example number one. Let f(x) be equal to x squared, and you want to compute df at B_t. I'll give you three minutes just to try it as practice. Did you manage to do it? It's a very simple example. Think of it as a function of two variables that just doesn't depend on t. You don't have to do it that way, but let me do it. Partial f over partial t is 0. Partial f over partial x is equal to 2x, and the second derivative is equal to 2, at (t, x). Now we just plug in (t, B_t), and we have mu equals 0, sigma equals 1, if you want to write it in that formula. What you get is 2 times B_t times dB_t, plus 1 over 2 times 2 dt. You can either use these parameters and just plug each of them in to figure it out, or a different way is to really write it down-- remember the proof. This is partial f over partial t dt, plus partial f over partial x dx, plus 1/2 of the second derivative times dx squared-- remember this one-- and x is B_t here. The first one is 0, the next one was 2x, so 2 B_t dB_t. Use the quadratic variation one more time, and you get dt. Make sense? Let's do a few more examples. Example two: f(t, x) equals e to the mu t plus sigma x, and you want to compute df at (t, B_t). Let's do it this time: partial f over partial t dt, plus partial f over partial x dB_t-- those are the first-order terms-- and the second-order term is 1/2 times the second partial of f with respect to x, times dB_t squared, which is equal to dt. Let's compute. Partial f over partial t-- you get mu times f. This one is just mu times e to the mu t plus sigma x, dt. Partial f over partial x is sigma times e to the mu t plus sigma x, times dB_t. And if you take the second derivative, you do that again, and what you get is 1/2 times sigma squared times e to the mu t plus sigma x, dt. Yes? AUDIENCE: In the original equation that you just wrote, isn't it 1/2 times sigma squared, and then the second derivative? Up there. PROFESSOR: Here? AUDIENCE: Yes. PROFESSOR: 1/2? AUDIENCE: Times sigma squared. PROFESSOR: Oh, sigma-- OK, that's a good question. But that sigma is different. That's for when you plug in an X_t here. If you plug in an X_t where dX_t is equal to mu prime dt plus sigma prime dB_t, then that sigma prime will appear as sigma prime squared here. But here the function itself has a mu and a sigma in it, so maybe it's not good notation. Let me use a and b in the function instead. The sigma there is different from the sigma here.
AUDIENCE: Yeah, that makes a lot more sense. PROFESSOR: Replace them with a and b if you like, but I already wrote down all the mu's and sigma's. That's a good point, actually-- but that's for when you want to consider a general stochastic process here, other than Brownian motion. Here it's just a Brownian motion, so it's the simplest form. And this is what you get: mu plus 1/2 sigma squared-- and these are all just f itself; that's the good thing about the exponential-- so (mu plus 1/2 sigma squared) times f dt, plus sigma times f dB_t. Make sense? And there's a reason I covered this example. Let's come back to this question: you want to model a stock price S_t using a Brownian process. But you don't want S_t itself to be a Brownian motion. What you want is the percentage change to behave like a Brownian motion with some variance. The question was: is S_t equal to e to the sigma B_t in this case? I already told you last time that no, it's not true, and we can now see why. Take this function, S_t equals e to the sigma B_t-- that's exactly the case where mu is equal to 0 here. What we got is that dS_t in this case is equal to-- mu is 0, so we get S_t times 1/2 sigma squared dt, plus S_t times sigma dB_t. We were originally targeting just sigma dB_t, but we got this additional term, which we didn't want in the first place. In other words, we have a drift. I wasn't really clear in the beginning, but our goal was to model a stock price with no drift. Our guess was to take e to the sigma B_t, but it turns out that in this case we have a drift. To remove that drift, you can subtract that term: you can see that if you set mu equal to minus 1 over 2 sigma squared, you remove that term. That's why the naive guess doesn't work. So instead, use S_t equals e to the (minus 1 over 2 sigma squared t, plus sigma B_t). That's the geometric Brownian motion without drift. And the reason it has no drift is exactly this: if you actually do the computation, the dt term disappears. Question? So far we have been discussing differentiation. Now let's talk about integration. Yes? AUDIENCE: Could we get this solution as [INAUDIBLE]. Could you also describe what it means? What does this solution in terms of B_t mean? Does it mean that if we have a sample path of B_t, then we can get a sample path for S_t? PROFESSOR: What this means-- yes. Whenever you have the B_t value, at each time, just take the exponential. The reason we want to express this in terms of a Brownian motion is that for Brownian motion we have a pretty good understanding. It's a process you understand fairly well, and you have good control over it. The problem is that you want a process whose percentage change behaves like a Brownian motion, and this gives you a way of describing it in terms of Brownian motion, as an exponential function of it. Does that answer your question? AUDIENCE: Right, but does it mean that if we have a sample path of B_t, that would be the corresponding sample path of S_t? Is it a pointwise evaluation? PROFESSOR: That's a good question, actually. Think of it as a pointwise evaluation. That is not always correct, but for most of the things we will cover, it's safe to think about it that way. If you think about it path-wise all the time, eventually it fails-- but that's a very advanced topic.
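Both examples are easy to check numerically. A quick sketch (my addition, not from the lecture): by Example 1, B_T squared minus T should have mean zero, and the drift-corrected exponential should keep mean 1, while the naive exponential e^(sigma B_T) drifts up to e^(sigma^2 T / 2).

```python
import numpy as np

rng = np.random.default_rng(6)
sigma, T = 0.5, 1.0
B_T = np.sqrt(T) * rng.standard_normal(1_000_000)  # B_T ~ N(0, T)

print((B_T**2 - T).mean())                               # ~ 0   (Example 1)
print(np.exp(sigma * B_T).mean())                        # ~ exp(sigma^2 T/2) ~ 1.133
print(np.exp(-0.5 * sigma**2 * T + sigma * B_T).mean())  # ~ 1   (drift removed)
```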
What this question is about: B_t is, basically, a probability distribution over paths. If you just look at this equation, it looks right, but strictly speaking it doesn't quite make sense, because if B_t is a probability distribution, what is e to the B_t? What it's really saying is this: B_t is a probability distribution over paths. If you take a path omega according to the Brownian motion probability distribution, then for that path the function is well defined, and the probability density of the resulting path is the density of the exponential of the Brownian path under that distribution. Maybe that confused you more. Just consider B as some well-defined sample function, and then you have a well-defined function S. Now, the definition of the integral. I will first give you a very, very stupid definition of integration. We define F as the integral of f dB_t plus g dt if dF is equal to f dB_t plus g dt. We define it as the inverse of differentiation. Because differentiation is now well defined, we just define integration as its inverse, just as in classical calculus. So far, it doesn't have any better meaning than being the inverse, but at least it's well defined. The question is, does it exist? Given f and g, does the integral always exist, and so on. There are lots of questions to ask, but at least this is a definition. And the natural question is: does there exist a Riemann sum type description? Remember how we defined the integral in calculus. You have a function f, and the integral of f from a to b, according to the Riemann sum description, is obtained by chopping the interval into very fine pieces-- a_0, a_1, a_2, a_3, and so on-- summing the areas of the boxes, and taking the limit. This is the limit of Riemann sums. Slightly more formally, if you want, it's the limit, as n goes to infinity, of the sum over the little intervals of f evaluated at a point of each interval, times the length of that interval. Does this ring a bell? Question? AUDIENCE: [INAUDIBLE] PROFESSOR: No, you're right. Good point, no we don't. Thanks. The question is: does the integral defined in this way have this Riemann sum type description? Keep that in mind; I will come back to this point later. In fact, it turns out to be a very deep and very important question, because-- as I hope you remember-- in the Riemann sum, it didn't matter which point you took in each interval. That was the whole point. In the interval a_i to a_(i+1), you take any point inside and make a rectangle according to that point, and no matter which point you take, when you go to the limit, you get exactly the same sum. That's how the limit is defined. But what's really interesting here is that this is no longer true. If you take the left point all the time, and if you take the right point all the time, the two limits are different. And again, that's due to the quadratic variation, because that much variance can accumulate over time. That's the reason we didn't start with a Riemann sum type definition of the integral. But I'll make one remark: the Ito integral is the limit of the Riemann sums where you always take the leftmost point of each interval. You chop the time interval into pieces, and for each piece, you pick the leftmost point and use it for the rectangle. Then you take the limit. That is the Ito integral.
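Here is a quick numerical illustration (my addition, not the lecture's) of why the endpoint matters: the left-point sums of B dB converge to (B_T² − T)/2, the right-point sums to (B_T² + T)/2, and the gap between them is exactly the quadratic variation T.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, T = 100_000, 1.0
dB = np.sqrt(T / n_steps) * rng.standard_normal(n_steps)
B = np.concatenate([[0.0], np.cumsum(dB)])   # B at the partition points

left_sum = np.sum(B[:-1] * dB)    # Ito: integrand at the left endpoint
right_sum = np.sum(B[1:] * dB)    # right endpoint: a different limit

print("left :", left_sum, " vs (B_T^2 - T)/2 =", (B[-1]**2 - T) / 2)
print("right:", right_sum, "vs (B_T^2 + T)/2 =", (B[-1]**2 + T) / 2)
```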
It will be exactly equal to this thing, the inverse of our Ito differentiation. I won't be able to go into detail. What's more interesting: if instead you take the rightmost point all the time, you get an equivalent theory of calculus. It looks really, really similar to Ito's calculus, and it's coherent in itself, so there is no logical flaw in it. It all makes sense; the only difference is that instead of a plus in the second-order term, you get a minus. Let me just make this remark, because it's a purely theoretical point, but I think it's really cool. Remark: there's an equivalent version-- maybe equivalent is not the right word, but a very similar version-- of Ito calculus in which, basically, dB_t squared is equal to minus dt. That changes a lot of things. But this part is not that important-- just cool stuff. Let's think about this fact a little bit more. Taking the leftmost point all the time means the following. Say you want to make a decision for the time interval between t_i and t_(i+1), and let's say it's about the stock price: you want to say how many stocks you held in this time interval. In the real world, the only choice you have is to make the decision at time t_i. Your choice cannot depend on the future. You can't suddenly say, OK, in this interval the stock price increased a lot, so I'll assume that I had a lot of stocks in this interval; and in this other interval, I knew it was going to drop, so I'll take the rightmost point and assume I only had this many stocks. You can't do that. Your decision has to be based on the leftmost point, because of time-- you can't see the future. And the reason Ito's calculus works so well in our setting is exactly this fact: it has built into it the fact that you cannot see the future. Every decision is made based on the leftmost time; if you want to make a decision for a time interval, you have to make it at the beginning. That intuition is hidden inside the theory, and that's why it works so well. Let me elaborate on this a little more-- the definition of processes where, at time t, you're only allowed to use the information up to time t. Definition: delta_t is an adapted process-- adapted to another stochastic process X_t-- if, for all values of the time variable, delta_t depends only on X_0 up to X_t. There are a lot of vague statements in here, but what I'm trying to say is this: assume X is the Brownian motion underlying the stock price. Your stock is changing, you want to come up with a strategy, and you want to say that mathematically this strategy makes sense. If your strategy makes its decision at time t based only on the past values of your stock price, then it's an adapted process. This defines the processes that are reasonable, that cannot see the future. And in terms of strategies: if delta_t is a portfolio strategy, these are the only meaningful strategies that you can use. And because of what I said before-- because we're always taking the leftmost point-- adapted processes fit very well with Ito's calculus. They all come into play together. Just a few examples. First, a very stupid example: X_t is adapted to X_t. Of course-- at time t, X_t depends only on X_t, nothing else. Two: X_(t+1) is not adapted to X_t.
That is maybe a little bit vague, so call it Y_t equals X_(t+1). Y_t is the value at time t plus 1, and it's not based on the values up to time t-- just a very artificial example. Another example: delta_t equals the minimum of X_s over s from 0 up to t. This is adapted, and I'll let you think about why. The fourth is quite interesting. Suppose T is fixed, some large integer or some large real number. Then you let delta_t be the maximum of X_s over s between t and T. It's not adapted. What is this? It means that at time t, I'm going to take the maximum of all the values in this part-- the future. It refers to the future, so it's not an adapted process. Any questions? Now we're ready to talk about the properties of the Ito integral. Let's quickly review what we have. First, I stated Ito's lemma-- that is, differentiation in Ito calculus. Then I defined integration using differentiation: integration is the inverse operation of differentiation. But this integration also has an alternative description in terms of Riemann sums, where you take the leftmost point as the reference point of each interval. And, as you saw, this naturally carries the concept of using the leftmost point; to abstract that concept, we came up with adapted processes-- very natural processes, like the real-life strategies we can think of. Now let's see what happens when you take the integral of adapted processes. The Ito integral has really cool properties. The first is about the normal distribution. B_t has normal distribution with mean 0 and variance t. So if your stochastic process is some constant c times B_t, then, of course, it's normal with mean 0 and variance c squared t-- it's still a normal variable. In integral form, that is the integral of a constant sigma dB_t: if sigma is a fixed constant, then when you take the Ito integral of sigma dB_t, at each time you get a normal distribution. This is like saying that the sum of normal variables is also normal-- that fact is hidden inside, because an integral is like a sum in the limit. And this can be generalized: if delta_t is a process depending only on the time variable-- so it does not depend on the Brownian motion-- then the process X_t equal to the integral of delta_t dB_t has a normal distribution at all times, just like this. We don't know the exact variance yet-- the variance will depend on the integrand-- but still, it's like a sum of normal variables, so we have a normal distribution. In fact, it just gets better and better. The second fact is called the Ito isometry, and it lets us compute the variance. Yes? AUDIENCE: Can you put that board up? PROFESSOR: Sure. AUDIENCE: Does it go up? PROFESSOR: This one doesn't go up. That's bad. I wish it did. So-- the Ito isometry, which can be used to compute the variance. B_t is a Brownian motion, delta_t is adapted to the Brownian motion. Then the expectation of the square of your Ito integral-- that's the variance of the Ito integral of your adapted process-- is equal to something cool: the expectation of the integral of delta_s squared ds. The square just goes inside. Quite nice, isn't it? I won't prove it, but let me tell you why. We've already seen this phenomenon before-- this is basically quadratic variation, and the proof also uses it. If you take delta_s equal to 1-- sorry, I was using Korean-- equal to 1 at all times, then what we have inside is just a Brownian motion, B_t.
So on the left you get the expectation of B_t squared, and on the right, what you get is t-- because when delta_s is equal to 1 at all times, integrating from 0 to t gives you t on the right-hand side. That's what it's saying. And that was the content of quadratic variation, if you remember: we were summing the squares-- maybe not exactly this, but you're summing squares over small intervals. So that's a really good fact that you can use to compute the variance: you have an Ito integral, and its second moment can be computed in this simple way. That's really cool. And one more property-- this one will be really important; you'll see it a lot in future lectures. When is an Ito integral a martingale? What's a martingale? A martingale means that if you have a stochastic process, then at any time t, whatever happens after that, the expected future change is 0. The process doesn't have any natural tendency to go up or go down. No matter at which point you stop the process and look at the future, it has no natural tendency to go up or down. In formal language, it can be defined as: the expectation of X_s given F_t equals X_t, for s bigger than t, where F_t is the information generated by X_0 up to X_t. So if you take the conditional expectation based on whatever happened up to time t, that expectation is just whatever value you have at that time. Intuitively, that just means there is no natural tendency to go up or go down. The question is: when is an Ito integral a martingale? If g is adapted to B_t, then the integral of g dB_t is a martingale-- as long as g is not some crazy function. One way to be reasonable is to have a bounded L^2-norm; if you don't know what that means, you can safely ignore it. Basically, if g doesn't grow too fast, then in most cases this integral is a martingale. If you flip it around-- remember, the integral was defined as the inverse of differentiation-- then if dX_t is equal to some function mu, depending on both t and B_t, times dt, plus sigma dB_t, what this means is that X_t is a martingale if that mu is 0 at all times, always. And if it's not 0, you have a drift, so it's not a martingale. That gives you a classification. If you look at an equation like this-- it's called a stochastic differential equation-- and it doesn't have a drift term, the process is a martingale; if it has a drift term, it's not a martingale. That will be really useful later, so try to remember it. The whole point is: when you write down a stochastic process as something times dt plus something times dB_t, the dt term contributes the tendency, the slope of whatever is going to happen in the future, while the dB_t term is like a variance term. It adds variance to your stochastic process, but it doesn't add or subtract value over time-- it only adds variation, fairly. Remember that; it's a very important fact. You're going to use it a lot-- for example, in pricing theory. In pricing theory, you come up with some strategy and look at its value. Let's say X_t is the value of your portfolio over time. Let me go over it slowly. First you have a financial derivative, like an option on a stock. Then you have your portfolio strategy. Assume that you have some strategy that, at the expiration time, gives you the exact value of the option.
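(A quick numerical check of the two facts just stated-- my addition, not from the lecture-- using the adapted integrand delta_t = B_t: the Ito integral of B dB has mean zero, as a driftless martingale should, and its second moment matches the Ito isometry value E[∫₀ᵀ B_t² dt] = ∫₀ᵀ t dt = T²/2.)

```python
import numpy as np

rng = np.random.default_rng(8)
n_paths, n_steps, T = 100_000, 400, 1.0
dt = T / n_steps
dB = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
B = np.cumsum(dB, axis=1) - dB            # left endpoints: B at t_0..t_{n-1}

ito_integral = np.sum(B * dB, axis=1)     # int_0^T B_t dB_t, path by path
print("mean (martingale => ~0):", ito_integral.mean())
print("E[integral^2]:", (ito_integral**2).mean(), "theory T^2/2 =", T**2 / 2)
```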
Now you look at the difference between these two stochastic processes. The point is this: if, in the difference, you can somehow get rid of the variance term, then the drift term governs the value of your portfolio, no matter what happens. If that drift is positive, it means you can always make money-- because there's no variance, you make money for sure. That's called arbitrage, and you cannot have that. So when the variance goes to 0, the drift also has to go to 0. But I won't go into further detail, because Vasily will cover it next time. Just remember the flavor: when you write something down in stochastic differential equation form, one term is a drift term and the other is a variance term, and if you don't have drift, it's a martingale. That is very important. Any questions? That's kind of the basics of Ito calculus. I will give you some exercises on it, mostly basic computation exercises, so that you'll get familiar with it. Try to practice. And let me cover one more thing, called the Girsanov theorem. It's related, but what we did so far is really the basics of Ito calculus, so if you have any questions on it, please ask me right now, before I move on to the next topic. The last thing I want to talk about today-- here is the underlying question. Suppose you have two Brownian motions: B, without drift, and another one, B tilde, a Brownian motion with drift. These give two probability distributions over paths. According to B_t, you're more likely to see a sample path with no drift. According to B tilde, you have some drift: a typical path will follow this line. The question is this: can we switch between the two probability distributions by a change of measure? Let me explain a little more what that really means. Assume that you're looking at a Brownian motion from time 0 up to time T, some fixed time interval. According to B, a sample path omega has some probability-- P(omega), a p.d.f. given by the Brownian motion B. And then you have another p.d.f., P tilde of omega, given by B tilde. The question is: does there exist a Z, depending on omega, such that P of omega is equal to Z times P tilde of omega? Do you understand the question? Clearly, if you just look at them, they're quite different-- the paths that you get according to the two distributions are quite different-- so it's not clear why we should expect this at all. You'll see the answer soon. But first let me discuss all this in a different context. Just forget about the Brownian motion for a moment. This concept-- changing from one probability distribution to another-- is a very important concept in analysis and in probability in general. And there's a name for this Z, for this change of measure: if Z exists, it's called the Radon-Nikodym derivative. Before that, let me introduce one more notion. Suppose P is a probability distribution over some set Omega, and you have another probability distribution P tilde. We define P and P tilde to be equivalent if P of A is greater than zero if and only if P tilde of A is greater than zero, for all events A. These probability distributions describe the probabilities of subsets.
Think about a very simple case. The background set is 1, 2, and 3. P gives 1/3 probability to 1, 1/3 probability to 2, 1/3 probability to 3. P tilde gives 2/3 probability to 1, 1/6 probability to 2, 1/6 probability to 3. So we have two probability distributions over some space. They are equivalent if, whenever you take a subset of your background set, the two probabilities are zero or nonzero together. Say A is equal to {1, 2}. According to probability distribution P, the probability you fall into this set A is 2/3. According to P tilde, it's 5/6. They're not the same-- the probabilities themselves are not the same-- but the condition is satisfied: when one is 0, the other is 0, and when one is not 0, the other is not 0. And you can check that it's always true here, because all the point probabilities are positive. On the other hand, suppose P tilde instead gives 1/3 to 2 and 0 to 3, and you take your A to be {3}. Then you have P of A equal to 1/3, while P tilde of A is 0. This means that, according to probability distribution P, there is some probability that you'll get 3, but according to probability distribution P tilde, you have no probability of getting 3. So they're not equivalent in this case. If you think about it this way, then it's really clear. The theorem says-- and this is a very important theorem in analysis, actually-- there exists a Z such that P of omega is equal to Z of omega times P tilde of omega, if and only if P and P tilde are equivalent. You can change from one probability measure to another probability measure just by multiplication, if and only if they're equivalent. And you can see that it fails when they're not equivalent: you can't turn a zero probability into a 1/3 probability by multiplication. So in the finite world this is a very intuitive theorem, but what it's saying is that it's true for all probability spaces. And this Z is called the Radon-Nikodym derivative. Our question is, are these two Brownian motions equivalent? The paths that the Brownian motion without drift takes and the paths that the Brownian motion with drift takes-- are they kind of the same, just skewed in distribution, or are they fundamentally different? That's the question. And what Girsanov's theorem says is that they are equivalent. To me, it came as a little bit non-intuitive. I would have imagined that these two are not equivalent. The paths have very different natural tendencies. As time goes to infinity, the two families of paths will look a lot different, because when you go really, really far, the paths which have drift will be really close to the line mu times t, while the paths which don't have drift will be really close to the x-axis. But still, they are equivalent. You can change from one to the other. I'll just state the theorem without proof. And this will also be used in pricing theory. I'm not expert enough to tell you exactly why, but basically what you do is switch some stochastic process into a stochastic process without drift, thus making it into a martingale. And martingales have a lot of meaning in pricing theory, as you'll see. So this has applications. That's why I'm trying to cover it, although it's quite a technical theorem. Try to remember at least the statement and the spirit of what it means. It just means these two are equivalent; you can change from one to the other by a multiplicative function. Let me just state it in a simple form. GUEST SPEAKER: If I could just interject a comment. PROFESSOR: Sure.
GUEST SPEAKER: With these changes of measure, it turns out that all of these theories with continuous time processes have an interpretation if you discretize time, and consider finer and finer discretizations of the process. And with this change of measure, if you consider problems in discrete stochastic processes like random walks-- say you're gambling against a casino or against another player, and you look at how your winnings evolve as a random walk-- depending on your odds, your odds could be that you will tend to lose. So there's basically a drift in your wealth as this random process evolves. You can transform that process, basically by taking out your expected losses, into a process which has zero change in expectation. And so you can convert these gambling problems where there's drift to a version where the process, essentially, has no drift and is a martingale. And the martingale theory in stochastic process courses is very, very powerful. There are martingale convergence theorems, so you know that the martingale has a limit-- there's convergence of the process-- and that applies here as well. PROFESSOR: You will see some surprising applications. GUEST SPEAKER: Yeah. PROFESSOR: And try to at least digest the statement, so that when a guest speaker comes and says "by the Girsanov theorem," you actually know what it is. There's a spirit to it. This is a very simple version-- there are a lot of more complicated versions, but let me just do this one. So P is a probability distribution over paths from [0, T] to the real line. What this means is just the paths of a stochastic process defined from time 0 to time T. These are the paths defined by a Brownian motion with drift mu. And then P tilde is the probability distribution defined by a Brownian motion without drift. Then P and P tilde are equivalent. Not only are they equivalent-- we can actually compute their Radon-Nikodym derivative. And the Radon-Nikodym derivative Z, evaluated at time T, which we denote like this, has a nice closed form. Let me just tell you a few implications of this. Now, assume you have, let's say, the value of your portfolio over time. That's a stochastic process. And you measure it according to this probability distribution-- say it depends on a stock price, and the stock price is modeled using a Brownian motion with drift. What this is saying is that, instead of computing this expectation in your probability space (Omega, P), defined by the distribution with drift, you can compute it as an expectation in a different probability space. You transform problems about Brownian motion with drift into problems about Brownian motion without drift. And the reason I have Z tilde instead of Z here is because I flipped the direction-- if you want to use this Z, what you really should have here is the expectation weighted by Z. I don't expect you to really be able to do computations with this just by looking at the theorem once. Just really try to digest what it means and understand the flavor of it: that you can transform problems in one probability space into problems in another probability space. And you can actually do that when the two distributions are defined by Brownian motions, where one has drift and one doesn't. How we're going to use it is we're going to transform a non-martingale process into a martingale process.
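The closed form referred to above is on the board, not in the transcript; in its standard statement (with the sign convention where the reweighting adds the drift mu, which is an assumption on my part about the direction the lecture uses) it is Z_T = exp(mu·B_T - mu²·T/2). Here is a small Monte Carlo sketch of the implication just described: an expectation under the drifted Brownian motion computed from driftless paths, reweighted by Z_T. The payoff and all parameters are arbitrary illustrative choices.

```python
import numpy as np

# Girsanov in action: E[f(path)] under drift mu can be computed from
# driftless paths, reweighted by the Radon-Nikodym derivative Z_T.
rng = np.random.default_rng(1)
mu, T, n_paths = 0.7, 1.0, 2_000_000

B_T = rng.normal(0.0, np.sqrt(T), size=n_paths)  # terminal values, no drift
f = lambda x: np.maximum(x, 0.0)                 # some payoff of the terminal value

direct = np.mean(f(B_T + mu * T))                # simulate the drift directly
Z = np.exp(mu * B_T - 0.5 * mu**2 * T)           # change-of-measure weight
reweighted = np.mean(Z * f(B_T))                 # driftless paths, reweighted

print(direct, reweighted)  # the two estimates agree up to Monte Carlo error
```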
When you change it into a martingale, it has very good physical meaning. That's it for today. You only have one more math lecture remaining, and maybe one or two homeworks-- but if there are two, the second one won't be that long. And you'll have a lot of exciting guest lectures, so try not to miss them.
MIT_18S096_Topics_in_Mathematics_w_Applications_in_Finance
19_BlackScholes_Formula_Riskneutral_Valuation.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So let's start with a simple but quite illustrative example. Suppose you're a bookie. What a bookie does-- he takes bets on the horses, sets the odds, and then pays money back, probably collecting a fee somewhere in between. So suppose he is a good bookie and he knows the horses quite well, and there are two horses. He knows for sure that one horse has a 20% chance of winning and the other horse has an 80% chance of winning. Obviously, the general public doesn't have all of this information, so they place their bets slightly differently: there is $10,000 bet on the first horse and $50,000 bet on the second horse. Well, the bookie is sure that he possesses good information. So suppose he sets the odds according to the real-life probabilities-- he sets them four to one. What would be the possible monetary outcomes of the race for him? Suppose the first horse wins. Then what happens? He has to pay back the $10,000 stake and four times more, so he pays out $50,000. And he collected $60,000 in bets, right? So he can keep $10,000 out of it. OK. So what happens if the other, more probable horse wins? Well, he'll have to pay back the $50,000 and one quarter of it, which is $12,500. So at the end, he'll pay 62 1/2 thousand, while he collected $60,000, right? So in this situation, he will lose $2,500. Well, all in all, he expects to make nothing: 20% of $10,000 minus 80% of $2,500 is exactly zero. So he probably could collect enough fees to cover his potential loss. But there is certainly variability in the outcomes. He can win a lot. He can lose some. Now, what if he forgets his knowledge about the real-life probabilities of the horses winning or losing, and instead sets the odds according to the amounts which were already bet-- according to the market, effectively? So what if he sets the odds five to one, according to the bets placed? Well, in this situation, if the first horse wins, he pays back 10 plus 5 times 10, so 60. He nets 0. And if the second horse wins, he pays back 50 plus 1/5 of 50, which is another 10. Again 60. So no matter which horse wins, he nets 0. We're 100% sure. And if he collects some fee on top of it, he will make a riskless profit. And that's how, actually, bookies operate. So it's a simple example. But it gives us a first idea of how a risk-neutral framework and risk-neutral pricing work.
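As a transcript, the board arithmetic isn't shown; here is the same computation as a small script. The layout and function name are mine, just for illustration.

```python
# The bookie example in numbers. Odds quoted as "o to 1": a winning $1 bet
# returns the $1 stake plus $o of winnings.
bets = {"horse1": 10_000, "horse2": 50_000}
pool = sum(bets.values())  # $60,000 collected in total

def bookie_pnl(odds):
    # P&L for the bookie in each possible outcome: pool collected minus
    # (stake + winnings) paid back on the winning horse.
    return {h: pool - bets[h] * (1 + odds[h]) for h in bets}

# Real-probability odds (20%/80% -> 4-to-1 and 1/4-to-1): risky outcomes.
print(bookie_pnl({"horse1": 4, "horse2": 0.25}))   # {horse1: +10000, horse2: -2500}

# Market-implied odds (5-to-1 and 1/5-to-1): zero either way -- riskless.
print(bookie_pnl({"horse1": 5, "horse2": 0.2}))    # {horse1: 0, horse2: 0}
```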
So we are not in the business of making bets on horses here. We are in the business of pricing derivatives. We will talk about the simplest possible derivatives-- mostly derivatives on stocks. But there are more complicated derivatives, whose underlying could be interest rates, bonds, swaps, commodities, whatever. A derivative contract is, generally speaking, some formal pay-out connected to an underlying. Usually, the underlying is a liquid instrument which is traded on exchanges. And the derivative may be traded on exchanges too-- actually, quite a few equity options are traded on exchanges. But in general, they are over-the-counter contracts where two counterparties just agree on some kind of pay-out. One of the simplest derivatives is a forward contract. So what is a forward contract? A forward contract is a contract where one party agrees to buy an asset from another party for a price which is agreed today. Usually, this forward price is set in such a way that right now, no money changes hands. Right? And here is an example. Suppose there is a stock which, right now, is priced at $80. And this is the forward for two years-- somebody agrees to buy the stock in two years for this price. And not surprisingly, I somehow set this price such that currently the value of the contract is 0. We'll see how I came up with the price. So this blue line is the pay-out, what will happen at the end-- the graph of the pay-off at time T, the determination time or expiry, as a function of the stock price. Obviously, the pay-out is S minus K, where S is the stock price, so it's a linear function. It turns out that the current price is also a linear function, but slightly shifted. And we'll see why it's slightly shifted and by how much. K is usually referred to as the strike price. Another slightly more complicated contract is called a call option. If the forward is an obligation to buy the asset for an agreed price, a call option is an option-- not an obligation-- to buy the asset at a price agreed today. A call option can be viewed as a kind of insurance against the asset going down: the pay-out is always non-negative; you can never lose money. On the forward, you can lose money-- you agree on the price, the asset ends up being lower than this price, but you still have to buy it. Right? Here, if the asset ends up at expiry below the strike price-- out of the money-- then the pay-out will be 0. If, on the other hand, it ends up above the strike price-- the option is in the money-- then the pay-out will be S minus K, as before. So in mathematical terms, the pay-out is the maximum of S minus K and 0. Right? And that's what happens at expiry time-- this blue line. So what is the price of this option now? Well, obviously it should be slightly above, because even if the asset is now slightly out of the money-- below the strike price-- there is some volatility to it, and there is a probability that we will still end up in the money at expiry. So you should be willing to pay something for this. Obviously, if it's way out of the money, it should be close to 0. Right? On the other hand, if it's way in the money, it should be priced just like the forward. And in fact it is, as we'll see, because the probability of the asset going back to the strike price and below will be low. And the Black-Scholes equation and Black-Scholes formula are exactly the solution for this curved line, which we'll see in a second. Another simple contract, which is kind of dual to the call option, is a put option. A put option, on the contrary, is a bet on the asset going down, rather than up. Right? So the pay-out is the maximum of K minus S and 0. It's kind of reversed-- also a ramp function at maturity. And here is the current price. Again, if it's way out of the money, we expect it to be 0. If it's way in the money, we expect it to be slightly below the pay-off line, just because of discounting. OK. So here are three main points, which we'll try to follow through the class. First of all, we'll see that if we have the current price of the underlying and some assumptions on how the market or the underlying behaves, there is actually no uncertainty in the price of the option-- obviously, once we fix the pay-out. Right? So somehow there is no uncertainty.
It's completely deterministic, once we know the price of the underlying. The other interesting fact, which we'll find out, is risk-neutrality, meaning that the price of the option has nothing to do with the risk preferences of market participants or counterparties. It only depends on the dynamics of the stock-- only on the volatility of the stock. And finally, the most important idea of this class: the mathematical apparatus allows you to figure out what this deterministic option price is now. So let's consider a very simple example-- a very simple, two-period market. Suppose our time is discrete, and we are one step before maturity. Right now, our stock has price S_0. And there is some derivative f_0 with some pay-out. We'll consider a few of those. Right? Also, we'll add to the mix a bit of cash-- some amount of riskless cash B_0. Riskless meaning that it grows exponentially with some interest rate r, and there is no uncertainty: if you have B_0 now, we know that in time dt it will grow exponentially. It will become B_0 times e to the r*dt. So it's a bond, basically-- a zero-coupon bond. Or a money market account, rather. If you go to Cambridge Savings Bank and put $1 in today, then in a year you'll get $1 and basically nothing more, because interest rates are 0. So in time dt, we will assume that with some probability p, our market goes to the state where the price of the stock becomes S_1, our bond grows exponentially-- no uncertainty-- and our derivative becomes f_1. Or with probability 1 minus p-- there are only two states-- our stock becomes S_2, the bond is the same as before, and the derivative is some f_2. So let's start with our simple contract, the forward contract. One could naively approach the problem of pricing the derivative using the real-world probabilities, p and 1 minus p. Right? We know that the pay-out is S minus K. That's given. So one would say, since we are one step before the pay-out, let's just compute the expected value of the pay-out using the real-world probabilities. And actually, what we are looking for here is to set K such that the price now, at time t, is 0. That's the usual convention. So we would set K to this expected value, which depends on the real-world probability and obviously on the stock price at expiry. But we don't know the real-world probabilities. We can guess. We can say, oh, this stock is as likely to go up as down, so it's just an average of the end stock prices, or something else. But it's all hand-wavy, and we would never be right. Instead of doing this-- kind of following the bookie example-- let's try to do something else. Let's think a little bit. We have a stock which is trading on the market now for the price S_0. How about we go to the bank and borrow S_0 dollars right now, and immediately go to the market and buy the stock. So right now we are net 0: we borrowed S_0 and paid it immediately to buy the stock. So we have the stock in hand. Then we'll wait for one period. And at the same time, we enter the short side of a forward contract. So we agree to sell the stock for some price K_0. In dt, in one period of time, the contract expires. We already have the stock, so we just go and exchange it for K_0 dollars. Right? But at the same time, we need to repay our loan, which has now become S_0 times e to the r*dt. This is deterministic, right? We borrowed S_0; in time dt, it became S_0 times e to the r*dt.
So what's our net? The net is K_0 minus S_0 times e to the r*dt. Suppose K_0 is greater than this value. Then we made a riskless profit. There is no risk in the strategy which we proposed. So this is good-- but then why wouldn't everybody do it all day long? On the other hand, if K_0 is less than S_0 times e to the r*dt, that's a loss for sure, and nobody would want to enter the contract, assuming everybody thinks as we did. Which means that in order for our forward to have price 0 now, the strike price has to be equal to exactly this amount. And there is no uncertainty about it. So let's stop and think a little bit-- actually, just to see how it works. That's exactly why I set K to this number. So, by the way, who can tell me which interest rate this implies? Our stock price is $80, our strike is 88.41, and the expiry is in two years, approximately. AUDIENCE: 2.5? PROFESSOR: Let's see. Roughly speaking, without compounding, 88.41 over 80 is about a 10% return over two years, so about 5% a year. And it's actually exactly 5%, exponentially compounded. Well, in a good world-- probably five years ago-- that's how it would work. The two-year interest rate now, the last time I checked, was, I think, 30 pips. We can check where the bond is trading now. All right. Give me a sec. Yep. 32 1/2 basis points. 1.6 basis points up since the morning-- quite a bit, by the way. So right now interest rates are basically 0, and these two lines would be very close if we drew them for two years from today. All right. So coming back to our example. What's important here? How did we arrive at this strike price, or at this price of the forward contract? We took some amount of stock-- in this particular case, worth the whole price of the stock-- we took some amount of cash, and by combining these two pieces, we replicated the final pay-off. Right? And that's the general idea of risk-neutral pricing and the replicating portfolio. What we will try to do, in the rest of the class, is take a pay-off and try to find a replicating portfolio-- maybe more complicated, maybe dynamic-- such that at the end, this replicating portfolio is exactly our pay-off. Right? And what would that mean? Well, obviously it would mean that the current price of the derivative should be the price of our replicating portfolio right now. Right? And that's how risk-neutral pricing works. So we are still in this simple two-state situation, but we will try to price a general pay-off f. Right? And here's how it goes. We will still try to form our replicating portfolio out of some amount of the bond and some amount of the stock-- say a units of the stock and b units of the bond. And we'll try to find a and b such that, no matter what the real-world probability is, at maturity one step ahead we replicate our pay-off exactly. Fortunately, in this particular case, it's very doable. It's just two equations in two variables, so we should be able to do it. We can solve it and find this a and b. Then we'll substitute them in the formula-- take the current price of the stock, which we know, and some cash, and find the current price of the derivative. Right? And this should work for any derivative. It doesn't matter whether it is a forward, call, put, or some complicated option, as long as the pay-off is deterministic at expiry.
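Here is a sketch of that two-equations-in-two-unknowns computation, with made-up numbers (none of these values are from the lecture). The last two lines anticipate the risk-neutral rewriting of the same formula discussed next.

```python
import numpy as np

# One-period replication: find a (units of stock) and b (units of bond)
# so that a*S + b*B matches the payoff in both states; the derivative's
# fair price now must then be a*S0 + b*B0.
S0, B0, r, dt = 100.0, 1.0, 0.05, 1.0
S1, S2 = 120.0, 90.0                       # the two possible stock prices
f = lambda S: max(S - 100.0, 0.0)          # e.g. a call struck at 100

B1 = B0 * np.exp(r * dt)                   # the bond grows deterministically
# Solve  a*S1 + b*B1 = f(S1)  and  a*S2 + b*B1 = f(S2)  (the bond is riskless)
a, b = np.linalg.solve([[S1, B1], [S2, B1]], [f(S1), f(S2)])
price = a * S0 + b * B0
print(a, b, price)

# The same answer via the risk-neutral probability q -- note no real-world p anywhere:
q = (S0 * np.exp(r * dt) - S2) / (S1 - S2)
print(np.exp(-r * dt) * (q * f(S1) + (1 - q) * f(S2)))
```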
An interesting way, though, to look at it is to rewrite this formula slightly, in a way which reminds us of taking an expected value-- and then discounting it, because this is an expected value at some time in the future. And it is a probability, because this number q here is between 0 and 1. But this probability has little to do with the real world. Right? In fact, it's something different. But such a probability exists. And the measure under which our stock behaves like this is called a risk-neutral measure, or martingale measure. And in this measure, as we will see, the value of the derivative is just the discounted expected value of our pay-out. That's what I'm trying to say here. So now let's get into the continuous world. Right? In the continuous world, we'll need some assumptions on the dynamics of our stock, the underlying. And let's make the assumption that it is log-normal. What does it mean that it's log-normal? It means that the proportional change of the stock, over an infinitely small amount of time dt, has some drift mu, and some stochastic component, which is just Brownian motion. Right? So this dW is distributed normally with mean 0 and standard deviation which is actually the square root of dt. That's how Brownian motion works. And it's extremely important that the standard deviation of Brownian motion over dt is the square root of dt. That's how it works.
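A quick numerical sketch of that scaling fact, with arbitrary illustrative parameters: over one small step, the drift contribution is of order dt, while the random contribution is of order sqrt(dt). This is exactly why the second-order term survives in Ito's formula below.

```python
import numpy as np

# Log-normal dynamics over one small step: dS/S = mu*dt + sigma*dW,
# where dW ~ Normal(0, sqrt(dt)).
rng = np.random.default_rng(2)
mu, sigma, dt, n = 0.10, 0.20, 1 / 252, 1_000_000

dW = rng.normal(0.0, np.sqrt(dt), size=n)
dS_over_S = mu * dt + sigma * dW

print(dS_over_S.mean(), mu * dt)              # drift part, order dt (~4e-4)
print(dS_over_S.std(), sigma * np.sqrt(dt))   # noise part, order sqrt(dt) (~1.3e-2)
```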
And again, we will use this idea of the replicating portfolio. What would it mean in this case? We would like to find coefficients a and b, on this infinitely small period of time dt, such that combining small changes in the stock, with coefficient a, and small changes in the bond, with coefficient b, exactly replicates the change in the derivative over this infinitely small time dt. To do this, we'll need to use Ito's formula. Did you talk about Ito already? OK. Cool. That's great. So just to remind you, Ito's formula is nothing more than a Taylor expansion, actually-- the first-order approximation up to dt. But because the standard deviation of the Brownian motion is on the scale of the square root of dt, we need one more term there. Right? So one term is df by dt times dt. Another is df by dS times dS. And the square of dS is of order of magnitude dt, so we need the quadratic term as well. All right. So if this is our df, what we'll do is substitute: we'll substitute df, with dS taken from our dynamics, which looks like this, and dB. Let's not forget that B is deterministic-- there is nothing uncertain about it-- so dB is actually r*B*dt, because our B grows exponentially with interest rate r. So we substitute everything into the formula above. This is just our df with dS expanded and everything. And then we start comparing the terms. One immediate thing to notice: a has to be equal to df by dS, for this to hold. Right? And if you compare the terms near dt, we get this expression here. And that's actually the most important part. Then we use our knowledge that part of our portfolio is deterministic: we take f and a*S to one side-- the bond holding b*B is f minus a*S-- and its change over dt is just r times (f minus a*S) times dt. Then we substitute df once again, not forgetting that a is equal to df by dS. Then we collect all the terms and arrive at this partial differential equation, which basically is a partial differential equation for the current price of any derivative. And if we solve it, then we actually know the price of the derivative. So now, how do we solve this partial differential equation? Well-- a few observations about this equation first. The first observation is that any tradable derivative, with any pay-off-- we made no assumptions about the pay-off-- should satisfy this equation. The other observation is that, as we expected, there is no dependence on the real-world drift or on any probability of the stock going up or down. The only dependence is on the volatility of the stock. Right? And not only have we found the value of the derivative-- most importantly, we actually came up with a hedging strategy. What does it mean that we came up with a hedging strategy? Well, for any time, we found the coefficients a and b such that we have a replicating portfolio. So at any point in time, we can be short the derivative and long the portfolio of the stock itself and some cash, and know exactly how much of each it should be. Here it's more complicated than before: we have to dynamically change these numbers as time develops. Every time dt, we will have to rebalance. But the two sides will replicate each other perfectly. It's like in the bookie's example. We can go to a counterparty, agree on some derivative contract-- probably there will be some fee-- and then we'll go to the exchange and buy the stock, and get the cash from the bank. And we'll maintain some amount of stock and some amount of cash, and we'll be sure that we are hedged. There is no risk in this combination of the derivative and our hedge, so we will just collect a fee on the transaction. And that's actually how the business works. Traders are trading and hedging their positions immediately. I mean, they do take some market risks. But you want to take very little, and very directional, very specific market risks-- not everything. So our strategy gives us a hedging strategy at the same time. And now there are more mathematical but practical consequences: by a certain-- not very easy-- change of variables, we can transform the Black-Scholes equation into the heat equation. Actually, I suggest it as one of the topics for the final paper, for you to work through or check out in the books-- go and understand it. The good part is that the heat equation is well known and well understood, and there are many, many ways to solve it numerically. For simple pay-outs, for calls and puts, we don't have to do it numerically; but if the pay-outs are more complicated or the dynamics is different, then numerical methods will be needed, for sure. So again, to solve this equation, as for any partial differential equation, we'll need some boundary and terminal conditions. These come from the final pay-out of the option, which we know-- we know what happens at expiry, at time T-- and from some boundary conditions, which we can observe graphically. Basically, for the call: at the boundary S equal to 0, the price should be 0 at any time t. And as S goes to infinity, it should approach the forward value-- it should be just the discounted S minus K.
A discounted pay-out. Right? And similarly for the put. So given these conditions, we can solve the equation. And as I said, for the call and put, and for the simple dynamics-- Black-Scholes or log-normal dynamics-- these equations can be solved exactly. Exactly meaning up to this term, the normal distribution function, which still has to be computed numerically, obviously. But here are the formulas. They do look-- and we'll see about it-- like there is some kind of expected value going on. Right? The stock times one probability, minus the discounted strike times another. But these are the formulas, and that's how I drew the lines on the graphs. And as I said, instead of solving the whole partial differential equation, we can approach it from a risk-neutral position and say that, in fact, the price of our derivative now is just the expected value of the pay-out, discounted from maturity-- but not in the real-world measure: in a specific risk-neutral measure. And how do we find this risk-neutral measure? Well, the risk-neutral measure is such that the drift of our stock is actually the interest rate. It's riskless. That's exactly what we saw in our binary example. Right? Even in our binary example, the expected value of our stock under the risk-neutral measure-- meaning using the risk-neutral probability-- was drifting with the interest rate r. The same happens in the continuous case. And that's another good exercise-- and I would accept it as a final paper-- deriving the Black-Scholes formula just by taking the expected value of the call or put pay-out under the log-normal terminal distribution. All right. So for more complicated pay-offs, life becomes more complicated. Finite differences should be used for more complicated pay-offs, or American pay-offs, or path-dependent pay-offs-- tree methods or Monte Carlo simulations. And that's what happens in real life.
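The formulas themselves are on the slides rather than in the transcript; here is the standard Black-Scholes closed form as it is usually written, as a sketch. The example reuses the $80 stock and 88.41 strike from earlier in the lecture, with an assumed 20% volatility that the lecture never specifies.

```python
import numpy as np
from scipy.stats import norm

def black_scholes(S, K, T, r, sigma):
    """European call and put prices under log-normal (Black-Scholes) dynamics."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    put = K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)
    return call, put

# The forward example from earlier: S = 80, K = 88.41, T = 2 years, r = 5%.
print(black_scholes(80.0, 88.41, 2.0, 0.05, 0.20))
```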
Yeah. Now, since we actually have plenty of time, I would like to give a couple more examples of how the idea of the replicating portfolio works. OK. Here is a Bloomberg screen for call options on IBM stock. It was actually taken a while ago-- a few years ago. So here are different strikes for a call option. The current price of the stock is $81.14, and here are the strikes of the call. Obviously, if the option is way out of the money, meaning the strike is very high compared to the stock price, the value of the option is close to 0. If it's way in the money, it is just about S minus K-- S being $81 and, say, the strike being $55, so it's $26. Right? There is some difference, but here it's small, because the difference should be just the discounting, as we know, and these are pretty short-dated options-- they are probably a month long-- so there is not much discounting. So it becomes pretty parallel. It's similar here, right? This changes by 5, this changes by 5-- it's pretty linear. But it becomes non-linear around the money, around the current stock price. Right? So we do observe this behavior. But to tell you the truth-- I didn't put implied volatilities here, but you would observe that the world is not Black-Scholes. What's the assumption of Black-Scholes? The assumption of Black-Scholes is that every option, for any strike, on a given stock, for a given expiry, would have the same volatility. Right? So if we went through the exercise of implying the volatility according to the Black-Scholes formula, from the option price traded on the market and the current stock price, we would find out that the volatility is actually not constant across strikes. It's skewed-- actually, it's smiled. You would find something like this. Which means that the Black-Scholes theory is not perfectly right, and something more complicated should be done. But in some cases, we don't even need to do anything more complicated. One example is the so-called put-call parity. Right? So let's see. Suppose we look at the screen, and we know the prices of call options for all strikes-- well, probably with some granularity, but we know those. But instead of pricing a call, we need to price a put. Somehow, we don't know what the dynamics of our stock looks like. We have a strong suspicion that it's not exactly log-normal-- there is some volatility smile, it's not constant, the world is slightly not Black-Scholes. So how do we price the put? Well, let's stare long enough at the pay-outs of the call and the put. What's the pay-out of a call with some strike? It looks like this. Right? The pay-out of the put, with the same strike, would look like this. So what if we buy a call and sell a put? The combination would go like this-- a straight line. Looks very much like a forward, right? And if we also sell the stock, then at expiry the total pay-out is S minus K, minus S-- just minus K. I think I got the signs correct. Right? And this is just a fixed number, and that's what happens at the pay-out. So if we put on this portfolio now-- buy a call, sell a put, and sell the stock-- we know that at the end, we will for sure pay out K. Right? Which means that right now, at time t, this portfolio-- call minus put minus stock-- must be worth minus K, discounted from expiry to now. Which means that the put, at any time t, is the call minus the stock plus the discounted K. Right? So if we know the prices of calls for every strike K, we don't need any Black-Scholes or anything. We can immediately tell everybody how much the put is. Right? And this relationship is put-call parity. And that's, again, a replicating portfolio. It's a simple replicating portfolio. It's static, meaning that we fix it now and we don't change it until expiry. So it's quite good this way. But that's how it works.
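A small check of that identity, assuming the black_scholes function from the previous sketch is in scope, with the same illustrative numbers. Note that K = 88.41 was constructed as the two-year forward price of the $80 stock, so both sides come out to zero here.

```python
import numpy as np

# Put-call parity: C - P = S - K * exp(-r * T), independent of the model.
S, K, T, r, sigma = 80.0, 88.41, 2.0, 0.05, 0.20
call, put = black_scholes(S, K, T, r, sigma)   # defined in the previous sketch

print(call - put)               # left-hand side, from the option prices
print(S - K * np.exp(-r * T))   # right-hand side, the forward -- model-free
```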
Another example-- and for this one, I actually have a picture. So again, we have the same situation: we know the prices of calls. But instead of pricing a call, we want to price a digital. What is a digital? A digital is a contract whose pay-out is just a step function. Basically, it's a bet on the stock finishing above the strike price K. If at expiry the stock is above K, you get $1. If it's below, you get nothing-- 0. Right? Such an interesting contract. The question is, can we price it, given that we know the prices of calls? And I suggest we use the idea of the replicating portfolio. Any ideas how to do it? It's my typical interview question, so just pretend that you are interviewing. Yep? AUDIENCE: You long the call, and then you short the call, just with a smaller or a higher strike. PROFESSOR: Yep-- a call spread. You're absolutely right. Good, you've got an offer. So here's how it goes. This is the strike K. Right? So let's buy a call with strike K minus 1/2 and sell a call with strike K plus 1/2. Right? So if we combine these two-- if this is 1, it should look something like this. So how will it look? Obviously, here it's 0. Right? Then it rises like this. Right? And after that, it will be what? AUDIENCE: Constant. PROFESSOR: It will be constant. Right? And because this is K minus 1/2 and this is K plus 1/2, the constant will be exactly 1. Right? Good. So our pay-out at the end will look like this. That's good-- but there is quite a bit of slope here. So how can we do better than this? Well, if we buy at K minus 1/4 and sell at K plus 1/4, and combine those, the shape will be exactly the same, but the level will be 1/2. So we need to buy two of those and sell two of those. Right? Well, we might as well go K minus epsilon and K plus epsilon: it'll be the call price at strike K minus epsilon, minus the call price at K plus epsilon, divided by 2*epsilon. Right? This 2*epsilon coefficient is needed to rescale it back to 1. And if epsilon is small, we need a lot of both calls. Right? And that's the approximation of our digital price. And that's actually how people on the market price and, most importantly, hedge digital contracts-- because call contracts are liquid and traded on exchanges, while digitals are way less liquid. So somebody would enter into a digital with a counterparty and hedge it on the exchange with these two calls-- with a call spread. But now tell me, what does this remind you of? Yeah. It's the derivative of the call price with respect to strike-- with a minus sign. Right? Is it surprising? What did our call price look like at expiry? It's a ramp. Right? If we take the derivative of that, what do we get? Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: Right-- a step. So in fact, if we do something even weirder with the pay-out-- take a square of it or something else-- the same idea will apply. All right. So that's basically it. This idea of replicating portfolios is extremely powerful. And in fact, that's what happens in real life. In real life, you have some complicated derivative which you need to hedge. And how do you hedge? You find something else which, to a certain extent, replicates your pay-off. That's what you'll try to do, and this will be your hedge portfolio. Usually, it's dynamic, so you'll have to rebalance. And that's how you basically reduce the risks.
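Here is the call-spread approximation in numbers, again assuming the black_scholes function from the earlier sketch is in scope. In the Black-Scholes world the exact digital price is exp(-r*T) times N(d2), so we can compare the spread against it; all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Digital as a call spread: (C(K - eps) - C(K + eps)) / (2*eps) ~ -dC/dK.
S, K, T, r, sigma, eps = 80.0, 80.0, 1.0, 0.05, 0.20, 0.01

call = lambda k: black_scholes(S, k, T, r, sigma)[0]   # from the earlier sketch
spread_price = (call(K - eps) - call(K + eps)) / (2 * eps)

# Closed-form digital price under Black-Scholes: exp(-r*T) * N(d2).
d2 = (np.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
print(spread_price, np.exp(-r * T) * norm.cdf(d2))     # the two agree closely
```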
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_11_The_ISLMPC_Model.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: So today, we're going to talk about perhaps the most important model in this class, the IS-LM-PC model, which puts together all that we have done up to now. But before we do that, let's talk a little bit about current events. Who knows what that is? Is this exam week or what? [LAUGHS] AUDIENCE: Silicon Valley Bank. RICARDO J. CABALLERO: Silicon Valley Bank, exactly. So Silicon Valley Bank, the 16th bank by asset size in the US, went, essentially, under last Friday. It was shut down by the FDIC last Friday. So that's the decline in the stock value during Thursday and Friday. And then it was shut down, and you see that it's not been traded anymore. So that's a pretty significant event. And the weekend was pretty stressful for anyone involved in this event-- the Treasury, the FDIC, the Federal Reserve, and so on. It was a large bank-- I mean, it's not one of the big systemic banks, if you will. It's not JPMorgan, Citi, Bank of America, one of those banks, which are regulated even differently from these banks. But it still is a pretty large bank, as you see, by asset size, $209 billion, which is comparable to Washington Mutual, which was the largest bank that went under during the global financial crisis, the Great Recession. At that time, there were lots of other banks that went under, but the largest was comparable to this one. And in fact, all the things that were done over the weekend, and are still being done today, are to prevent something like that happening here as well. So it was a pretty significant event. Now, what happened to Silicon Valley Bank? Well, the immediate cause of the failure is what always kills a bank: a run by its depositors. And what you see here is the following. This bank actually grew enormously over the last two, three years-- essentially doubled its asset size. But it began to have net outflows of deposits during 2022. And the reason for that is not that the business was doing poorly or anything. It was simply that this is a bank that serves primarily the tech sector-- startups, companies, and things like that. And those sectors were having a hard time raising new capital in an environment that was not very friendly towards the tech sector. So they began to draw down their deposits, and that's what led to these flows here. Now, eventually, because of this and something I'll explain in a few minutes, SVB decided to issue new equity to cover certain losses they had incurred. And today, in the world of social media, that immediately led to a massive spread of the idea that this bank was in trouble. And then you saw enormous attempts to withdraw deposits. Now, not all of these were fulfilled, but there was massive pressure to withdraw the deposits. And that's the end, always, for a bank that doesn't find an alternative source of funding. And often, for withdrawals of that size, the only alternative source of funding is either that some other bank buys you [CHUCKLES] or that the Fed comes in and gives you a credit line. Anyway, so that was immediate. Whenever you hear about a bank run, the immediate cause of the problem is a run by the depositors on their deposits in that bank. Now, why did this happen in this particular bank? Again, I explained part of why-- you saw those small withdrawals of deposits.
But what happened to them? As I said before, their deposits grew very rapidly over the last two, three years. And then, rather than being very risky lenders-- sometimes when banks grow very rapidly, they do lots of crazy things; they make lots of loans without doing the due diligence process and all that-- that's not what they did. They bought Treasury bonds, the safest assets you can imagine. They bought 10-year Treasury bonds, lots of them. But they bought them at the wrong time. They bought them right before the hike in interest rates that we began to see in 2022. And we already looked at the relationship between interest rates and the price of bonds. If you have a 10-year bond and the interest rate starts going up, the price of that bond starts declining. Now, that is problematic for a bank, but not entirely the end of the story, because it means that the market value of the bonds you are holding among your assets starts declining-- but banks do not need to recognize that loss unless they sell the bonds. The logic is that, well, if the bank just sits on the bond, the bond hasn't really lost any value, in the sense that it will pay the same coupons it was always going to pay. These are US Treasuries, and US Treasuries are not going to default on the coupons-- let's hope that's not going to happen a few months from now. But typically they don't default on coupons. So the regulation is designed in such a way-- perhaps that is a failure; I think there is a problem there-- that they don't need to recognize the losses unless they sell the bonds. So they looked pretty healthy, because they had massive amounts of Treasury bonds and didn't need to recognize any of that. The problem is that when these relatively small withdrawals started, at some point they needed to find a substitute for those funds. They needed to honor the deposits that were being withdrawn. And at that point, they had to sell assets. And when they sold assets, they realized the loss-- because at that point, you have to recognize the loss, since you're not going to hold the bond until maturity and clip all the coupons that come from it. That's the loss that led the CEO to announce that they needed to raise funds to cover the $2.5-billion hole they had as a result of the losses.
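To make the interest-rate mechanics concrete, here is a rough sketch of how a 10-year bond's market value falls when yields rise. The numbers are hypothetical round figures I'm supplying for illustration, not SVB's actual book.

```python
import numpy as np

def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of annual coupons plus face value, discounted at the yield."""
    coupons = face * coupon_rate
    t = np.arange(1, years + 1)
    return np.sum(coupons / (1 + yield_rate) ** t) + face / (1 + yield_rate) ** years

p_before = bond_price(100, 0.015, 0.015, 10)  # bought at par when yields were ~1.5%
p_after = bond_price(100, 0.015, 0.040, 10)   # marked to market after yields hit ~4%
print(p_before, p_after)  # roughly 100 vs 80 -- a ~20% loss if you have to sell
```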
Now, OK, so now we know where the losses came from. And if you notice, the losses were not that big. I mean, this is a bank with $200 billion in assets, and the losses were relatively small. So where is the other leg of the problem? It's here. In the US, deposits are insured up to $250,000. That means, no matter what happens to the bank where you have your money, if that bank goes under and your deposits are below $250,000, the FDIC comes and gives you a check. So there is no risk. If you have a deposit under $250,000, you don't need to worry about this-- you don't even need to read the news about this bank, because you will get your funds. In fact, when the bank was shut down on Friday, the FDIC announced immediately that every depositor under $250,000 could come on Monday and get their money. So there is no issue there. And most banks have a large share of depositors that are small depositors, which means they are covered by this deposit insurance mechanism-- which was designed precisely to prevent runs, because if you don't need to worry about whether you get your money, you don't need to run on the bank. The problem is that this bank was very different in the composition of its depositors. It had primarily business deposits, meaning all these startup companies and so on in the tech sector had their deposits there. And those deposits were much larger than $250,000. I think close to 95% of the deposits were not covered by the FDIC insurance. And it's a very different calculation when you have a deposit that is not covered by insurance and you start feeling that the bank may go under. What do you do? You take your money out-- you send it to JPMorgan, where there is no risk, and wait until this thing is resolved. Now, that's what typically happens, and in this case, it happened even faster than normal. Why? Because many of the depositors, the businesses with deposits there, were startups being seeded by venture capital funds. And the venture capital funds, as soon as they noticed there was a problem here, began to call all the startups and tell them, hey, take that money out of there, because, you know, [CHUCKLES] they may run into trouble. So it was the venture capital world that caused the run, effectively. And that's what happened, OK? So that's the reason for the run. The problem was not that big, but the deposits were very unsafe-- they were not covered. And moving deposits out is very easy. I mean, you just [CHUCKLES] wire your money to another bank. So why wait? Why risk it? And that's what happened. It's called, in economics, a coordination failure. I mean, if everyone freezes and says, OK, nobody takes the money out, this stuff is going to pass, then we're all safe. But since we don't call each other, and we don't trust each other to really leave the money there, we make the call only after we have taken our money out. And since we all think the same way, you get a run on the bank. Now, let me start connecting this a little bit with the kind of things we have done in this course. Actually, bank runs come later in the course as a topic-- crises, speculative attacks, and things like that. But for now, what you have here is an indicator of implied volatility-- this is the VIX, something that's extracted from the prices of options. You don't need to know the details. But the point is that it's one of the main indicators of fear, of how afraid investors are at a given moment in the market. And what you can see here is that this indicator, the VIX, spiked Thursday and Friday-- on Friday it went up very, very rapidly-- and then it stabilized a little. Now, it turns out that over the weekend, you may have heard, the government, the consolidated government, came up with a very massive package to prevent runs on the remaining banks. And also because all of these were business deposits of small companies that used this bank even for their payroll and so on. So what was done this weekend is that all the deposits, not only the ones under $250,000, were guaranteed by the FDIC-- there are mechanisms by which you can activate that. So that means that now all the depositors were made whole. And partly, the reason to do that was to prevent a mess in the payrolls of the small companies and all that had their accounts in this bank.
But it was also to prevent runs on other banks. And on top of this, the Fed now has a line of credit for banks so they don't have to sell their assets. Banks can just pledge the assets to the central bank and get, in exchange, the cash they need. And they can do that without recognizing the implicit loss-- without marking the bonds to market. Had this mechanism existed before, we would not have seen anything like the plunge of SVB. But the whole idea was to prevent other banks from running into that kind of trouble. Now, the markets reacted well to all that overnight and so on. But the VIX kept going up this morning. Now it's coming down again. I mean, there's still a lot of stress. The shares of First Republic Bank, for example, had declined by 60% today, and things like that. So there is still panic going on, OK? And as a result, all these indicators of stress are very stressed out. [CHUCKLES] Remember credit spreads-- I told you about that x that we had several lectures ago, the perceived probability of default of a bond. All those things went up a lot. And the riskier the bonds, and the closer you are to the financial system-- particularly to small banks-- the larger those spreads have become. So x went up a lot. This picture that comes next, I find very interesting from the point of view of this course. What this is, is the following: the market expectation of the next hike by the Fed. The Fed's next announcement on the policy rate happens on March 22. Remember what has been happening: since the US has been running very hot, with lots of inflation, interest rates were increased very rapidly, at clips of 50 basis points-- very large changes in policy rates for a country as large as the US. So we had these big 50 basis point increases. And a few meetings ago, they decided to lower the pace of the increases to 25 basis points, rather than 50 basis points per meeting. So they said, we're going to keep raising interest rates, but we're going to go at 25 basis points. Now, it turns out the data has come in very hot. Remember, we said inflation looked to have peaked, and now it's beginning to turn around and rise again. So what had been happening is that the market said, OK, 25 basis points is the most likely next hike-- you see, the expectation was sort of steady around 30 basis points. Some major players expected the Fed to hike by 50 basis points, not 25 basis points. By early last week, the data came in very hot, so there was a clear indication that inflation was picking up again; the labor market was very strong and so on. So look what happened to the bets. Immediately, the expected value went up-- this is all traded. And the expectation for the next meeting went north of 40 basis points. So essentially, most of the market thought that the next hike would be 50 basis points, OK? But look what happened. Then the problems with this bank began, and look how this plummeted. Today, the expected value is 15 basis points. That means very few people are expecting 50 basis points. A lot of people are still thinking 25. But about an equal share is expecting 0-- a pause in the interest rate hikes by the Fed. And all that is a result of the events of the last two or three days.
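As a back-of-the-envelope reading of those numbers-- a simplification I'm adding, not the actual fed funds futures methodology-- if the market puts weight on only two possible outcomes, the traded expected value pins down the probability of the larger move:

```python
# If the next move is either low_bp or high_bp, the expected value of the
# move implies the probability of the high outcome.
def implied_prob(expected_bp, low_bp, high_bp):
    return (expected_bp - low_bp) / (high_bp - low_bp)

print(implied_prob(40, 25, 50))  # ~0.6 chance of a 50bp hike before SVB
print(implied_prob(15, 0, 25))   # ~0.6 chance of 25bp (vs a pause) after SVB
```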
Yeah? AUDIENCE: What is there to learn from this, I guess, in the bigger structure? Or who's at fault? Is it the people who got really scared-- all these depositors that got potentially scared, or fear-mongering in that capacity? Is it that the banks don't necessarily have-- I mean, I feel like it's an unrealistic-- RICARDO J. CABALLERO: There are many good questions there, and you're going to see a lot of that. Politicians are going to talk a lot about that in the next few days and so on. It's very clear that there was some sort of regulatory failure here. I mean, this bank had doubled its asset size in a year. That's already a red flag. And these guys are regulated by the Fed, so the San Francisco Fed should have been worried about this bank. There are also conventional issues of diversification: it's pretty crazy to have all your savings in one bank, [CHUCKLES] especially if you're not insured. There is also-- remember, after the global financial crisis, there was legislation designed to strengthen the balance sheets of the banks. It made them hold a lot more capital. And if they're systemic, they are subject to stress tests, where regulators go in and check whether their portfolios can survive major macro shocks and so on. That's called the Dodd-Frank bill, OK? So that was done. In 2018, that got partially undone-- and partially undone precisely for this type of bank. And these guys were actually lobbying for that. Because to be really stress-tested and so on by the regulators, you have to be big enough to really be able to leave a big mess. So what these guys and banks like them did is they lobbied a lot, and they got the asset threshold you need to cross in order to be stress-tested raised dramatically. So they were right below the level at which you are monitored very, very closely by the regulators, by the Fed. If you're a systemic bank, the Fed regulates you heavily. These guys were lightly regulated by the Fed because they were below that threshold. So there were regulatory failures-- it's clear that the regulator failed in what it did. Depositors didn't diversify enough. And the bank itself didn't diversify its sources of funding enough. I mean, what is very special about this bank-- and that's what gives us hope that this stuff is not going to spread all around-- is that its funding was all coming from the same sector: large depositors from tech, and so on. A typical bank doesn't have that. It has a much broader base of funding, which is what you need, because otherwise-- so there are lots of lessons for bankers, for regulators, and for macroeconomists as well. To tell you the truth, one of the concerns with the pace at which the Fed has been hiking interest rates is that people were wondering, well, do we know whether something will break at some point? There was lots of concern that something could break. Well, something broke now. And the loss here comes entirely from interest rate hikes: essentially, they got into a portfolio of long-dated bonds right when rates began to rise, so they had losses entirely from that. And that's the risk when you do monetary policy: some people will be stretched out there.
And if you sometimes miss one that is important, that's very costly. And I think that's one of the reasons they wanted to lower the interest rate hikes from 50 basis points to 25 basis points-- because they knew that something could be fragile out there. And this was one of those things. So those are the lessons. Now, I was about to connect this with the things we did a few lectures ago. Look, this is telling you that the markets, when they saw this x going up, are sort of betting that the Fed will not hike interest rates as much-- and, in fact, that it may even pause. Rather than raise the interest rate as was planned, it may even pause. We talked about this in lecture 7. Remember? In lecture 7, when we talked about the expanded IS-LM model, we had this x variable. And we said, look, if x goes up-- that measure of riskiness and so on that increases the cost of borrowing for the private sector-- that is like a shift of the IS to the left. For any given safe interest rate set by the central bank, all of a sudden the cost of borrowing for companies is higher, and therefore this is contractionary. And then we went on, remember, and asked: what should the central bank do in this case, in which x went up? Yeah? AUDIENCE: Lower interest rates. RICARDO J. CABALLERO: That was the next slide, in fact-- lower the interest rate. There's one component of the cost of borrowing that's going up for firms, which is the x. Well, the Fed can offset that by lowering the interest rate. Now, here they're not planning yet to lower the interest rate. They were planning to raise interest rates, and now they're slowing down. That's the bet. So the market knows some basic expanded IS-LM model, because that's exactly what explains what you should anticipate is likely to happen. Anyway, that's where we are at this moment. Any questions about this? Otherwise, I'm going to move to the lecture, really. [CHUCKLES] But I thought we had to talk about it. Well, anyway, if it gets a lot messier-- I'm hoping that it won't, but if it gets a lot messier, then we can add a section at the end. I can replace something with something on banking crises, which is what I teach in one of my graduate courses. It would be fun. Anyway, so now what I want to do is start the IS-LM-PC model. The name is not very creative-- it's pretty obvious what we're going to do here, no? [CHUCKLES] It's going to combine the IS-LM model with the Phillips curve. And what this will do for us: it will allow us to think not only about the impact of a policy or a shock, but also about what happens over time with that shock, OK? Not all the way to the long run, but we call this the analysis of the short run-- what happens in the first few weeks and months-- and the medium run-- say, a year, a year and a half from now. This model will allow us to put all of this together. But so you don't get lost in this: the analysis of the short run essentially remains unchanged. It's our IS-LM model. It's just that, if you give it a little time, you start seeing certain effects get undone and some others get exacerbated, OK? In the short run, IS-LM is still your basic model. But then we're going to see that things happen over time. So remember, the IS-LM model was essentially this: this is equilibrium in the goods market, and then we had an LM which said i equal to i bar.
And so I'm going to replace the LM already inside this, and I get my IS-LM model. So for any given i bar, I can solve out for equilibrium output. Now, here I'm going to adopt a convention-- I didn't want to do it before, but I think at this point it is useful, because it will simplify the diagrams when we draw them-- which is to really think of the Fed as setting the real interest rate. So I'm going to assume for now that, rather than the Fed setting the nominal interest rate, the Fed is setting the real interest rate, OK? So it's setting this. And then we're going to talk about problems, about what happens when that's a bad assumption. I mean, in principle, if the interest rate is not against the zero lower bound, the Fed can always do that. It can say, OK, I'm going to give you the nominal interest rate that, given this expected inflation, gives me the real interest rate I want, OK? That's what the Fed is really trying to do all the time. The Fed is not trying to figure out what is the equilibrium nominal interest rate. It's always trying to figure out whether the real interest rate is at the right level or not for the economy. Now, the tool they have is a nominal interest rate. But they are always thinking about the real interest rate. And sometimes there is a problem, because when you are against the zero lower bound, then you can't affect the real interest rate in the same way. But most of the time you can. And so I'm going to rewrite the IS-LM model now, but I'm going to call this r bar. And the bar is there just to remind you that it's something that the Fed is setting, OK? So that's our IS-LM. Remember the Phillips curve part. That was our Phillips curve, remember? In the last version, we replaced the natural rate of unemployment in there. We had that inflation minus expected inflation was a decreasing function of the unemployment gap. So if unemployment was above the natural rate of unemployment, inflation was lower than expected inflation. And conversely, if unemployment was below the natural rate, inflation was higher than expected inflation. And the situation of the US today is that everything seems to point towards a situation where u is below un. And that's the reason we're seeing this high inflation, OK? Now, what I'm going to do next is go from unemployment to output. You see, I don't have unemployment anywhere here. I have output. So what I want to do is play with the Phillips curve until I write it in the space of inflation and output, not inflation and unemployment, so I can put the two curves together. That's what I want to do. Remember, I want to [CHUCKLES] merge here the IS-LM with the PC, so I want to put them in terms of the same variable. So remember, we have operated with a very simple production function in which output is equal to employment. Remember? That's what we assumed. Employment, we call it N. Well, I can rewrite N, employment, as the labor force times 1 minus u, the unemployment rate. That's employment, OK? So I can think of output as that. Similarly, I can define what we call-- we don't call it natural output. We call it potential output, no? Potential output is defined as the output that you get when unemployment is at the natural rate of unemployment, OK? So that's our definition, three lines.
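Written out, the three lines are (alpha is the slope parameter of the Phillips curve in the textbook version; everything else is notation already defined):

Y = N = L(1 - u)   and   Yn = L(1 - un)
u - un = -(Y - Yn)/L
pi - pi^e = -alpha(u - un) = (alpha/L)(Y - Yn)

So a positive output gap, Y above Yn, is the same thing as unemployment below the natural rate, and it maps into inflation above expected inflation.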
Potential output is the output you get-- which, with this production function, is the employment you get-- when you are at the natural rate of unemployment. And now we can construct the difference, this minus that. This is something we call the output gap. Typically, when people talk about issues of monetary policy, it's often described in terms of this variable more than the unemployment gap. People talk about the output gap. If the output gap is positive, that means output is above potential output. When the output gap is negative, output is below potential output. So I can rewrite this. This minus that is just that. And now I can replace u minus un here with minus Y minus Yn over L. And I get the Phillips curve now written in terms of the output gap and inflation. So this says that when output is above potential output, when the output gap is positive, then inflation exceeds expected inflation. Conversely, when output is below potential output, then inflation is below expected inflation, OK? But the logic is exactly the same as the logic we have here. Why is it that this happens? Well, because when output is above potential output, that means, also, that unemployment is lower than the natural rate of unemployment, OK? So that's the logic. Any question about this? No? OK, good. So anyway, now we have a Phillips curve and our IS-LM model. So let's put them together. And suppose for now-- this is the example I want to carry around-- that expected inflation is equal to lagged inflation. So this is a case in which expected inflation is not well anchored. And then we're going to talk about what happens when it's anchored and not anchored. So whatever is this year's inflation, that's what you expect for next year, OK? So here I have an example in which I'm plotting our IS-LM, using, remember, the real interest rate here. And in this diagram down here, I'm plotting the Phillips curve. OK, so first thing, let's look at this Phillips curve. Why is it upward sloping? Here is output, here is Yn, and this is the left-hand-side variable. So it's obviously increasing in output. Why is that? Well, because if output grows, that means unemployment goes down. That means wages go up, prices go up, and you get inflation. That's the mechanism, OK? So in this particular example, this is the real interest rate that the Fed has set at this moment, and that's the equilibrium output. What I'm trying to tell you here is that nothing has changed in the way you calculate equilibrium output. For that, you only need this top diagram in the short run. If I tell you what the real interest rate set by the Fed is-- which is a decision by the Fed-- then I know where my IS is. I can pin down output. I don't need this diagram to really pin down equilibrium output, OK? So nothing is different there. But-- and this is an example-- in this particular case, we have that inflation is rising here. And the question is, why? What I'm trying to say is that, for this IS, which is a function of fiscal policy, of how confident consumers are, and stuff like that, if the Fed chooses this real interest rate, we end up with this output. But it turns out that this level of output is increasing inflation. And the increase in inflation I can read here. I see the change in inflation is positive here. Why is this happening?
AUDIENCE: If you're changing the output, that means you have a different level of employment, which changes the expected inflation, which will raise interest rates. RICARDO J. CABALLERO: Yeah. Well, actually, here, I don't need to take-- this diagram would have also worked with expected inflation as a constant. Here, I'm more looking at what happens to inflation. I'm saying, if output is above the natural rate of output, then inflation is above expected inflation. But I can take expected inflation as a constant. In fact, here it is a constant-- a constant in the sense that it's given at time t, because it's the previous year's inflation. But what is important is that you have too much aggregate demand. This economy is running very hot. [CHUCKLES] If the output gap is positive, then that is going to lead to inflationary pressures. In this particular model, where expected inflation equals lagged inflation, this is pretty bad, because not only do you get inflation above the target of the Fed, but inflation is rising over time. So this is a case in which this central bank is setting the real interest rate too low, OK? Japan is doing a little bit of this. But they have a reason, which is that they have had inflation so low that it makes sense for them to build a little inflation. In the US, it made less sense. The US got into trouble because it was in a situation like this for a long period of time. I mean, the reason we have 6% inflation today-- well, it depends which indicator you use-- is because the US experienced a year, a year and a half, with a situation like this. And that's why sometimes people say the Fed was behind the curve. They were, for a variety of reasons. One, initially, potential output declined because of all the COVID-related issues. They expected that to recover quickly. So they said, let it go, because I'm not going to start moving my policy rate around for something that will recover quickly as soon as COVID is gone. Well, it took longer to recover, and then came the Russian war shock and so on. So potential output moved to the left, for a start. And second, because of enormous policy support, primarily, and the fact that households were able to save a lot during COVID, there was a lot of pent-up demand. Then we had enormous aggregate demand when we came out of it. And the real interest rate that we had was just way too low for all that aggregate demand and that low potential output. So we were in a situation like this. And inflation began to climb. Initially, expected inflation was very well anchored. And then we began to lose that anchor. Then we recovered, and now we're losing it again. We shall see what happens after this current episode. But that was exactly the situation of the US and of most economies around the world. China is a different story. But in most economies around the world-- certainly Europe, all of them, continental Europe and the UK, Latin America-- regardless of where you look, the situation was like that. Real interest rates were way too low for the natural rate, given the level of potential output we had at that time. And so we got into a situation like this. OK, so that's the short run. In the short run, if you have an interest rate that is very low-- I mean, again, in the short run, you know how to determine output given a real interest rate.
And now you can say a little more: OK, but that's going to cause inflationary pressures, up or down, depending on whether you are to the right or to the left of the natural rate of output. That's the new twist on the short run that you know. But now let's start moving over time. So what happens over time? Well, first, let me define something. Potential output, we know what it is. But I'm going to define something which is called the natural rate of interest. Sometimes it's called the neutral interest rate. Sometimes it's called the Wicksellian interest rate. Let me not get into that story. But I'm going to define implicitly the natural rate of interest, or the neutral rate of interest-- or some people call it r star. You may have heard of r star in the newspapers. People talk about r star. When they are talking about r star, they're talking about that, OK? It's simply the interest rate that makes potential output the equilibrium of the goods market, OK? So I'm solving implicitly. I say, I want to get, as a result of this equilibrium here, the natural rate of output. What is the interest rate I need to pick so that that's the case? So I want to get the natural rate of output here, the potential output. I know that there is a real interest rate at which that holds. It's a matter of looking for the interest rate that does that. And in this particular diagram, it's this, you see? At this interest rate, the [INAUDIBLE] equilibrium output is exactly the natural rate of output. So what I know is that, eventually, the economy will have to go there. Eventually, the economy will have to go there. So how will this happen in practice? The way it will happen is: OK, this is the point we were at in the previous slide. So we're here. Well, that's building inflationary pressure. What do you think will happen? Inflation starts climbing. Who will react? Who's in charge of not letting inflation get carried away? The central bank, the Fed. So what they start doing is hiking interest rates, which is exactly what they have been doing. And as they hike interest rates, they start increasing the real interest rate until they get to this point, OK? That's the idea. So the point is that, in the medium run, real variables determine real variables, not monetary policy. Monetary policy has to follow whatever it is that the economy throws at it. Central banks have to follow whatever the natural real interest rate is. If they made a mistake and set a real interest rate which is not consistent with stable inflation, they're going to learn about it. And over time, they're going to have to fix that. And when will the problem go away? Only when they reach the natural rate of unemployment. And so that's what will happen. As the real interest rate starts going up, from here to there, you start seeing the change in inflation in this particular model declining and declining. And when you get to the natural rate of output, at last you get stable inflation. Is this adjustment clear? OK, good. OK, so this is what happens in the medium run. So the medium run is described as moving from that point here-- the whole process of going back to a situation where we converge to the natural rate of interest, and therefore the natural rate of output and the natural rate of unemployment and all these kinds of things. So that's it-- the short run is whatever output is.
That's your IS-LM. The medium run is whatever the natural rate tells you should be the natural rate of unemployment, the natural rate of output, and therefore the natural rate of interest-- the Wicksellian interest rate, or the neutral interest rate, or r star. That's all pinned down there in the medium run. And the transition is obviously going from the short run, the pure IS-LM, to the natural rate type of analysis. Now, I assumed here-- and this is related to your answer-- that expected inflation was unanchored; that is, that expected inflation was equal to lagged inflation. I told you before, that's not where central banks want to be, because it means that, if you mess up and inflation is high, then, in order to bring it down, you also have to bring down expected inflation. You need to cause a recession. And you can see that here. So suppose that the central bank starts with a level of inflation that it likes. Suppose that this is the model-- what I said before, expected inflation is equal to lagged inflation. Suppose that the central bank starts at the level of inflation that it likes, 2% in the US, OK? But suppose that, for whatever reason, whatever shock, it finds itself with a real interest rate that is too low. That means inflation exceeds expected inflation, which was 2%. Well, suppose this gap is 2%. Then by next year, inflation is 4%, OK? And in fact, in the US it got to 9%. If you're at a 9% level of inflation-- and this is the model of expected inflation you have-- then, Houston, you have a problem, because it's not enough to raise interest rates up to this point. Suppose that the Fed says, well, I don't like 9%. That clearly tells me that my output is way above the natural rate of output. I'm going to hike interest rates. And somebody tells the Fed, this is your natural interest rate. A very good research department tells them, look, this is your natural interest rate. Hike it to there. Suppose the Fed hikes the interest rate to that point. What happens? So the Fed realized here that this was going really wrong. They end up with 9% inflation. But somebody tells them, look, this is your neutral interest rate, your r star. Bring it there. And the Fed immediately reacts and takes it there. What happens? Is the Fed happy with the final outcome? And suppose the research department was really good, so they got it right. So the r star was the right r star, and the Fed implemented that policy. Suppose that the real interest rate they had was minus 1%-- I'm telling you numbers that are not that different from what we had-- minus 1%. And the research department tells them, no, your rn is really 1%. So they hike the interest rate by 2% immediately. And now what happens? So I guess that question is a little vague. But I'm asking, is the central bank happy now that it got the right natural rate, the neutral rate? It's called the neutral rate. Well, I'm telling you, I wouldn't be asking you if the Fed was happy after that. So why do you think they are unhappy? Why is the Fed unhappy after that? Not unhappy with the policy; what I'm saying is, the adjustment is not completed at that point. Why? And I'm trying to make the bigger point of why central banks are so eager to maintain credibility and not have this kind of model of expected inflation.
They want the markets to believe that they have a target and that they will go to that target, so that expected inflation is set equal to a constant equal to the target. That's what they dream of, because if they don't get that-- if they get this instead-- things are nasty. And I'm trying to describe that nastiness; I mean, what is happening now. So what happens here? OK, so we went here. Inflation got to 9%. And now the Fed, boom, hikes interest rates by 200 basis points. It got to the natural rate. We're back at output equal to the natural rate of output. What is happening to inflation here? So now we're back at the natural rate. What is happening to inflation? Well, this diagram tells you something very specific. It says it's not changing. So now your inflation, at least, is not changing, OK? So that's good. Yeah, at least it's not rising. Here it was rising. Now it's not changing. But what is the problem? Inflation not changing when you're at 9% is not a good outcome for the Fed. The Fed wants 2%, not 9%, OK? So with this model of expectations, you need to do more than that, because you need to bring expected inflation down. So you need to overshoot. A Fed that finds itself with 9% inflation and expected inflation unanchored needs to bring inflation much lower. So it needs to raise interest rates in the short run much higher than the natural rate of interest. It needs to generate a negative inflation gap here, so you can bring the 9% back to 2%, no? So I have to generate the minus 7% here. And to generate the minus 7% here, I need to bring output much below the natural rate of output. I need to cause a big recession to do that. And that's the reason central banks don't want to be in this scenario: because with this level of inflation, if expected inflation is unanchored, then there is no way around it. The Fed will have to cause a big recession to get out of the inflationary problem. Contrast that with a case in which expected inflation is not equal to lagged inflation but is equal to whatever the Fed tells the market is the long-run average, 2%. So now, suppose that, rather than having here pi t minus 1, I have that target, pi bar, which is 2%. So yeah, we got to 9%. The Fed would say, oops, I messed up. Clearly, I set a real interest rate that was way too low, and so I ended up with 9% inflation. But if credibility is maintained, and people still expect 2% in the medium run, then the Fed doesn't need to cause a recession to bring inflation back to the normal level. It just needs to bring output to a level equal to potential output. So it just needs to raise the interest rate to rn, to the r star-- not to r star plus something in order to have disinflation in the short run, OK? And we're there at this moment, on the verge of these two worlds. We have been alternating between the two worlds, still more biased towards the good world, in which the Fed doesn't need to cause a recession-- the Fed needs to slow down the economy, because it still needs to bring output down to Yn. But that's a small change. In practice, all these things are growing over time. It just means that the economy grows at a lower pace for a few quarters, OK? But that's very different from having to bring output temporarily down here, because for that, growth has to become negative for some period of time in order to bring inflation down. Good.
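A minimal sketch of the experiment just described, in Python. The Phillips curve is the one derived above, pi = pi^e + (alpha/L) x gap; the slope and the gap numbers are assumptions chosen only to make the contrast visible, not estimates.

a = 0.5           # slope (alpha/L), assumed
target = 2.0      # Fed's inflation target, percent
pi0 = 9.0         # starting inflation, percent

def simulate(anchored, gap, periods=6):
    """Path of inflation for a constant output gap (percent of potential)."""
    pi = pi0
    path = [pi]
    for _ in range(periods):
        pi_e = target if anchored else pi   # expectations rule
        pi = pi_e + a * gap
        path.append(pi)
    return path

# Anchored expectations: closing the gap (Y = Yn) brings inflation straight back to 2%.
print("anchored,   gap  0:", simulate(True, 0.0))
# Unanchored: Y = Yn just freezes inflation at 9%...
print("unanchored, gap  0:", simulate(False, 0.0))
# ...so the Fed must engineer a negative gap (a recession) to grind it down.
print("unanchored, gap -2:", simulate(False, -2.0))

Running it, the anchored path jumps from 9 straight to 2; the unanchored path with a closed gap stays at 9 forever; and the unanchored path with a -2 gap falls by one point a year: 9, 8, 7, 6, 5, 4, 3. That is exactly the "overshoot or stay stuck" choice in the lecture.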
So the big lessons from this part: as I said before, I haven't changed either of the two models. I told you what the model of the short run was, the IS-LM. That's still true here. I told you then what the model of the natural rate of unemployment was, and that there we didn't have any monetary policy or anything like that. We looked at what happened in the labor market, and we determined the natural rate of unemployment. And that was it, OK? So the medium run here is when we are in that world, which has nothing to do with monetary policy. It has everything to do with real variables, OK? What is the equilibrium long-run real interest rate, what is the natural rate of unemployment, and things of that kind. But monetary policy does certainly determine, in the short run, equilibrium output. In the medium run, though, it determines the equilibrium nominal interest rate and the level of inflation, because the economy will have a real interest rate, which is the r star, the rn. The economy has rn. But the Fed will not get to pick what rn is. The only thing the Fed gets to pick in the medium run is the nominal interest rate that is consistent with that rn. Because suppose that rn is, say, 2%. If the economy ends up having 3% inflation on average, that means that the nominal interest rate for the long run is going to have to be 5%. If instead that economy has 2% inflation on average, then the long-run nominal interest rate will be 4%. So monetary policy affects the nominal interest rate, nominal variables, in the medium run, but not the real variables. The real variables are determined by the real sector. And that's often referred to as the neutrality of money. In the medium run and the long run, money tends to be neutral. That's what it means: the real variables are determined by something entirely different. But in the short run, monetary policy is the main game in town. And in the medium run, it's just about inflation. It's not about real activity. Let me stop here.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_17_Introduction_to_Open_Economy.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: So remember what has been happening to the US economy. And it happens similarly, with a few lags and leads and differences in size, in most economies around the world: as the economy began to reopen from COVID, we had a situation where we had too much demand for the supply. Potential output, using the terminology you have in the IS-LM-PC model, was slow in picking up, because we still had lots of bottlenecks in the supply chains, and in different sectors of the economy some people didn't want to come back to work, to the labor force, for a while, and so on. So the supply side was still very impaired. Not as impaired as in the middle of COVID in 2020, say, but still impaired, while demand was very strong, because people were fed up with staying at home. They had saved a lot during the COVID recession, and they wanted to spend, OK? And there had been lots of fiscal support and monetary policy support and so on, so people felt wealthy, felt rich, and so they wanted to spend. There is big demand, but supply is not there. That starts introducing inflationary pressures. We had a positive output gap, output above potential output. That puts immediate inflationary pressure. That's what we learned from the Phillips curve and all that, OK? And now, for a while, the Fed did not want to react to this, because they thought that this was going to be mostly a temporary phenomenon: that supply would recover pretty rapidly, and that people, after taking one trip, well, wouldn't want to take a second one. Fiscal support was winding down and so on, so they thought this would go away, and they didn't want to cool off the economy, because they didn't want to fight a temporary fight. Now, as a result of new shocks-- the war and things like that-- but also because the initial call was not right, was incorrect, because there was a lot of inertia in demand and the pick-up in demand lasted a lot longer than they expected, inflation really began to rise, and the Fed was caught what is called behind the curve. They should have started earlier-- at least ex post, it's easy to see it that way. Remember what you have. If you have a situation where output is above potential output, the interest rate is supposed to be rising. So the Fed is supposed to be increasing the interest rate, but they didn't for a while, assuming that the forces that were bringing potential output down and demand up were transitory. When they discovered that wasn't the case, they had to start catching up. As a result of that, they began to hike interest rates very, very rapidly, OK? Unusually rapidly for an economy like the US. There are many reasons why policymakers, especially monetary policymakers, prefer to be gradualist, meaning to move things in small steps rather than in one big bang, especially on the way up. To cut rates, they're very willing to be very aggressive. But raising rates is something they tend to be reluctant to do very, very rapidly. And one of the main reasons they are reluctant to do that very rapidly is because something may break in the process, and there are certain things that are very important if they break. There are certain other things that are not very important if they break. But one of the things that is very important if it breaks is banks, OK? And that's typically where you run into trouble when there are episodes of very fast hikes in rates.
Now, because the US banking sector, especially the large banks, was very resilient-- they had lots of capital and so on-- there wasn't a lot of concern that that would be an issue in the US, because, again, the big banks looked very healthy. Deposits were flowing out of big banks, but it was all happening at a normal pace. It's normal that deposits go out of the banking sector when interest rates start to rise. But there was no sign of any trouble. Well, that changed a month ago, as you well know, and something finally broke. The major episode there was Silicon Valley Bank. We saw a big bank run there, and eventually that bank collapsed. And since then, things have looked a little more complicated. So here you have, for example, commercial bank deposits. As I said-- focus on the red line-- as the Fed began to hike rates, people began to move their money out of deposits into US treasuries, money market funds, things of that kind. But that was normal, OK? And that didn't lead to a big cut in lending, which is what you worry about when banks lose funding. But things began to change quite rapidly in the second half of 2022. This was felt mostly by large banks. And when deposits decline gradually in large banks, that's not such a big issue, because deposits are not the only funding source that big banks have. They have many other sources of funding. But then this is what happened this year. Things accelerated very, very quickly this year. And so this is different. This is a different animal from that very controlled decline in deposits as interest rates were being hiked. This is a very sharp decline in deposits, OK? So that's a problem, because that unavoidably will hit lending, OK? And that's going to show up in terms of the models we have had. If you were to use your IS-LM-PC model, it would show up as an increase in x. Remember, we had the risk premium and stuff like that. Well, that's probably the way you could model what is happening right now. Now, as I said before, these are on different scales: small bank deposits on the left, large bank deposits on the right. Through most of this episode in which deposits were declining in the banking sector, it was mostly a phenomenon that affected large banks, and it was very gradual, OK. But what happened a month ago is essentially that small and medium-sized banks experienced a very large run on deposits. Part of that went to money market funds and US treasuries, and part of it-- you don't see it because of the different scales and so on-- went actually to large banks. It was a relocation. That's called flight to quality, OK? Now, even if this had been a full relocation of deposits from small banks to large banks, so that no deposits had declined in the banking sector as a whole, it would still have consequences for the economy, because these types of banks don't lend to the same type of people. Small banks, especially regional banks, lend a lot to businesses and people that do not have other sources of funding. They tend to be small businesses and so on. The only way they can borrow is either from the family or from a bank. They cannot issue bonds and things like that, no? And so they don't have other sources of funding.
And so that's a problem, because what you see here is that, naturally, when you start seeing deposits going out-- and there have also been losses experienced by the banks, because they had to recognize the losses on the asset side once they lost deposits-- they had to cut lending. And you can see here what has been happening to lending. Large banks began to slow down here, but it's a gradual slowdown. But you have seen a very sharp decline in lending over the last three or four weeks, OK? So we're in the early phases of what is described as a credit crunch, OK? Now that's a problem, as I said before, because those banks, the banks in gray here, are the banks that lend primarily to small and medium-sized businesses. So you see here the share of commercial and industrial loans by bank size. And this is loans to small businesses, loans to larger businesses. The banks in gray are those below $250 billion, and you see that they have a large share of loans to small businesses, OK. So those sectors are going to suffer a lot. So there's going to be a contraction in the economy, and it's going to be very concentrated on small businesses, medium-sized businesses, and so on. And again, a contraction in lending to those businesses is more problematic, because they don't have alternative sources of funding. That's it, OK? It's either retained earnings, the family-- which is really, really small-- or banks. Again, big corporations, these guys, are probably borrowing from 10 different banks, and they have issued corporate bonds, and they even have commercial paper. It's an entirely different life when you live here and when you live here. If you look by sectors, that also is going to have implications. This is going to be clearly contractionary, but it's not going to be equally contractionary. It will depend on the composition of your sectors. Not all sectors have the same share of small and medium-sized businesses. You see here the construction sector has a very large share. More than half of the businesses in construction are really small or medium-sized businesses, OK? Big contrast with utilities, where they are all big businesses and so on, OK. So big dispersion, and so this is going to be a contraction that is felt very strongly here at the top. So that's what is happening right now. And from the point of view of the aggregate, essentially where we have gone is from a path like this to a path like that. So the economy was overheating, so we needed to slow down the economy. There's no way around that. Output was above potential output. That was causing inflationary pressure. We needed to bring this stuff down, and the economy was slowing down. But it was happening at a very slow pace. And that was a bit exasperating for the Fed, but it was happening very, very slowly-- among other things, because the balance sheets of the household sector and of corporations in general were very healthy. But now things are accelerating very quickly, because once you introduce a credit crunch, credit constraints, and things like that, the same declines in wealth that we were experiencing before, that were driving aggregate demand down, have a much larger effect. And so we're changing from a world that looked like that to a world that's going to look a lot more like this. And that's tricky for the central bank, because before, there was no way around it: they had to hike interest rates, because the main problem was inflation.
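Before going on, a back-of-the-envelope sketch of the "credit crunch as a rise in x" point from a moment ago, using a linear version of the expanded IS. Every parameter value below is an assumption picked only to make the comparative statics visible, not an estimate of anything.

def equilibrium_output(r, x, c0=2.0, c1=0.6, T=1.0, I0=2.0, b=10.0, G=2.0):
    """Solve the linear IS, Y = c0 + c1*(Y - T) + I0 - b*(r + x) + G, for Y."""
    return (c0 - c1 * T + I0 + G - b * (r + x)) / (1 - c1)

def natural_rate(Yn, x, c0=2.0, c1=0.6, T=1.0, I0=2.0, b=10.0, G=2.0):
    """Invert the same IS at Y = Yn: the r that makes potential output the equilibrium."""
    return (c0 - c1 * T + I0 + G - (1 - c1) * Yn) / b - x

Yn = equilibrium_output(0.02, 0.01)     # pre-crunch output; treat it as potential
print(Yn)                               # 12.75
print(equilibrium_output(0.02, 0.03))   # x jumps by 2pp at an unchanged policy rate: Y falls to 12.25
print(natural_rate(Yn, 0.03))           # rn falls from 2% to 0%: the crunch itself does some tightening

The last line is the mechanical version of the story here: a higher x lowers the real policy rate consistent with potential output, which is one way to read why the Fed scaled back its planned hikes rather than simply plowing ahead.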
Now they know that, as they hike interest rates-- and they still need to hike interest rates, I think, unless there is a big mess, because inflation is still way above their target-- they have to be very worried that this stuff doesn't become too steep. Things get to be very nonlinear when the financial sector is involved. And so probably that means that, on net-- well, they already did it. Before the previous meeting, everyone anticipated 50 basis points of hikes. Once SVB happened, the bets went down dramatically, and the realization went down to 25 basis points. And so that's where we are now. Now, if everything works as planned-- and more or less that's the forecast at this moment; there isn't any panic and so on-- we're going to experience, not technically a recession, but a significant slowdown in the next quarter or so, OK? That's sort of the consensus of what to expect as a result of all these combined forces: a Fed that still needs to tighten for aggregate demand reasons, and the very negative effect of the credit crunch, especially in certain parts of the economy, OK? If everything works as planned, meaning things continue to go fairly smoothly, then it's certainly not going to be a good year for economic activity, but it shouldn't be a disaster either. That's sort of where we are at. And the tension is, well, if the financial crisis leaks into the larger banks, then these numbers are going to get a lot worse, of course. But that's the concern at the moment. It doesn't seem like the central scenario. Now, there is also some good news happening, because remember that the problem we have comes from two sides. One side is too much demand, and that the Fed can affect very quickly. That's what it's doing by hiking interest rates. Not as quickly as they would like, but still, they have an impact on that. But the other problem was aggregate supply. And in particular, the labor market looked very tight. And remember, when we did the Phillips curve and all that, the labor market is very critical in the whole process of generating inflation and so on. That's the reason you try to generate more unemployment: essentially, to lower wage pressure, because that means less pressure on prices and so on and so forth. And one of the problems is that, again, it's very nice in principle that we have a very low unemployment rate, but it's very difficult to lower wage pressure if unemployment is so low. Having said this-- and this is the part that I say is fairly good news-- in the model, we simplified things, and we just put unemployment as the only variable that could adjust and that was important for wages, aside from some institutional things. In practice, there are many other indicators. And what really matters is employment. What happened is that we took a fixed labor force, we took a fixed participation rate, and that's the reason unemployment was the variable that summarized everything. But what really puts pressure on the labor market is a shortage of workers. If you have an unemployment rate that is constant but there are lots of workers coming into the labor force, then that doesn't put as much pressure on wages, and therefore less pressure on inflation. And this is exactly what we're beginning to see in the US economy, and I think that's very good news. It gives us hope that this inflationary process may come down a little faster. So this is labor market participation. Remember?
So this unemployment number spiked a lot, but it underrepresented how much contraction there was in labor, in employment, and so on, because many people simply exited the labor force, and those, remember, we don't count as unemployed, OK? And so that happened. There was a very sharp decline in labor force participation, so a decline in the labor force. And that recovery is one of the things that happened much more slowly than the Fed anticipated. That was part of the mistake: they thought that this was going to come back quickly, and therefore that potential output was going to rise very quickly, and it didn't. That was one of the forecast mistakes. But now it's clear that it's coming back to levels that are more consistent with historical levels. If you look at the employment-to-population ratio, also a big decline, for similar reasons. But if you look at the employment-to-population ratio today, it's clearly getting back to the trend it had before, OK. So that's very good news in the sense that, even if unemployment doesn't move a lot, this reduces pressure-- well, it's good for output; it expands supply and so on. But it also lowers inflationary pressures in the economy. And another component, actually, that I think I mentioned a couple of times in lecture is immigration. In a market like the US, a lot of the growth of the labor force comes from immigration, OK? And that stopped for a while for a variety of reasons, but certainly COVID had a big effect. That meant about 500,000 fewer people a year coming into the US labor force, and that has big impacts, especially in some sectors of the economy, OK? And that's clearly been fixed now, OK? We may have other problems-- people may fight about it for political reasons and these things and whatever-- but from the point of view of macro, this is certainly helping, OK? In fact, if you look at where the wage pressure is really coming from in the US economy, it's coming from those sectors where immigrants play a big role. Accommodation, food services, stuff like that-- you see that their wages rose pretty dramatically, because there was a massive shortage there, for two reasons. One, people didn't want to go back to those sectors to work-- close contact and stuff like that. And the other one is that the immigrant flow of workers into them had slowed down quite dramatically, OK? So that's where we're at. And as a result of all these good forces, despite the fact that we have very low unemployment, you can see that wage pressure is beginning to decline in the US, OK? It was very high there, but those numbers are very distorted by composition effects and stuff like that. Still, these were the numbers that were very worrisome. I mean, with wage inflation of 6%, it's going to be very difficult to bring inflation down to 2%. You need much lower wage inflation. But you see that it is beginning to decline. It is still too high for a steady state if you want to go back to a 2% inflation rate, because you can have an increase in real wages, but that has to be more or less aligned with the rate of growth of productivity, which is much lower than that. I mean, at best it's 1%, 1.5% sometimes. So a 2% inflation target plus 1% to 1.5% productivity growth says you could live with roughly 3% to 3.5% nominal wage growth, but 4%, 4.5% is a bit too much. OK. Anyway, so that's the state of the economy. Any questions about the state of the economy? Open question. No? No? You're happy with it? Doing fine? OK. Good. So what I want to do next-- well, this was a summary, and everything I did here was closed-economy economics.
In all my description, I didn't need to tell you what was happening in the rest of the world and so on. That would have been a lot harder to do if this was Singapore, for example, because Singapore depends a lot on the rest of the world, and it's very difficult to tell a story that just depends on what happens in Singapore. The US is pretty unique in that you can tell most of the story based on what happens in the US. Most. Not all, but most of the story. And that's what I just did. Almost anywhere else, you need to think about the rest of the world-- even if you're in Japan, a big economy and so on, FX will play a big role. And so when you describe the state of the economy, you're going to be talking about the yen-- very high, very low, or not. Here you don't worry much about the dollar. The US is very unique in that. I think no other country in the world has that feature, OK? So now we're going to open up the economy, and again, perhaps it's least important for the US, but anywhere else it's tremendously important. And it has been becoming increasingly important for the US as well. Now we're in the middle of a deglobalization mess. We shall see where we end up. So there has been a bit of a reversion of a very strong trend towards integrating the economies of the world through many different channels. And so I want to talk about what the key variables are when you integrate an economy with the rest of the world and things of that kind. I'm going to start with some definitions. What are the variables that we're going to be talking about that we weren't talking about up to now? So one of the things that's going to be very important in an open economy is the exchange rate, OK? And here I'm giving you-- I'm going to define things very formally later on, but here you see an example of the US dollar vis-a-vis the main trading partners. The US trades with many different parts of the world, and there are bilateral FX rates with each of them. And this measure here weights the different FX rates by the amount of trade that we have with the different economies of the world and says, well, is the dollar strong, weak, or whatever. This is a matter of convention. There are two ways in which you can do it, but we're going to do it the following way. What this FX rate will reflect is the price of the domestic currency in foreign currency terms. So that means, when this goes up, the dollar was becoming very expensive, OK? And that's a very sharp-- I'll get back to it, but we call that an appreciation of the currency, OK? So here the dollar was becoming very expensive in terms of other currencies. The opposite happens starting in the second half of 2022. Now we have had some cycles here during this year. That's going to be a very important variable, the FX. We call it the FX, the exchange rate. Another variable that we haven't talked about, but that is going to be important here-- and politicians argue a lot about it, with the wrong arguments, but they do-- is the trade balance of goods and services. The trade balance of goods and services is simply the difference between the exports, what a country sells to the rest of the world, and its imports, that is, how much it buys from the rest of the world, OK? So this is monthly data for the US. The US nowadays runs a trade balance deficit of the order of $70 billion a month.
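In symbols, with X for exports and IM for imports (the standard textbook notation, NX for net exports):

NX = X - IM

so a trade deficit is just NX < 0. At roughly minus $70 billion a month, that is a deficit on the order of $840 billion a year.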
That means it exports to the rest of the world about $70 billion less than it imports, OK. And you can see that this is pretty sustained, actually. Here it looks pretty balanced, but the US, on average, has a situation like that, OK? The US tends to export less than it imports, and that's when politicians get all worked up. They say this is unfair competition from the rest of the world and so on and so forth. I think, in general, it has very little to do with that. It has to do with the fact that the US likes to save less than the rest of the world. But let me not get into that until much later in the course. But anyway, that's the situation for the US. This is obviously a blip that has to do with COVID and so on, but you see that we're now more or less back to where we were before. This was a period, actually, of lots of global political tensions, because the deficit had one counterpart that was very big: China, OK? At the time of the global imbalances, the US was running very large trade deficits, and China in particular was running very large trade surpluses, and so there were lots of political quarrels because of that. So obviously the US has many, many trading partners, and with respect to many of them, it has very large deficits on net. And here are the main ones-- I ranked the 10 main deficits for the US. I don't know exactly when this was, but anyway, it has looked like this for the last eight or so years. Remove COVID, and it looks more or less like this all the time. And you see that the US indeed exports a lot to China, about $153 billion a year, and it imports about $536 billion a year. So the net deficit is around $380 billion a year (536 minus 153). That's the reason this is a big political thing, OK? It sounds like a big deficit. But there are other countries. With respect to Mexico, $130 billion, and so on. The US exports a lot more to Mexico. Vietnam-- a lot of what comes from Vietnam is really Chinese exports in disguise. But there you see it, OK? There are many others. Germany is always a problem-- a problem for somebody who sees deficits as a problem, and so on. OK. Now, that's on the goods side. What I was saying here is that one sense of openness is in the goods and services market. So you buy goods from the rest of the world, and the rest of the world buys goods from you. Sometimes these things are balanced, sometimes they are not. When, at the aggregate level, they are not balanced for a very long period of time, unless you're the US, that often causes problems. But at the bilateral level, if you look at any country, you will have situations like this: some countries that you export a lot to and import very little from, and vice versa. But that's the way the US looks. Another sense of openness, which is very important, is financial openness-- meaning that you can also buy or save using foreign assets or domestic assets. So you can buy foreign assets-- perhaps not you directly, or most of you, but you can do it through a broker and so on. You can buy foreign assets, and foreigners buy lots of US assets, OK? That's a sense of openness in financial markets: you're not stuck with your country's financial assets. You can also invest in other countries' financial assets, and vice versa.
Those things-- I mean, you probably know trade openness a lot better than financial openness, because you're all the time buying imported goods and stuff like that, while you're probably not involved in a lot of transactions in international financial instruments and so on. But they are very large, OK? Pension funds and so on, they're all involved in very large transactions. In fact, transactions in financial markets are an order of magnitude larger than transactions in the goods market. Very large. And here you have an example, by origin, of the foreign holdings of US assets. And this is in billions. So these guys here-- China and Canada have the largest, and the UK-- those hold on the order of $2 trillion of US assets. Japan should be here also. Yeah, there it is. Now, in these two countries here, a lot of those holdings actually come from the official sector-- particularly in China, somewhat less so in Japan. Japan, they save a lot, so they need to buy lots of financial assets, and the US is the main producer of financial assets in the world. But also the central bank, because of currency intervention and so on, buys lots of US treasuries. China-- this is mostly the central bank buying US treasuries, foreign reserves, large amounts of reserves. Canada is the private sector, pension funds and so on, probably. But you see, Brazilians also buy lots of assets from the US. Again, central banks play big roles in all this. Australia is less the central bank, much more the private sector. The UK is all private sector, and so on. And in Europe, OK. But the point of that picture is that there are lots of countries in the world, or residents of different countries in the world, that buy US financial assets, and in big amounts, OK? I don't know what the total number today is. Probably-- eh, actually, let me not make up that number. You may find it. But it is certainly on the order of $20 trillion or something like that of US assets held by foreigners. You can check it and tell me. It's probably more. Here's the other way around: US residents' holdings of foreign assets. You see, US residents hold lots of Canadian assets, and mostly developed-economy assets, but there you have India, China, and so on. Lots of Latin American assets and so on, OK. So it's not only that the rest of the world demands US assets; US residents-- perhaps not directly, most of you, but indirectly-- demand lots of foreign assets. OK. There are some very fascinating facts here, because the type of assets that foreigners tend to demand from the US is very different from the ones the US demands from the rest of the world. In fact, one of the reasons the US can afford to run those chronic trade deficits is that it tends to get much higher returns on the assets it buys abroad than foreigners get on the assets they buy in the US. The difference allows you to fund systematic trade deficits. And the reason for that is that a lot of the US assets that foreigners buy, they buy for safety reasons. They are buying US treasuries, very safe instruments. Just in case there is a big mess, they want to have those assets, so it's almost for insurance reasons. US treasuries are perceived as the main safe asset in the world. So they hold them for that reason. The US, meanwhile, mostly holds assets abroad as risky investments: either foreign direct investment or equity, stuff like that.
And typically, most of the fixed-income investments-- a lot of what you see here is just US residents reaching for yield. Brazilian sovereign bonds, the equivalent of US treasuries, give you 9%, 10%. It's a lot higher than what the US tends to give. 14% even now, and so on. So, on net, the US has fewer assets in the rest of the world than foreigners have in the US, and it still makes a sufficiently higher return, on average, on the assets it holds in the rest of the world that it has a surplus with which it can finance the trade deficit. This is complicated. You don't need to know the details; it's just to tell you the kind of things that are happening. So there are three senses in which you can have openness. The two I have described implicitly already: goods-- in the goods market, you can buy foreign goods and you can sell goods to the rest of the world-- and the second one, financial markets, which is what I just described. You can choose between domestic and foreign assets. The main impediments to the former are typically tariffs and quotas, and you have heard a lot about tariffs and so on these days. The main impediments in financial markets are what are called capital controls. Very rarely do developed economies impose capital controls, but emerging markets do it regularly, OK? They limit especially capital outflows. When lots of capital is leaving the country, they try to stop you from doing more of that. And they sometimes do it on the way in, because they want to avoid the macro instability that comes from a big reversal of capital flows, so they don't let lots of capital flow in during the boom, just to prevent a reversal later on. Those are capital controls. And there is a third way of opening to the rest of the world, which is factor markets, OK? Which is that firms can choose location. I mean, we see Japanese plants here-- they do not export all the Toyotas directly from Japan. They have the plant here, either in Canada or somewhere in the US, and they sell locally. So that's relocation of factors of production: Japanese capital that relocates to some place in Canada or in the US. And labor: you can also have workers that move from one place to the other. That's another form of openness, free factor mobility. In this part of the course, we're not going to talk about this, OK? We're going to focus on the first two. The first model we're going to look at is going to be a model of the goods market, and that's going to be very much like the IS-LM but with an open economy. And then we're going to bring in interest rates. But now-- remember what we did in the closed economy first? We looked at the goods market, then we looked at interest rate determination. That was our LM. And then we put the things together and came up with IS-LM. Here the tricky thing is that there is not only one interest rate. There are really two. You have to decide between the domestic and the foreign interest rate, and then there is an exchange rate in between, which will also matter. That's the reason it's going to get a little more complicated: you're going to have two different prices, two different goods you can buy. You're also going to have two different assets you can buy. And the FX is going to affect all those things. Whenever I say FX, I mean the exchange rate, OK? Foreign exchange. That's the reason it's sometimes called FX.
Now, one thing that happens, because economies are so integrated in goods and financial markets, is that business cycles-- especially large ones-- tend to be very synchronized around the world. You see here emerging markets-- sorry, yeah, this is the world, that's emerging markets, and that's advanced economies. You can see the kind of things we discussed in the growth section, which is that these countries tend to grow faster than those countries because they're catching up. That's the reason they're called emerging. But at the level of the business cycle, they're very synchronized. I mean, there's 2008, 2009. The recession was a recession globally. This one doesn't show COVID, but, well, COVID very naturally was very synchronized around the world. The point I'm making here is that once you're very integrated with the rest of the world, you're also exposed to a new source of shocks. It may help you in many instances, but you're also exposed to things that come from the rest of the world. And the evidence is that business cycles are very synchronized. It's very difficult for the rest of the world to be immune to a US recession, for example. It's very easy for the US to be immune to an Argentinian recession. That's a different story. But when things are large, when they involve the large economies, typically that will leak into the rest of the world. China has changed the composition a little bit. It used to be the case that if the US sank, then everyone sank. And now you have China, which stabilizes-- it's different. It's not completely correlated with the US, and so that has been stabilizing, actually, for many economies, especially commodity-producing economies and so on. But it's still the case: a big mess is a big mess everywhere. If the economies were closed, there would be no reason for this, unless you had some exogenous common shock. COVID, even if the economies had been completely closed, would have been a mess, because as long as the virus spreads, it's a mess regardless. But that's not the case here. This recession was caused by a financial shock in the US, OK? And still, the global economy as a whole suffered a lot. So things become very synchronized because of that. Another thing that has been happening is that everywhere-- again, we shall see where we end up after this COVID thing; there was a slight reversion of the trend-- but everywhere, even the US, which is one of the most closed of the significant economies, there was a steady trend towards higher integration with the rest of the world. And the same is happening in financial markets-- there it's just an order of magnitude larger. But you see here, in the US, imports and exports as a share of GDP were both rising over time. This here shows you what I showed you before, which is the chronic deficit that the US has had. But still, even exports have been rising for a while. Now, these numbers here-- the US has about 15% of GDP in imports and in exports. Some people use the sum of imports plus exports over GDP-- (X + IM)/GDP-- as a measure of openness, of how open an economy is. And it's OK for comparisons, but it clearly underestimates how open economies really are. I mean, many of the goods that the US does not import are produced domestically, so they don't count as part of imports or anything, but their price is really determined by international competition, OK?
Indeed, the price of a Ford is very different with foreign competition than without it. So we don't count the Fords produced here as imports or anything; they weren't part of this measure of openness. But it's clear that the price and even the quality are being affected by exposure to international competition. So there are very few sectors that are not really exposed to international competition. Yeah, haircuts. They're not. I mean, you're not going to-- unless you live in some county at the border of Canada and there is a town right on the other side, that's not going to happen. OK. So the trend is upward, and even more so than those numbers suggest. If you look across the world, here is what I said before: the US actually looks very closed relative to others. If you look across large economies, Japan, which is also a very closed economy for a variety of reasons, is still more open than the US; the UK; little Chile here. One thing that this dimension here shows is that the smaller you are, the more open you're likely to be. And it makes sense: it's harder to produce all the goods if you have a small country. So that's a pattern. The smaller you are, controlling for a variety of factors, the more open you tend to be. You need to import and export more-- import more, in particular. Now, that pattern is disrupted when you look in this direction, no? All these countries, which are clearly much larger than Chile in terms of GDP and so on, have very high export ratios. Why do you think that's the case? Yeah? AUDIENCE: Europe. RICARDO CABALLERO: Exactly. They are in Europe. Europe is very special. Europe as a whole is about as closed as the US, if you look at the whole area together. But there is lots of intra-Europe exporting and importing, especially in the Eurozone. I mean, you have lots of countries that have the same currency, and it's right next door. So they're very open, but they're very open within Europe, not so much with the rest of the world. And this measure doesn't differentiate intra-Europe exports and imports from the total. That's the reason you see these numbers are very, very large. But even here, within Europe, you see that the smaller countries tend to be much more open than the bigger countries. OK? Good. So, as I said, some terminology. This picture looks very similar to the picture I showed you earlier on, but it's not. There it was the trade-weighted dollar, and here it's just one particular bilateral exchange rate, the dollar-yen exchange rate. What happens is that when the dollar appreciates, typically it appreciates against everything, and so on-- that's the reason the pictures look very similar. But this was pretty dramatic: there was a very sharp appreciation of the dollar. So anyway, this is the pattern of the number of yen per dollar. Meaning, we decided that's the way we're going to define the exchange rate: the price of the domestic currency in terms of foreign currency. So the price of the US dollar goes up when they pay you more yen per dollar. And it went up very rapidly, from close to 100 to 150. That was massive. There were massive interventions here, because it was clear that this was getting totally out of hand. But anyway. And we say in this case-- so when a currency gains in value, it gains in value in terms of other currencies; there's no sense of a currency gaining value in terms of nothing. It has to be in terms of another currency.
That's what an exchange rate is. So we say that the currency is appreciating. OK? So in this case, it's a nominal exchange rate-- yen per dollar. When the dollar is going up here, we say the dollar is appreciating relative to the Japanese yen. OK? That means the dollar is becoming more expensive: they have to give you more yen per dollar. You can look at it from the point of view of Japan, and then the picture would look the other way around: in this same picture, the Japanese yen is depreciating relative to the US dollar. OK? So depreciation is when your currency loses value; appreciation is when your currency gains value. I mean, unless you more or less know the prices in the different places, if I tell you that the US dollar buys 130 Japanese yen, and then I ask you, where would you like to buy your car-- I assume the car is the same quality, no transport costs and so on-- and I tell you, look, for each dollar you get 120, 130 Japanese yen. Where do you want to buy your car? The right answer is: you have no clue. That doesn't tell you anything. The nominal exchange rate tells you the relative value of currencies, but to make the decision of where to buy your car, you need to know which car is more expensive. Knowing the nominal exchange rate is not enough. You need to know what the price of the car is in each place, in its own currency, and then I'm going to use the exchange rate to translate them into some common currency. Suppose I do the opposite experiment. I tell you that the price of the same car in Japan is 150,000 yen-- well, in reality it would have lots more zeros, but say 150,000 yen-- and in the US it's $1,500. This is a used car, $1,500. And then I ask you the question: where do you want to buy your car? You can't answer either, because you don't know how to compare. Unless I give you the nominal exchange rate as well, you don't know how to compare those 150,000 yen versus the $1,500. So you need both. And the concept that captures both is what is called the real exchange rate. The real exchange rate is designed for that, to capture where you're going to do your imports and exports. OK? And it's a relative price not of two currencies but of two goods. So that's what the real exchange rate is. It's the price of domestic goods relative to foreign goods. How many Japanese cars do you give me for one US car? That's the real exchange rate, which is different from the nominal exchange rate. Now, it's related to the nominal exchange rate. It happens that, in practice, a lot of the volatility of that price is a result of the volatility of the nominal exchange rate. Let me end with this, and then I will stop. So let me call epsilon the real exchange rate. Let me show how we're going to get to that expression there. What we want to compare here is the relative value of two goods, because that's what will matter for my consumption decision, my purchase decision. So suppose we're talking about this car, and suppose I know that the price of the car in the US is P. And now I want to compare: do I buy it here or do I buy it abroad? For that, I'm going to have to compare them in the same currency. So the first thing I'm going to do is translate my US dollar price into the foreign currency price. OK? Let me use simple numbers. Say the price of the car in the US is $10,000.
Suppose that for each dollar you get 10 yen. Then $10,000 times 10 means 100,000 yen. So if I buy the car in the US, I pay the equivalent of 100,000 yen. And so now-- ah, sorry, this was not Japan in this example. It's the UK. Well, the same story, and it's even simpler. Say the car in the US costs $10,000. Each dollar buys 0.80 pounds, so that means 8,000 pounds. So the US car costs 8,000 pounds. Now compare it with the same car in the UK, and the ratio of these two things is what we call the real exchange rate. So it's the price of the domestic good times the nominal exchange rate, divided by the foreign price-- and that's the real exchange rate. If the price of the car in the UK was 9,000 pounds and I could buy the US car for the equivalent of 8,000 pounds, I would probably buy it in the US. In practice there are differences between the goods in the two countries, but that's the kind of thing the real exchange rate measures: the relative price of goods. OK? And the way to get there is you have to multiply the price by the nominal exchange rate, because that converts it into the same currency as the foreign price, and then you can compare them. That's the idea. So let's stop here. We're going to continue talking about the open economy next week. On Wednesday, what I'm going to do is a review of IS-LM-PC and growth.
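To put this lecture's closing example in runnable form-- a minimal sketch using the hypothetical car prices above:

```python
# Real exchange rate: epsilon = E * P / P_star, where E is the nominal
# exchange rate (foreign currency per unit of domestic currency), P the
# domestic price, and P_star the foreign price in foreign currency.

def real_exchange_rate(E, P, P_star):
    """Price of the domestic good in units of the foreign good."""
    return E * P / P_star

E = 0.80        # pounds per dollar (hypothetical)
P = 10_000      # US car price, dollars
P_star = 9_000  # UK car price, pounds

print(f"US car in pounds: {E * P:,.0f}")                              # 8,000
print(f"Real exchange rate: {real_exchange_rate(E, P, P_star):.3f}")  # ~0.889
# epsilon < 1 here: the US car is cheaper than the UK car, so you would
# rather buy in the US.
```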
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_8_The_Labor_Market.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: So today we're going to start looking into the labor market. Now, the labor market is very interesting for a wide variety of reasons that we will not discuss in this course, because this is not a course about labor economics; it's about macroeconomics. But there are at least two reasons why labor markets are very important in macro. One is that things like the unemployment rate are a very important indicator of the macroeconomic health of a country or an economy. And the second one, which is quite relevant these days, is that one of the main drivers of the inflation rate is what is going on in the labor market. And we will try to understand this mechanism in the next couple of lectures. What you have there is the inflation rate in the US. I've shown you this picture several times: after going through a long period in which the inflation rate hovered around 2%, with cycles, we are experiencing an episode of very high inflation. Things are coming down, but they are still at extremely high levels, 6% or so. And actually, very recently these numbers have picked up again a little. So that's a very high inflation rate-- way too high for an economy like the US to feel comfortable with. And whenever some member of the FOMC comes out and explains why interest rates are so high at this moment, and why they are likely to remain high for quite a while, they say, well, look, inflation is at uncomfortably high levels and labor market conditions are very tight, and that suggests the inflation problem is not likely to go away in the near future. So that's something we need to understand in macro: why is it that the labor market being tight says anything about the inflation rate, for example? And that's the kind of thing we're going to discuss, in particular in the Monday lecture. Now, today we're going to start with more basics of the labor market, and at the same time we're going to begin a transition in the course, from things that happen in the very short run to things that take more time. Because many of the things we're going to discuss today are things you're not likely to see in every single quarter, but you are likely to see them in averages over several quarters, several months. That's what we're going to look at today. So let me just recap a little bit what we have been doing up to now. We have been looking at this IS-LM model, which is a great model to build on. It's a very nice starting point to understand what happens during a recession and what the likely impacts of the different macroeconomic policies are-- monetary policy, fiscal policy, and so on. It is not such a great model once the aggregate supply side of the economy, something we have completely ignored, starts becoming binding. Remember that up to now in the IS-LM model we had basically two related assumptions. One, prices were fully sticky; they didn't move at all. Second, output was aggregate demand determined: whatever aggregate demand wanted, producers found a way to produce at some given price. That combination is unlikely to hold when, for example, firms are having trouble finding new workers. There may be more demand for their goods, but the firm may find it hard to expand production.
And it is also highly unlikely that in a situation like that, firms are going to want to keep prices constant. At some point they will say, look, there's lots of demand for meals at my restaurant, and I cannot find people to work at my restaurant; I'll hike the prices, so at least I serve fewer tables and I can manage one way or the other. So we're going to start building a model that takes those things into consideration: what is the impact of a tight supply side of the economy on prices, and how does that feed back into equilibrium output eventually? So the main thing we're going to do, really, relative to the model, in the next two or three lectures, is endogenize the inflation rate. We have kept prices fixed, but now we're going to endogenize them. And the story of that endogenization of inflation starts from the labor market. That's the reason we're going to start looking at the labor market today. Now, let me remind you of a few things that I think we discussed in the first lecture or so, or maybe the second, I don't remember. Let me give you a picture of the labor market and some important statistics of the labor market that matter for understanding inflation and so on. So this is the picture of the labor market that you have in the book. It's a picture of the labor market at some point in 2018-- I don't know exactly when; it's a picture at one point in time. And it says that at that time the US had about 330 million people, and that the noninstitutional civilian population-- that is, those people who in principle could work-- was about 260 million. That excludes people under 16 years old, people who are incarcerated, and people who are in the armed forces. Those are excluded; that's the reason you have such a big gap between these two numbers. Now, out of these people who potentially could work, some of them want to work, and that's what we call the civilian labor force. And then some of them are out of the labor force-- again, at one point in time. It doesn't mean these people are permanently out of the labor force; they may be temporarily out, and so on. But we started with about 330 million, and by the time we get to the people who really want to work at the point when the picture was taken, it was about half of that, 162 million people. Now, of these 162 million people, the great majority are typically employed; they have a job. And then there is a group of people who would want to have a job-- that's the reason they are part of the civilian labor force-- but do not have one. And that's about 6 million in that picture there. So when you hear about unemployment, you're really talking about these people here. And the unemployment rate is these people divided not by the total population, but by the civilian labor force. OK, so that's the picture. The most recent numbers we have for these kinds of statistics, you have here. The unemployment rate in the US today is about 3.4%. That's very low-- I'll show you historical data in a minute, and I have shown you data from the recent past. But this number is very, very low. The change in the unemployment level-- this is for January-- was a reduction. And note this is a level, not a rate: it's the net change in the number of people unemployed, not the number of people who were unemployed and are no longer so.
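Here is the snapshot arithmetic from the 2018 picture, and the definitions just given, in runnable form-- a minimal sketch using the lecture's approximate numbers:

```python
# Labor-market accounting for the 2018 snapshot described above.
population = 330e6        # total US population (for reference only;
                          # the unemployment rate is NOT unemployed/population)
civilian_noninst = 260e6  # noninstitutional civilian population
labor_force = 162e6       # people who want to work
unemployed = 6e6          # in the labor force but without a job

employed = labor_force - unemployed
unemployment_rate = unemployed / labor_force
participation_rate = labor_force / civilian_noninst
emp_pop_ratio = employed / civilian_noninst

print(f"Unemployment rate:  {unemployment_rate:.1%}")   # ~3.7%
print(f"Participation rate: {participation_rate:.1%}")  # ~62%
print(f"Employment ratio:   {emp_pop_ratio:.1%}")       # ~60%
```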
To see how that January number is made: you look at the total stock of unemployed in December, and then you look at the total stock of unemployed in January 2023, and the difference between these two is 28,000 workers. So 28,000 fewer workers are in the unemployment pool. Now notice how this number is made. It's not that 28,000 people just gained a job. That's not what happened. What happened is, first, employment: 894,000 people got a job-- a much bigger number. But also, the civilian labor force went up, by 866,000 people. So if you go back to this picture, what you have in January, or in the numbers reported in January-- I do not know which month they correspond to exactly-- is, yes, this decline. But that decline was made of a big increase in employment together with a big increase in the civilian labor force. So that must have been mostly movement from out of the labor force into employment, and it probably had something to do-- well, I'm not going to get into that here. But all these numbers are seasonally adjusted, so they are corrected relative to what happens normally in January and so on. And COVID and weather can derail a lot of what happens in January and February; these numbers tend to be very noisy. Since COVID they have been very noisy because the seasonal adjustments are different, and also weather matters a lot in January and February. So you can get pretty large fluctuations which are really not that interesting to macroeconomists. But anyway, those are the numbers. You look at civilian labor force participation: it was about 62%, 62.5%. And the employment-population ratio is on the order of 60%; the employment-population ratio is just employment divided by that population. And those are the averages: the number of unemployed in 2022 was about 6 million people. So there you have the unemployment rate. And it moves as you would expect: it typically goes up in recessions. In the last large recession we had big swings. One thing that was interesting, and we couldn't quite understand what was going on, is that we noticed right before COVID that the unemployment rate had already declined to very low levels. And so people were wondering whether something I'm going to talk about later in this lecture, the natural rate of unemployment, had changed for some reason. We'll come back to that. Then we got COVID-- obviously a very recessionary shock initially, massive unemployment and so on. But then it came back very quickly, and today we have record low levels of unemployment. We haven't seen numbers like this since the '60s, really-- very low levels of unemployment. So when you hear the FOMC members talking about labor markets being very tight, one of the things they're looking at is this. There are other statistics that I'll show you, but this is one of them: the unemployment rate is really, really low. Sometimes, again, especially post-COVID, because of movements in and out of the labor force, the unemployment rate is not such a great statistic, not as reliable, because many people left the labor force. So people look a lot at the employment rate. This is not the employment-population ratio over total population; it's the employed over the noninstitutional civilian population. And that number, you can see-- we have discussed this before-- was trending up here because of the increase in the labor participation of women; then it came down, which had a lot to do with students and things like that. But then it was climbing up enormously, and it collapsed during COVID.
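Going back to the January numbers for a second: the arithmetic is just the stock-flow identity U = LF - E, so the change in unemployment is the change in the labor force minus the change in employment. A minimal check with the figures quoted above:

```python
# dU = dLF - dE: small net changes can hide large gross movements.
d_employment = 894_000   # change in employment (January figure above)
d_labor_force = 866_000  # change in the civilian labor force

d_unemployment = d_labor_force - d_employment
print(f"Change in unemployment: {d_unemployment:+,}")  # -28,000
```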
That collapse was mostly unemployment and people moving out of the labor force. And then recovery. But the recovery has not brought us back to the trend. So we are back to more or less the levels we had before COVID, but we're certainly off the trend. And one of the reasons labor markets are very tight is that we haven't recovered the employment rate that we used to have. This has to do with migration flows, with a variety of things. But that's the issue. OK, so those are very static pictures of the labor market: what is the stock of unemployment at one point, what is the unemployment rate, and so on and so forth. But the truth is that labor markets are very dynamic, especially in an economy like the US. The flows are very large. So what I have there-- and I don't know for which date this is in the book, but the pictures look more or less the same for the point I want to make-- is monthly labor flows. This is what happened in some month-- I don't care, 2018, at some point. Look at what happened there. We were talking about the stocks recently. So employment in that month was on the order of 132 million people; out of the labor force, about 79 million people; unemployed, about 8.6 million. Those are the stocks, the type of numbers I was showing you before. But look at these arrows. These are flows. So in every single month, you see in the US about 3 million people who move from one job to another-- employment to employment. You see about 1.8 million who move from employment to unemployment, and about 2 million people who move from unemployment to employment. Large flows. And not only that; not everything goes from unemployment to employment. There are people also moving out of the labor force, and from out of the labor force directly into employment. Here, in this particular case, the flow from out of the labor force into employment is 3.4 million. The flow from employment to out of the labor force, without going through unemployment, is 3.7 million. During COVID, this must have been a very thick arrow-- lots of people moved from employment to out of the labor force. And one of the problems the economy has had in the recovery, on the labor market side, is that this arrow hasn't been as strong as we would want. This arrow, or this arrow for that matter-- people coming from out of the labor force into unemployment. That's also a big flow. Sometimes people are not working, and then they run out of unemployment insurance or something like that, and so they decide to start looking, and they move into unemployment here. Or they run out of savings and have to come back, and they may not find a job initially; they have to go through unemployment. So the point is that these flows are very large, and these flows matter a lot for the kind of things we want to talk about in this course. Look at what we have here. The red line is the unemployment rate, measured on the left axis. And the blue line is measured on an inverted scale-- it goes up as you go down-- and it is the percentage of unemployed workers becoming employed. So it's the job finding rate from unemployment. You have unemployed people; they are looking for jobs, and they will be finding jobs. This blue line here shows you the likelihood that they'll find a job, on an inverted scale. OK, so what correlation do you notice there? I mean, you know, it's very tight. Yeah.
AUDIENCE: The percentage of people that get a job each month is smaller when there are more people without a job. RICARDO CABALLERO: Exactly. That means that when the unemployment rate is high, it is harder for unemployed workers to find a job. Or, a direct implication of that: the typical unemployed worker will spend more time in unemployment, because they're going to be looking for a job and it's harder to get one, so they're going to be looking for a longer period of time. Why are we talking about these things? Well, because of the following. This means that when unemployment is high, workers are worse off in at least two ways-- two ways that are going to be important for what I'll say next. One is that employed workers face a higher probability of losing their job. The reason unemployment gets to be high is that firms are firing workers and so on. And so when unemployment is high, the first thing workers know is that it's more likely they'll lose their job. But the second channel, which is what this picture highlights, is that if you fall into unemployment, it's going to be a lot harder to get out of it. So when unemployment is high, it's scary for workers for two reasons. One, you're more likely to lose your job, because you're capturing recessionary conditions in the economy. But second, if you end up in unemployment, it's going to be hard to get out of it. And later on, this unemployment rate is going to show up in wage bargaining, and the main reason it's going to show up is of this kind. And also think about the other side. In a bargaining situation there are two sides: the firm and the workers. From the firm's point of view, if there's a lot of unemployment, do you think it's hard or easy to find a worker-- to replace a worker who decides to leave for whatever reason? Easy. There are lots of people to choose from. So it becomes easy. So when unemployment is high, workers are more scared: if they have a job, they're scared of losing it; if they lose it, it's hard to get another one. And on the other side, for the firms, it's not that scary to lose a worker, because it's fairly easy to replace that worker. Today, firms in some sectors are very worried about losing their workers; in some other sectors they are getting rid of workers. But if you run a restaurant, you're very scared of losing your workers, because it's going to be very difficult to find a replacement. Surprise, surprise: wages in that industry are going up a lot. We're going to get there. So that's what comes next-- wage determination. Look at what I'm trying to build here. I'm starting by telling you stories about the labor market, about what things are important for workers and so on. Now I'm going to get into wage determination, and obviously the variables I talked about are going to be important in this wage determination. But my ultimate goal is to talk about inflation. So the next step: I'm going to talk about wage determination here, and then we want to talk about prices, and then we're going to be one step closer to talking about inflation. OK, so let's go through the intermediate step, wage determination. Just to give you a little background: sometimes wages are set by collective bargaining-- by unions, in particular. Now, in the US unions are not a big thing. They were a much bigger thing many years back than they are today.
In other economies they are a big thing-- Japan and Europe. And unionization can happen at different levels of aggregation: at the level of the firm, at the level of the sector, and you name it. In general, regardless of the level of unionization you have in a country or in a sector, the higher the skill needed to do a job, the more likely it is that bargaining takes place between an employer and an individual, rather than with a union, because it's much more idiosyncratic and customized and so on. But either way, regardless of whether wages are set at a collective level or at an individual level, the main macroeconomic drivers of wages are similar across both. Of course, the particulars are going to be different; even the dynamics can be different and so on. But the big macro drivers are similar, regardless of the bargaining mode you have and the level at which it happens. And those are the things we're going to highlight here. So, a fact of life is that workers' wages typically exceed the reservation wage. Now, what does the reservation wage mean? The reservation wage is the wage that would leave you indifferent between being employed and unemployed. It doesn't mean it's a nice wage or anything, and it certainly doesn't mean that you wouldn't prefer a higher wage. But the fact that actual wages exceed it tells you that, at the wage you're paid, you'd rather be employed than unemployed. And there is a long list of reasons why that ends up being the equilibrium wage; I'm not going to discuss them here. But take it as a fact for now. So workers prefer to be employed. They may face the risk of becoming unemployed, but they typically prefer to be employed. And now-- this is where it becomes interesting for us in macro-- the wages that are finally set depend on labor market conditions. Very clearly, the lower the unemployment rate, the higher wages will tend to be. And you're seeing it now: the unemployment rate is very low, and wages are rising a lot. And workers' bargaining power-- again, there's a huge literature on these things; I'm compressing it into the bare minimum-- depends on things we already discussed. It depends on how costly it is for the firm to find workers. Obviously, if unemployment is very high, it's very easy for firms to find a worker. That's not good for the bargaining position of a worker: if you want to bargain with your employer and there are lots of people like you out there, you're not going to have a lot of bargaining power, so it's unlikely you're going to come up with a very high wage. And the other side of it is how hard it is for workers to find another job if they were to leave the firm. I mean, if you know there are lots of jobs like the one you currently have out there which are not occupied-- so there are empty, vacant jobs-- then you're probably going to have a much stronger hand with your employer, because you can say, OK, if you don't pay me what I want, I'll move next door. And in terms of the macroeconomic variables we care about, a situation like that is very likely to happen when unemployment is very low, because that means there are lots of vacant jobs and it's difficult for firms to find workers. And therefore, you're going to be in a much stronger position in that labor market.
So, in summary, at the aggregate level we can write a wage setting equation of this form. The wage-- and this is a nominal wage-- can be written as an increasing function of the expected price. Meaning: in most professions, wages are not set second by second. You bargain for a wage, and that wage sticks for a year or so, at least. Well, obviously, if inflation is zero, you're going to demand a wage that is more or less what you need today. If inflation is 10%, you say, well, I'm going to have to demand a higher wage, because I have to live with this wage for a year and prices are going to be rising while I have it. So if I expect lots of inflation, if I expect prices to be high in the future, then I'm going to ask for a higher nominal wage today, because I'm going to have to live with that wage on average for the next year or so. So that's the first thing, and it's going to play an important role: wages are an increasing function of the price level workers expect. If they expect a higher price level in the future, during the life of the wage contract, then they are obviously going to demand a higher wage, other things equal. What are the other things? Well, the other arguments of this function here. Unemployment: for any given expected price, if the unemployment rate is high, workers are going to demand a lower wage. Why is that? AUDIENCE: Because it's going to be harder for them to find a job, so they have less bargaining power. RICARDO CABALLERO: They have less bargaining power, exactly. And so they're going to demand a lower wage. This variable z here is a catch-all variable for workers' strength in the bargaining situation, something like that. So, for example, this is things like employment protection laws, firing costs. If it's difficult to fire someone, z will tend to be high. This tells you that, given the level of unemployment, if it is very hard to fire someone, workers are very likely to demand a higher wage: it's hard for you to fire me, so I'm going to bargain hard for my wage. And these types of institutional factors play a huge role in Europe, much more than in the US. Good. But as a matter of definition, we're going to say that an increase in z is something that increases the bargaining power of workers, and therefore, for any given level of unemployment and expected prices, leads to a higher wage demand. This is the workers' wage demand. We have to figure out what happens in equilibrium, but this is what the workers are demanding. Is it clear what we have here? So let's now move to the other side. That's one side of the scissors: the workers, who, given certain macroeconomic conditions summarized by the unemployment rate and expected prices, demand certain wages. Now, we cannot find the equilibrium wage until we see the other side-- what firms are willing to pay and so on. So we need to explore that other side. And the starting point of that other side is the production function. Firms are going to end up setting prices for goods, but producing those goods takes factors of production; they're going to have to use something to produce them, and the cost of that something will determine, importantly, the price they end up charging. I'm going to simplify things a lot here. I'm going to assume the production function is linear, and a function of labor only-- no other factors of production.
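Before moving to the firm side, here is the wage-setting equation in runnable form-- a minimal sketch, assuming for concreteness the linear form F(u, z) = 1 - alpha*u + z (the form the course adopts later when deriving the Phillips curve), with made-up parameter values:

```python
def wage_setting(P_e, u, z, alpha=2.0):
    """Nominal wage demanded, W = P_e * F(u, z): increasing in the expected
    price level and in bargaining power z, decreasing in unemployment u."""
    return P_e * (1 - alpha * u + z)

P_e = 100.0  # expected price level (hypothetical)
z = 0.10     # catch-all bargaining-power variable (hypothetical)

for u in (0.03, 0.05, 0.10):
    print(f"u = {u:.0%}: W = {wage_setting(P_e, u, z):.1f}")
# W falls from 104.0 to 90.0 as u rises: higher unemployment means lower
# wage demand, holding P_e and z fixed.
```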
This production function says that if you add an extra worker to the big production function of the economy, you're going to get A more units of output. So Y is output-- the output we've been talking about, measured the way we have been talking about-- N is employment, and A is labor productivity, that is, output per worker. I want to make things very simple. We're going to talk a lot about this A in the next part of the course, the part on growth-- what moves A over time and so on. But I'm going to simplify things a lot here for now, and I'm going to set A equal to 1. So it doesn't get any simpler than this as a production function. This production function says: you want to produce one more unit of the good, you need one more worker. If you have 10 workers, you produce 10 units of the good. If you have 11 workers, you produce 11 units. To produce one more unit of the good, you need one more worker. Now, why do you think I'm simplifying it so much and repeating this idea-- one more worker, one more unit of the good? Because that tells you how much it costs the firm that has this production function to produce one extra unit of the good. Well, you have to ask the question: what will the firm have to do? Suppose a firm wants to produce one more unit of the good. What does it need to do? AUDIENCE: Hire another worker. RICARDO CABALLERO: It needs to hire another worker. How much will that cost? AUDIENCE: The wage. RICARDO CABALLERO: The wage. Exactly. So now we're beginning to [INAUDIBLE]. So the wage, in this case, is the cost per unit of production for this firm. The marginal cost of production for this firm is the wage. All the rest-- intermediate inputs and so on-- is abstracted away; this value added is built on labor alone. So this production function, as simple as it is, says exactly that: the marginal cost of production is equal to the wage. So now I'm going to come up with a pricing model, a price setting rule. Firms that understand how much more it costs to produce an extra unit of the good now have to decide the price they want to charge for that extra unit. There is a lot that goes into that decision, but we're going to summarize it with a markup-- very simple. I'm going to say, look, the firm will reason as follows: it costs me one worker to produce one extra unit of the good; a worker costs me W; so the price I want to charge is 1 plus m times W, where m is a positive number-- a number like 0.2. So suppose the wage is 100. If the markup is 20%, you're going to set a price of 120. That's the price setting rule we're going to adopt. And again, it's not that crazy. Simple, but not that crazy. So we call this the price setting equation: the firm takes the wage, because that's the marginal cost of production, and then adds a markup, and that's the final price. Now, we can rewrite this price setting equation as a wage equation in the following sense-- it's still a price setting equation, but all I've done here is divide by P and by 1 plus m. And I get that the real wage the firm is willing to pay is equal to 1 over 1 plus the markup. That's another way of saying the same thing: I took the price setting equation and just rewrote it.
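And the price-setting rule in runnable form-- a sketch of the markup example just given:

```python
def price_setting(W, m):
    """Price as a markup over marginal cost; with Y = N (A = 1), the
    marginal cost of one unit of output is the wage W."""
    return (1 + m) * W

W, m = 100.0, 0.20
P = price_setting(W, m)
print(f"P = {P:.0f}")                  # 120, the lecture's example
print(f"Real wage W/P = {W / P:.4f}")  # = 1/(1+m), about 0.8333
# Note that W/P depends only on the markup m, which is why the
# price-setting relation will be a horizontal line in the diagram
# discussed shortly.
```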
I rewrote the price setting equation this way because the wage setting equation is also written that way-- W over something. I want to write the price setting equation in the same units as my wage setting equation, so I can put them together in one diagram easily and find an equilibrium. So what you see here, for example, is that the higher the markup, the lower the real wage the firm is willing to offer. You see that? And this is an equilibrium at the level of the economy, not of you individually. On average, that's what ends up happening: if firms, on average, end up charging a higher markup, it has to be the case that in equilibrium the real wage offered by the firms is lower. So if we're in a situation where the markup was 0, and now all of a sudden-- because of imperfect competition, or perhaps because the price of a key input went up and it's not well measured in value added, or whatever-- the markup goes to 1, then the real wage in equilibrium will fall to half what it used to be. That's what the firms will offer. Whether that's an equilibrium or not, we shall see; how we get that to be an equilibrium, we shall see. But that's what the firms will offer. That's what the price setting equation says. In fact, you already know from this equation that that's what the real wage will be, because there is no other variable here that can adjust. What happens is that something else will have to give in the economy. So this ends up being the equilibrium real wage-- but you'll understand that better a little later. So now we're almost ready to discuss a very important concept in macroeconomics, and that's the concept of the natural rate of unemployment. Now, the first warning is that there is nothing natural about the natural rate of unemployment. It's not something that God gave us or anything like that. For us, the natural rate of unemployment simply means what I wrote there: the unemployment rate that arises when the expected price is equal to the actual price. That's how we'll define it for this course. If we ask you any question about the natural rate of unemployment, we don't mean that that's what is good, or bad, or what God or someone else decided. All that it means is that in any equation where you have Pe, you can stick in P and then solve for the equilibrium, and the unemployment rate that comes from that is what we call the natural rate of unemployment. Because of this, you can also think of that unemployment rate as a good proxy for what is likely to be the average rate of unemployment of an economy over a longer period of time. Because people are unlikely to be fooled all the time in the same direction: sometimes they're going to expect a higher price [INAUDIBLE], sometimes a lower one, and so on. On average, unless there is something very weird going on, they're going to get it right, because you know more or less what the level of inflation of the economy is. Sometimes you miss up, sometimes you miss down, but on average you're going to be right if you take an average over a long period of time.
So for that reason, you can also interpret this natural rate of unemployment as the unemployment rate of the medium run, if you will-- when you have collected enough data and positive errors are balanced with negative errors, and so on. But that's all we mean by the natural rate of unemployment. Now, notice that with this assumption that Pe is equal to P, I can go back to my wage setting equation, which was W equal to Pe times F of u and z. I divide both sides by P, and then I set Pe equal to P. And now my wage setting equation can be written this way. And notice that the unemployment rate I put here is u sub n-- it's the natural rate of unemployment. Because once I have a model in which I assume that Pe is equal to P, the unemployment rate that comes out of that is the natural rate of unemployment. That's all it means. It says: OK, you allow me to replace Pe by P; well, then I can call the unemployment rate here the natural rate of unemployment. And it has lots of names-- the natural rate, the structural rate of unemployment, and so on. Now, you can see the slope of this function: if the natural rate of unemployment is higher in this economy, what is the real wage that comes from the wage setting equation? Lower. So the wage setting relation is a downward sloping curve in the space of real wages and unemployment, for the reasons we discussed before-- bargaining power and so on. OK, so I can put these together. Remember, the price setting equation also led me to an equation for the real wage, which was not a function of anything endogenous-- only a function of parameters. So that's a horizontal line in the space of real wages and unemployment. Here we have a downward sloping curve. And the intersection of these two curves is the natural rate of unemployment. So this was the price setting relation-- remember, it's 1 over 1 plus m. This is the wage setting equation with the assumption that Pe is equal to P. And thus, the natural rate of unemployment. And here you can understand what I said before: in this economy, because the price setting equation is flat in this simple economy, the real wage is pinned down by the firms collectively-- and not collectively in an oligopsonistic way; it's just what happens in equilibrium. And the equilibrium unemployment, the natural rate of unemployment, is the intersection of the real wage set by the firms and the wage setting relationship. So what happens at a point, say, to the right? What is the situation there? Well, at that high level of unemployment, workers are willing to work for much lower real wages than the firms are offering: at this very high level of unemployment, workers would be fine with this wage, and firms are paying that one. So unemployment is very likely to be falling, because workers are not demanding a lot and firms are going to hire all these workers. The opposite here: workers are demanding a wage that is much higher than firms are willing to pay, and that's likely to lead to more unemployment, because firms are going to be very reluctant to hire these very expensive workers. So unemployment moves in this direction. Good. So that's the natural rate of unemployment.
Again, there's nothing natural about it; it's the equilibrium when you assume the expected price is equal to the actual price. So there are some important parameters in this diagram. One is this m, the markup-- a very important parameter here. If the markup changes, the natural rate of unemployment will change. There's another set of parameters here, which is z. We took as given the institutions that protect the bargaining power of workers-- the supporting institutions, the z. That's a parameter here. If that changes, the natural rate of unemployment will change-- which, again, confirms that there is nothing natural about the natural rate of unemployment. So let me just do it in equations very quickly, and then I'll do a couple of important shifts. In terms of equations, all that I did is say: look, the price setting equation gives us this; the wage setting equation gives us that. Setting one equal to the other, that's the point we found-- the natural rate of unemployment. This was the flat curve; this was the downward sloping curve. These two curves intersect when those two expressions are equal, and that's what you get there. So from there you solve for the natural rate of unemployment. What do you think happens to the natural rate of unemployment if z goes up? Let's just be very mechanical at this point, just math. If z goes up, what happens to F? F goes up. Well, the right hand side hasn't gone up. So this went up; something has to give, so F has to come back down. And the only thing that can give, the only thing that's endogenous in that picture, is the natural rate of unemployment. So if z goes up, F goes up, and I need to bring F back down, because the right hand side hasn't given an inch. So what has to happen to the natural rate of unemployment for F to come back down? It has to rise-- because that's what will weaken bargaining power. Workers' bargaining power got stronger because of the increase in z; equilibrium will weaken it somehow, so that we end up in the same situation, with the same real wage we had before, which was equal to 1 over 1 plus m. What happens if m goes up? Well, if m goes up, markups go up, and that means the real wage the firms offer drops. So if the right hand side drops, then I need the left hand side to drop as well. And the only thing that is endogenous here is the natural rate of unemployment. So I know that I need F to drop. And the only thing that can change here in equilibrium is u sub n: it will increase, because that will reduce the bargaining power of workers, and that will reduce the real wage demand. And therefore you restore equilibrium that way. So this environment is a very nasty environment for workers, in a sense, because the escape valve is always the natural rate of unemployment. So here you have what I just said, in pictures. That's the example of z going up-- the bargaining power of workers going up. Suppose you start at an equilibrium like this, and now z goes up. That means that workers, for any given level of the natural rate of unemployment, want a higher wage, because they have more bargaining power. Well, that higher wage is inconsistent with the wage that firms want to pay.
What restores equilibrium is that unemployment-- the natural rate of unemployment-- goes up enough so that the wage demand comes back down to the same original level, because the price setting equation is completely flat. So there you have a situation where the bargaining power of workers went up, and all that ended up happening is that, in the medium run at least, the natural rate of unemployment went up. That's very much the story of Europe, by the way, in the '80s-- France in particular. France had major labor reforms-- z boosting, if you will-- in the 1980s. Initially it was a great deal for workers: real wages went up and so on. It was wonderful. But eventually, with the passage of time, they ended up not with a much higher real wage, but with a much higher unemployment rate, which went from low single digits to 15% and things like that. And since then, they have been reforming the labor market to fix some of that. But that was very much what happened in continental Europe in the '80s. This is the case of a markup increase, the other one I described. If markups go up, that means firms in equilibrium are not willing to pay the same real wage; they want to pay a lower real wage. Well, at this level of unemployment, workers are not going to take it. The only thing that will restore equilibrium is that the natural rate of unemployment goes up, which weakens the hand of workers, and you end up with this. And again, there is nothing natural about this. I'm not saying this is good or bad. I have no idea why the markups went up. If it is just imperfect competition going up, that's clearly not a good thing. But it may have been something else-- the price of oil went up a lot, I don't know; there was a war somewhere and then productivity came down, something of that kind. The only thing I'm describing here is the mechanics. Good. So the quiz is up to here; the quiz ends here. In the next lecture, we're going to start the Phillips curve, which uses this same model but looks at deviations-- situations where the expected price is not equal to the actual price. And that's going to lead to interesting situations, and then we're going to be talking about inflation. This is not yet a model to talk about inflation; I'm talking about what happens in the medium run. I haven't told you whether the adjustment happens through the nominal wage, through prices, or what. There are many ways of reaching the same real wage. You could lower real wages by increasing wages by 50% and prices by 60%, say. Or you could do it by lowering nominal wages by 10% and not moving prices. So there are many ways of doing it. The Phillips curve is going to allow us to get into that part, but it's not going to be part of your quiz-- that's going to be part of the second quiz. So in the next lecture I'm going to talk about the Phillips curve, and then on Wednesday a review, and then you have your quiz.
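To recap the natural-rate algebra from this lecture in runnable form-- a minimal sketch, again assuming the linear form F(u, z) = 1 - alpha*u + z and made-up parameter values:

```python
# Setting the wage-setting real wage equal to the price-setting real wage:
#   1 - alpha*u_n + z = 1/(1 + m)   =>   u_n = (z + m/(1 + m)) / alpha

def natural_rate(m, z, alpha=2.0):
    return (z + m / (1 + m)) / alpha

print(f"Baseline (m=0.20, z=0.10): u_n = {natural_rate(0.20, 0.10):.1%}")
print(f"z up to 0.15:              u_n = {natural_rate(0.20, 0.15):.1%}")
print(f"m up to 0.30:              u_n = {natural_rate(0.30, 0.10):.1%}")
# Both a rise in z (worker bargaining power) and a rise in m (markups)
# raise the natural rate, exactly the two shifts discussed above.
```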
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_9_The_Phillips_Curve_and_Inflation.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: So today I'm going to talk about the Phillips curve and inflation. Now, as I said in the previous lecture, the material that is specific to this lecture will not enter this quiz. It's the beginning of what is perhaps the most important model you'll see in this class, but it will take us three or four lectures to develop. So I'm going to say things that may help you understand the previous lecture a little better-- so if you're only concerned about the next quiz, there will be a sort of small review of the previous lecture here. But again, anything that is specific to this lecture and was not in the previous one won't be part of this quiz. So what is this Phillips curve? Well, in 1958, an economist at LSE, the London School of Economics, came out with just an empirical relationship. This is A.W. Phillips, who found, using historical data-- for the UK, in fact-- a negative relationship, up to the '50s, between the unemployment rate and the rate of inflation. And then our very own Paul Samuelson and Robert Solow labeled this relationship the Phillips curve, in honor of A.W. Phillips. And nowadays it is a central concept in macroeconomics, and it's certainly very, very relevant for understanding what is going on right now, not only in the US economy but in most economies around the world. So let me show you-- this is not the one that Phillips plotted; I think this is the one that Samuelson and Solow plotted. For US data from between 1900 and 1960, you find this negative correlation. I think it's a reasonable-looking negative correlation between the unemployment rate and the inflation rate, no? At very low levels of unemployment, you typically see very high levels of inflation. Conversely, at very high levels of unemployment, you tend to see low levels of inflation, or even deflation. In fact, this period includes the Great Depression, for example. So that's the data. And again, this was just an empirical regularity. But we can build some theory about this relationship-- essentially, we can build a downward sloping relationship from the ingredients we already have. And this is the part that is a little bit of a review of the previous lecture. Remember that in the previous two lectures we had a wage setting equation: W equal to expected prices times a function that is decreasing in unemployment and increasing in these labor-market, worker-supporting institutions-- institutional variables, I should say. And then we had a price setting equation, which was simply the wage marked up, where m is a positive constant. So let me start from these two. What I'm trying to do is derive a Phillips curve. Again, this was only an empirical relationship, but it turns out that already by the time of Samuelson and Solow, we could come up with a theory of that relationship. And that theory builds on the ingredients we have been looking at. So these are the wage setting equation and the price setting equation. I'm going to simplify things and assume that this function, F of u and z, is some linear function-- at least a locally linear function-- which is decreasing in unemployment and increasing in z. Why is it decreasing in unemployment? This says that if unemployment goes up, for any given expected price, wage demand is lower.
And that's essentially because, for the worker, becoming unemployed is a scary situation, and conversely, for firms, it's easier to find a worker. We said a worker is scared for two reasons. One is that it's more likely that he gets fired when unemployment is high-- typically, that's a recession. The other is that the worker knows that if she were to fall into the unemployment pool, it would take a longer time to get out of it. And the firms are seeing the opposite side: it's pretty easy for them to replace a worker if they were to dismiss one, because there are lots of available workers in unemployment. So that's the reason it's negative. So I'm going to stick this function back in here, and then I'm going to replace this W with this function in the price setting equation, and I end up with an equation for P. So this says that the price, given the expected price, is decreasing in unemployment, increasing in z, and increasing in the markup. So again, why is this price decreasing in unemployment? This is the part that is a review of the previous two lectures. STUDENT: Because when wages go down, then factors of production are cheaper. RICARDO CABALLERO: OK, perfect. Because wages go down, and since our firm needs one worker to produce one unit of the good, the cost of producing one unit goes down with the wage, and therefore the price goes down. Because the firm is asking for a constant markup over that wage, the wage declines and the price drops. Good. So that's all review. This equation you have seen, just without an explicit functional form here. What I want to do is go from here-- this is still not the Phillips curve. Remember, the Phillips curve was a relationship between inflation and unemployment. Here we have a relationship between the price level and unemployment. So we want to take it one derivative higher: we want to go to a relationship between inflation and unemployment. Inflation is the rate of change of P, no? It's not the level of P. So, to do that: when I don't have a subscript here, I mean the price at time t, and this is the expected price, formed today, for the relevant period. What I'm going to do is divide both sides by P minus 1. By that, I mean the price in the previous period. Both sides: I'm going to divide this side by P minus 1 and this one by P minus 1. So I get that expression. That's exactly the same equation we had before; all I did was divide by P minus 1. Remember what this means: if this is the price for January 2023 and we're using annual data, then this is the price for January 2022, and I'm dividing both sides by the price of January 2022. Now notice that P over P minus 1 is equal to 1 plus the inflation rate, where the inflation rate is just P minus P minus 1, all over P minus 1. So this is just straightforward algebra, no? Remember our definition of inflation: so 1 plus pi is just P over P minus 1, and that's what you have there. I can do the same for expected inflation. Notice-- sometimes people get confused-- that expected inflation is equal to expected P minus P minus 1, over P minus 1. It's the actual P minus 1, not an expected P minus 1. And the reason I'm not putting an expectation on it is that at time t, which is when you're forming that expectation, you already know what happened at t minus 1. So that's the reason this is expected inflation.
I don't need to put an expectation in here. So that's pi e. And so I can replace this guy here with 1 plus pi, this guy here with 1 plus pi e, and I get the following relationship. All that I've done is substitute this for that, that for that. So that's our price setting equation now expressed in terms of the inflation rate and the expected inflation rate. And since we're not in Argentina, we're in the US, inflation and expected inflation are small numbers. And the log of 1 plus a small number is approximately that number. So I'm going to use this approximation, which, again, is valid for x small. And so I can replace this log of 1 plus pi with pi, this log of 1 plus pi e with pi e, this log of 1 plus m with m, and this last term here, if these numbers are not too large, with minus alpha u plus z. I do all that and I end up with this expression. All I've done is take logs. So I get log of 1 plus pi equal to log of 1 plus pi e, plus log of 1 plus m, plus log of 1 minus alpha u plus z. I'm saying if pi, pi e, m, alpha u, and z are not very large numbers, which we will assume, then this is approximately right. So I can rewrite that expression as that, approximately. I should have put an approximation sign in. So now we have something that looks a lot more like the empirical relationship we were talking about. We have a relationship between inflation and unemployment. So this says that for any given expected inflation, markup, and labor market institutions, higher unemployment means lower inflation. And why is that? So that curve gives you the negative relationship we wanted, no? It says: higher unemployment, lower inflation. Why is that? Look, you understood it very clearly when we talked about this, no? You understood very clearly why an increase in unemployment lowered the wage. You understood very clearly why, therefore, an increase in unemployment lowered the price. I haven't done anything but algebra in the two steps since. So the same economics behind the explanations that you had before applies to this curve here. So the reason inflation will be lower when unemployment is higher, given all the rest, is that there will be less wage pressure. Workers will demand lower wages. That means lower prices, and therefore inflation will be lower. The economics doesn't change at all. I only divided both sides by P minus 1, took logs, and approximated. So the economics has not changed. I just did a little bit of basic math. What I'm trying to say is that all the intuitions you already had from the wage setting and price setting equations and so on, you can apply to the Phillips curve as well. Good. So now we have something that, in principle, could explain the type of relationship that Phillips found and that Samuelson and Solow then corroborated with extended data. Let's see. How do we get to something that looks like what these people ran as a regression? Remember, they ran a regression, essentially, or they correlated inflation with unemployment. And they found a downward sloping relationship. Well, look at what happens here. Suppose that we assume that expected inflation is equal to some constant. In economics, when that's the case, and especially if pi is a low number, we say inflation expectations are well anchored, meaning in any single year the price of oil may be high, or something happens and inflation will deviate from that. But people are all the time expecting inflation to go back to what is a normal level.
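As a quick numerical sanity check on that log approximation, here is a small sketch (the parameter values are made up for illustration, not taken from the lecture):

```python
# Numerical check of the approximation pi ≈ pi_e + m + z - alpha*u.
# A sketch with made-up parameter values.
pi_e, m, z, alpha, u = 0.02, 0.05, 0.03, 0.5, 0.06

# Exact relationship: (1 + pi) = (1 + pi_e)(1 + m)(1 - alpha*u + z)
pi_exact = (1 + pi_e) * (1 + m) * (1 - alpha * u + z) - 1

# Log-linear approximation, valid when all the terms are small
pi_approx = pi_e + m + z - alpha * u

print(f"exact pi  = {pi_exact:.4f}")   # 0.0710
print(f"approx pi = {pi_approx:.4f}")  # 0.0700
```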
Nowadays, or at least a few years ago, in the US, the normal level was around 2%, say. So people say, well, this year inflation was 1.8%, but we expect 2% next year. The next year, we get surprises on the upside. The price of food went up, or something like that, and we got inflation of 2.3%. But you ask people how much they expect for next year and they say, well, 2%. So that's what a model of expectations like this means. It's that you're always expecting something which is some historical value that we have agreed is a reasonable level for our economy, or something like that. So you see that if I replace expected inflation with a constant here, pi bar, then my Phillips curve is really this. It's inflation equal to a constant minus alpha u. That's the simplest downward sloping relationship I can have. In that case it's a downward sloping line. That's it. Of course, it could be nonlinear and so on. But this captures the essence. So that's a theory for why Phillips was finding what he was finding. Our theory of the labor market, if you will, and the price setting behavior of firms gives us a Phillips curve of the kind he had in mind. And if you look at the '60s in the US, then you see this negative relationship that eventually became steeper. So it wasn't linear like this. It was a little convex, but it's downward sloping. And in fact, to some extent, our very own Bob Solow and Paul Samuelson were advising the US government at the time, and they said, well, let's exploit this stuff a little. We like to have lower unemployment. We can live with a little more inflation, but we know that there's a negative trade-off between these two things. So if we'd like to lower unemployment, that's fine. We get a little more inflation. And initially the deal was very good, because this curve was very flat, you see? So you could cut unemployment a lot-- you can see the dates here. You're cutting unemployment a lot and you're not getting a lot of inflation. Eventually, the deal turned into a much more rotten deal, because then, to lower unemployment a little bit more, we started getting a lot more inflation. So people, for a while, were OK with this model, assuming that inflation was low. But when they realized that this thing was being exploited, they began to change the way they formed expectations. I think that's what we had here. But the regression held pretty well during this time. And again, it became steeper and steeper as we pushed more and more towards very low levels of unemployment. So that's the story. But again, there is your model of the Phillips curve. And that is a very good model for the times when Phillips estimated his curve. Now if you turn the page and look at the same data in the '70s, look how it looks. So from 1970 to 1995-- that's the data you have there-- there is no negative relationship. It's all over the place. So had Mr. Phillips been born a few decades later and had he estimated his regression, he would have found nothing. There would be no curve in his honor, at least if he had run that same regression. Maybe he would have run a different regression. But nothing. OK, so what happened? Well, our theory can explain what happened there as well. Remember, the theory is not that inflation is equal to a constant minus alpha u. The theory says this is a constant only if the model of expectations is this constant.
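In symbols, the anchored-expectations case being described is (my notation for the slide's equation):

```latex
\pi^e = \bar{\pi}
\;\Rightarrow\;
\pi = \underbrace{\bar{\pi} + m + z}_{\text{constant}} \;-\; \alpha u
% a straight, downward sloping line in (u, pi) space
```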
But if the expectation is moving around, or if anything in this constant is moving around, then there is another source of variation. For example, suppose you are here in 1965 and, all of a sudden, the price of oil goes up a lot. Let's capture the price of oil with an increase in m. Firms need to mark up things more in order to cover higher energy costs. Well, look at what m does. m says that for any given level of unemployment, now I get higher inflation. That's what an oil shock does, no? You get an oil shock, and for any given level of unemployment, you now find yourself with more inflation. So that moves you in the opposite direction. It moves you up there. And that's one of the reasons for these points around here. We got lots of inflation because we got massive oil shocks during the '70s and early '80s. We had wars in the Middle East and so on that led to those shocks. So that was one of the reasons: we got shocks to this term here. And that sort of muddied the relationship. But the other reason, which is more interesting, I think-- and you already began to see that something was happening here-- is that as inflation went up, people stopped believing in this model. So the expectation formation mechanism changed. This guy began to react to endogenous variables. And I'm going to explain more precisely why. So that's what we mean by expected inflation becoming deanchored. It was no longer anchored around this constant of 2%; it became deanchored. It began to follow the data. So if the data came in with more inflation, then people believed that next year we would have more inflation as well. Not back to 2%. If we got 5% inflation today, people began to say, well, I don't think my best estimate for next year is 2%. It's probably closer to 5%. So that's what deanchoring means. That's what has the Fed and most central banks around the world terrified today. Inflation is much higher than 2% and they're very worried about this guy becoming deanchored, or unanchored. I'll get back to that in a second. Anyway, let me explain how this expected inflation term works. So let me replace the model of expected inflation with something which is a weighted average of a constant and the most recent inflation. So this model says: what is my expected inflation for next year? Well, it's an average of this long-run target that we have, say 2%, and whatever was the most recent inflation. In the model I showed you before, the one that applied up to the '60s, we had essentially theta equal to 0. So this guy didn't show up there and expected inflation was very well anchored. What began to happen, as we began to move that way and then got hit by oil shocks, so people began to see much higher inflation numbers than they were used to, is that this theta began to increase. So people began to change their model of expectations and began to think that inflation was going to be more persistent than they used to think in the past. So high inflation today means high inflation tomorrow. That's what more persistent means. In the past, it was: high inflation today, that's transitory, we'll go back to the normal long-run average. Now that's no longer the case. And so if I replace this more general model of expected inflation here in the Phillips curve, I get this expression, which now has this extra term. So we used to have theta equal to 0.
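Written out, the weighted-average expectations model and the Phillips curve it implies are (my rendering, with theta as the weight on recent inflation):

```latex
\pi^e_t = (1-\theta)\,\bar{\pi} + \theta\,\pi_{t-1}
% substituting into pi = pi^e + (m + z) - alpha*u:
\;\Rightarrow\;
\pi_t = (1-\theta)\,\bar{\pi} + \theta\,\pi_{t-1} + (m + z) - \alpha u_t
```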
But during the '70s and '80s, and even the early '90s, that theta actually got to be very close to 1. You estimate these models, and you get that theta was very close to 1. And look at what happens when theta gets very close to 1. When theta is literally 1, the best forecast for inflation is the previous inflation. So this year is 5%, and I think next year is 5%. Not 2%, 5%. If this year is 7%, I think next year is 7% again. And if you do that, then my expected inflation becomes lagged inflation, pi at t minus 1. So if I replace expected inflation with pi t minus 1, I get to this Phillips curve, which I can rewrite as a relationship between the change in inflation and the level of unemployment. So now what you have is that if unemployment is very low, inflation is picking up. If unemployment is very low, not only is inflation high, but it's also growing over time. That's the reason people sometimes refer to this formulation of the Phillips curve as the accelerationist Phillips curve, because now it's a relation between unemployment and the change in inflation. And if you estimate this accelerationist Phillips curve on the data I just showed you from the '70s and '80s, you get a much better relationship. You still have the oil shocks that mess things up, but you start recovering this negative relationship. But again, it's between the change in inflation and the level of unemployment. And that's a very scary situation for the central bank to find itself in, because it's very easy for things to escalate. So by the mid-'90s, we had reanchored expectations. There were very aggressive policies to control inflation by Paul Volcker in the US, and they were imitated around the world with some lag. And inflation became reanchored. So we went back to this theta equal to 0 type of model. The target inflation of the central bank in the US was around 2%. That became what people expected for the next year, and that reanchored expectations. So we went back, in other words, to that Phillips curve. And that's where central banks want to be. They want to have inflation expectations very well anchored. And they were very successful after the '90s. And so we got into-- again, now look. Now I'm not running the accelerationist version-- I'm again running inflation against unemployment, and you again see this downward sloping relationship. So that was very good news. A great success of monetary policy during the '90s and later was the reanchoring of expected inflation all around the developed world, and beyond-- even Latin America. Many economies in Latin America reanchored their expectations, Asia, and so on. So it was a good time for central banks. So the next thing I want to do-- this will connect more with the previous lecture, and it's the last thing I want to say for this lecture, and I may start the review afterwards-- is to connect this Phillips curve with something we discussed in the previous lecture, which is the natural rate of unemployment. Because that's the way you typically see the Phillips curve written, and it's also the way that when Chairman Powell is talking about labor market tightness and so on, he's not talking relative to m, and z, and things like that. He's talking relative to what is called the natural rate of unemployment. So I want to go from a Phillips curve that looks like that to one that has the natural rate of unemployment in there.
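Here is a minimal simulation of the accelerationist case just described, theta = 1, with made-up parameter values: holding unemployment below the natural rate makes inflation climb every period rather than settle.

```python
# Accelerationist Phillips curve (theta = 1): pi_t = pi_{t-1} + (m + z) - alpha*u_t.
# A sketch with made-up numbers; u_n = (m + z)/alpha is the natural rate.
alpha, m, z = 0.5, 0.04, 0.02
u_n = (m + z) / alpha            # 0.12 with these illustrative values

pi = 0.02                        # start at 2% inflation
u = 0.10                         # hold unemployment 2 points below u_n
for year in range(1, 6):
    pi += (m + z) - alpha * u    # equivalently: pi += alpha * (u_n - u)
    print(f"year {year}: inflation = {pi:.3f}")
# Inflation rises by alpha*(u_n - u) = 0.01, one percentage point, every year:
# it accelerates instead of settling at a fixed level.
```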
And so that's the last step in this lecture. So remember the definition of the natural rate of unemployment? What was the definition? Was it the unemployment rate that God gave us, any God? No. It had a very precise meaning for us. And remember, we used exactly this model to figure it out. Remember? We solved for the natural rate of unemployment from something like this. The function is still a generic function F of u, z, but we solved it from an expression like this. And we said, under one assumption, we can call this u, un, the natural rate of unemployment. What was that assumption? And that's the only thing-- STUDENT: The price is equal to the expected price. RICARDO CABALLERO: The expected price is equal to the actual price. So we said, if this is equal to that, then you solve out the natural rate of unemployment. And that's the only thing the natural rate of unemployment means: simply the rate at which the price is equal to the expected price. But if the price is equal to the expected price, what else is equal? And I pointed at the right expressions. STUDENT: [INAUDIBLE]. RICARDO CABALLERO: Inflation is equal to expected inflation. So I can use the same logic for the natural rate of unemployment using the Phillips curve. I can say, OK, I can solve for the natural rate of unemployment here simply by setting expected inflation equal to actual inflation. And if I do this, I can solve for the natural rate of unemployment from here, un. I'm going to put the superscript n here when I replace pi e with pi. The fact that I replaced this pi e with pi is what allows me to put the superscript n there and call it the natural rate of unemployment. And now I can solve it. Well, obviously that cancels with that, and I can solve for the natural rate of unemployment, and it's equal to this function here. So why is the natural rate of unemployment increasing in m? A question like that can come up in the quiz. I'm not going to use the Phillips curve to ask you about that, but I can ask you that. What happens to the natural rate of unemployment if m goes up? You know that un will go up, but what is the mechanism? Why does the natural rate of unemployment go up when the markup goes up? Yep. STUDENT: [INAUDIBLE] RICARDO CABALLERO: I mean, another way of saying it is that the firms want to pay a lower real wage. At the original level of unemployment, before the change in m, workers would not take that lower real wage. It's not an equilibrium real wage, because workers say, no, no. At this level of unemployment, we need a higher real wage. So the only way to restore equilibrium in the model we had was to increase unemployment, because that lowers the bargaining power of workers, and they end up accepting the lower real wage that firms are now willing to offer. So that's the reason we get this markup effect. z is the same logic. It's a little easier to see there, but z means, well, at any given level of unemployment, an increase in z means workers want a higher real wage. Firms are not willing to pay a higher real wage. So you have to bring down the real wage that workers demand, and the only way that can happen is with higher unemployment. That's the reason the natural rate of unemployment is also increasing in z. And now the last step. The last step is-- you see, I can go back to my Phillips curve, that one.
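For reference, the natural-rate calculation just performed, with the linear Phillips curve, is (my rendering):

```latex
% Setting pi = pi^e in pi = pi^e + (m + z) - alpha*u:
\pi = \pi^e \;\Rightarrow\; 0 = (m + z) - \alpha u_n
\;\Rightarrow\; u_n = \frac{m + z}{\alpha}
% u_n rises with the markup m and with the institutional variable z
```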
And I'm going to replace m plus z with alpha un. I can do that, you see? How do I know that? Well, m plus z is equal to un times alpha. So we can replace m plus z in the Phillips curve with alpha un, and I can therefore rewrite the Phillips curve in the following form. Inflation is equal to expected inflation minus alpha times the gap between the unemployment rate and the natural rate of unemployment. So when Chairman Powell is worried about the labor market being very tight, what he's saying is, well, unemployment is likely to be below the natural rate of unemployment. Because if unemployment is below the natural rate of unemployment, that's putting upward pressure on inflation. So that's what he means. This gap is very important for macroeconomists, and certainly for central bankers that are very worried about inflation-- that gap there. The problem is, this is a difficult object to estimate. So you have to have estimates. The truth is that it's very difficult to know what it is, although there are estimates out there, and I'm going to show you one. You notice that something is wrong when this gap starts telling you one thing and inflation does another. The US, in fact, had the opposite problem before COVID. Somehow, unemployment was very low relative to historical levels, but inflation was not picking up. So that was implicitly telling us that, for some reason not fully understood, the natural rate of unemployment was declining. So here is one picture, one estimate. Again, I don't trust any particular estimate, but it tells a story. That's one particular estimate of the natural rate of unemployment in the US, that blue line. And what you see in red is the actual rate of unemployment in the US. So what happens in situations like this? What do you think was happening to inflation in this episode, which is right after the global financial crisis, or the Great Recession? What do you read here? Well, the unemployment rate was a lot higher than the natural rate of unemployment. Does that put upward or downward pressure on inflation? Downward pressure on inflation. Unemployment is very high relative to the natural rate of unemployment. It's minus alpha times u minus un. And that's what happened. We had the opposite problem with inflation. Inflation was running very low. We even had negative inflation there, a little deflation, for a while. So that was the problem. Here is the period I described before. It's a little mysterious, because unemployment went below what we thought was the natural rate of unemployment and inflation wasn't really picking up a lot. At the end, it began to pick up a little. But it wasn't picking up a lot, and that was a bit of a mystery. Now we're in this situation here, where we have extremely low unemployment and very high inflation. So I think this captures well the situation right now. We have a negative gap between unemployment and the natural rate of unemployment. And that's putting a lot of pressure on inflation. We also have other things putting pressure on inflation that come from the supply side of the economy and so on. So that combination is pretty bad for the inflation outcomes and outlook as well. So that's where we're at. We're going to talk a lot more about this, because this is what is going on right now. Any questions about that? Otherwise, I want to start reviewing things.
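In symbols, the substitution described above is (my rendering):

```latex
m + z = \alpha u_n
\;\Rightarrow\;
\pi = \pi^e + \alpha u_n - \alpha u = \pi^e - \alpha\,(u - u_n)
% inflation rises above expected inflation exactly when u < u_n
```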
Although, I don't know-- any question about this? Yeah? STUDENT: Is the only way to fix it in this direction to increase unemployment? RICARDO CABALLERO: Sorry? STUDENT: Is the only way to fix, I guess, the inflationary-- RICARDO CABALLERO: Well, that's a very good question. That's a very good question. I'm trying to decide how to answer it with what we have. There are two views at this moment. There's one view that says there's no way around that. Just look at this curve. It says, look, there is no way around that. That's the reason we need a recession. Because otherwise, we're not going to control inflation. A recession means high unemployment. That's one view. At this moment, it's becoming the dominant view. It has gone in cycles. But at this moment, it's the dominant view. There is another view, which is the one that the central bank, the Fed, adopted for a while, that said, well, this is not the only indicator of tightness of the labor market. There are other things as well. And those indicators are moving in the right direction. And so we may be able not to create a big mess here, because these other factors are moving in the right direction. Some of those factors are, as I said, other measures of labor market tightness and hiring-- the flows. Remember I showed you flows between employment and unemployment, out of employment, and so on? Those flows looked extremely tight and now they're improving. So the gaps in those dimensions are better. And the other one is that there was a big cost-push component, which is what I said before. The supply chains and so on created extra inflation, abnormal inflation, like increasing markups. m was very high. And some of that is subsiding as well. So there are dynamics that suggest that inflation is declining even without unemployment rising. But I would say the median voter in this space of inflation forecasts and so on thinks that we will need some adjustment through this part as well. My main concern-- I think that the path the Fed is forecasting is feasible, but a very narrow path. I mean, it may happen. And to me, whether they're successful at not creating a big mess here-- I mean, without bringing unemployment very high in order to bring inflation down-- has a lot to do with whether somehow we manage to keep expected inflation anchored. And there was some evidence-- I think I said this a few lectures ago-- there was some evidence in the summer of 2022-- I'm from the Southern Hemisphere, so I always get confused with summers and so on-- in the US summer of 2022, that inflation was becoming very unanchored. This guy, one-year expected inflation, was creeping up to 6%, and that was very scary. Because think about what happens. If you get expected inflation at 6%, then it's not enough to bring unemployment to the natural rate of unemployment to get inflation back to the 2% we like, because now you need to bring expected inflation down. And that means you need to bring the unemployment rate very, very high in order to re-anchor expectations. So that's a very scary situation. They were very persuasive, though, at the end of the summer, with very hawkish speeches and so on. And they managed to re-anchor expected inflation. Expected inflation very quickly came down to 2%, 2.5% one year out even. But now it has been picking up again and we're around 3% again. So it's a little scary where we are. So to me, this is going to be very important in that.
So if inflation keeps lingering around 6% and so on, and eventually expected inflation becomes unanchored, then there's almost no way around it but to have a recession to get out of that. If that doesn't happen-- if they succeed in convincing people that they're very serious about this stuff and they re-anchor expected inflation-- then we don't need to create a large recession. Still, they may cause one, because accidents happen, but they don't need to. But they will need to if this guy gets unanchored. Actually, maybe I can use this expression here to explain what I'm trying to say. And I realize that, again, this is material really for the next lecture. What I'm trying to say is that if they manage to keep this theta very close to 0, then in order to bring inflation back to their target of pi bar, 2% or so, all that they really need to do is bring unemployment to the natural rate of unemployment. So they only need to fix this gap. They need to raise unemployment so it closes that gap. But it's a small change. That is, if they succeed in keeping expected inflation at around 2%. If they don't-- suppose that theta becomes very far from 0-- then we have a problem. Because then expected inflation is above the target, no? Because we have 6%. So suppose theta is equal to 1. We have 6% inflation, so expected inflation is 6%. That means that if your expected inflation is 6%, and you bring unemployment just to the natural rate of unemployment-- the red line to the blue line-- you haven't made a lot of progress. All you have done is bring inflation down to 6%, which is expected inflation. So if you have expected inflation of 6%, you need to bring unemployment much higher than the natural rate of unemployment in order to bring inflation back to the target of 2%. That's the reason I say that, to me, the battle will be won or lost on that term there. Yep? STUDENT: How much of the current inflationary pressure is caused by unemployment and how much of it is caused by the supply side? Because it feels like a lot of the stuff, like CPI going up, energy prices going up-- how much can the Fed control something like that? RICARDO CABALLERO: Well, it varies in different places around the world. But in the US, for a while, a big component of the inflation was all that stuff, bottlenecks in the ports and things like that. That's almost all gone. There's very little of that left. So now it's aggregate demand. People feel very rich for a variety of reasons. They're spending a lot, and that's the reason unemployment is very low. It's not unemployment per se; it's that aggregate demand is very high. And that translates into very low unemployment, and that feeds into inflation this way, through wages and so on. But in the US, the component of aggregate demand is much larger than in Europe. In Europe, those supply side factors are much more important. So around the summer of 2022, you could say both Europe and the US had about the same amount of excess inflation. Both were around 10% inflation. But in the US it was two-thirds excess aggregate demand, while in Europe it was two-thirds problems on the supply side, especially because of the war and things like that. But for the US today, it's mostly an aggregate demand problem. We're not going to get a lot of-- obviously, if the war stops, that's going to help. But it's not going to be enough. The economy is just too hot. There's too much aggregate demand out there. That's the fundamental problem.
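To make the earlier arithmetic concrete-- the point about theta and the 6% expected inflation-- here is a small sketch (all numbers made up): with theta = 1, hitting u = u_n only holds inflation where it is, and a one-year disinflation requires a much larger unemployment gap.

```python
# Disinflation under unanchored expectations (theta = 1), made-up numbers.
# Phillips curve in gap form: pi_t = pi_{t-1} - alpha*(u_t - u_n).
alpha, u_n = 0.5, 0.04
pi_now, target = 0.06, 0.02

# Bringing unemployment only to the natural rate leaves inflation unchanged:
print(f"u = u_n: next year's inflation = {pi_now - alpha * (u_n - u_n):.2%}")  # 6.00%

# Cutting inflation from 6% to 2% in one year needs a gap of (0.06 - 0.02)/alpha:
u_required = u_n + (pi_now - target) / alpha
print(f"one-year disinflation requires u = {u_required:.2%}")  # 12.00% here
```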
STUDENT: Can you explain again why an increase in z would increase the natural rate of unemployment? RICARDO CABALLERO: An increase in z? Yeah. For that, the basis is the previous diagram, but remember what z does. Actually, let me go to this equation here. We can figure it out with these two equations here. If z goes up, that means for any given level of unemployment and expected prices, wages go up. Workers demand a higher wage. But remember that the firms-- sorry. We're talking about the natural rate of unemployment, so let me replace this Pe with P, first of all. So I'm going to divide W by P on both sides. So if z goes up, the workers want a higher real wage, because if z goes up, then W over P-- I'm dividing both sides by P-- goes up. Workers demand a higher real wage. But for the firms, from here, you can see that if I divide both sides by P, the W over P that the firms offer is equal to 1 over 1 plus m. So the firms are not going to offer a higher real wage. The workers want a higher real wage. The only thing that can restore equilibrium-- so that the workers end up demanding the same real wage the firms are willing to pay-- is that somehow the workers' hand gets weakened. And the only variable here that can weaken their hand is higher unemployment. So let me put it all together. At the natural rate, I know that Pe is equal to P. So that means the wage setting equation implies W over P equals F of u, z. From the price setting equation, I have that W over P is equal to 1 over 1 plus m. So in this very simple model, this is given. If this guy goes up, these guys want a higher real wage. But that cannot happen, because it would be inconsistent with the price setting. So you need to bring this guy down. The only thing that can bring it down is for unemployment to go up. And the u at which that happens-- we call that the natural rate of unemployment. STUDENT: Last lecture, we talked about the labor force participation rate. Is there any reason to try and increase that to increase-- RICARDO CABALLERO: Oh, that would be fantastic. Yes. STUDENT: Is there a policy that [INAUDIBLE]? RICARDO CABALLERO: Well, I mean, there are negative policies as well. A z reduction, in a sense, does that, because there were emergency unemployment benefits, emergency income supplements and so on as a result of the pandemic that are disappearing slowly. And that's very natural, so it's going to bring participation back up. And it is beginning to pick up. So yeah, you need to incentivize return to work. And there are some people for whom there is nothing to be done-- they retired, essentially, or they have health problems and just cannot return. We lost them. And the other margin, which is very important, is immigration. That's a big issue, because immigration, obviously-- I'm not a labor economist, but I think we lost, in the US, a flow of the order of 500,000 people a year during COVID. And that's a big chunk of the decline in the labor force. What you need is more labor supply. That puts downward pressure on wages for the same amount of aggregate demand. And that's what you need. But yeah, that's a very good point. We're taking all that as given here. Remember, we're fixing all that. But if you don't, then other terms start appearing in this expression and so on. Good. Obviously, I'm not going to start the review. We have only one minute. So in the next lecture, I'll just review the material for the quiz.
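For reference, the two-equation logic used in that last answer can be summarized as (my rendering):

```latex
% At the natural rate, P = P^e, so:
\frac{W}{P} = F(u_n, z)   % wage setting (what workers demand)
\frac{W}{P} = \frac{1}{1+m}   % price setting (what firms will pay)
% Equilibrium requires F(u_n, z) = 1/(1+m); if z rises, u_n must rise
% to push F back down to the fixed real wage firms offer.
```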
MIT 14.02 Principles of Macroeconomics, Spring 2023. Lecture 20: The Mundell-Fleming Model
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: Let's start with the Mundell-Fleming model. Now, this is a model that I think is extremely useful. And in the short term it will be important for you, because probably 70% of the quiz will be related to this model. Meaning, we're going to use this model for different things. But if you understand it well, you probably have 70% of the last quiz under control. So I'm going to go very slowly over it. And please stop me if there is any step you don't understand. I put steps in for myself so I don't rush. Because again, I think it's important to understand things. So here you have the exchange rate-- two exchange rates. The white one is the euro-dollar exchange rate. I'm quoting it the opposite of the way it's normally quoted. There are some conventions in FX markets. But this is as we have defined it in this course: if it goes up, it means an appreciation of the local currency, which here is the dollar. That is, you get more of the foreign currency per unit of the domestic currency when it goes up, and down is a depreciation. And you see there that the dollar gained value relative to the euro through all this period. And then it has lost quite a bit of value since late 2022. With respect to the Japanese yen-- that's the blue line-- the whole cycle is even more dramatic. Big appreciation of the dollar, depreciation of the yen, and a reversal since late 2022 or so. So what is behind these big fluctuations? Many things. FX rates are volatile, like almost any asset price. But one of the main drivers of these fluctuations is perceptions about interest rate policy in the different parts of the world. So the reason for the rise of the dollar here is mostly that investors in general perceived that the US was more advanced in its business cycle. It began to tighten interest rates before the rest of the world. And since interest rates were rising in the US, that led to an appreciation of the dollar by a mechanism that I described at the end of the previous lecture, but I'm going to repeat today. Remember when I talked about the uncovered interest parity condition? Well, it's related to what I'm talking about here. And I'm going to go over that again. And a big reason for the decline more recently is simply that there is a sense that monetary policy is peaking in the US in terms of tightness while the rest of the world is catching up. And in the case of Europe, more than catching up, because they have further supply shocks coming from energy shocks and so on. So if you look, for example, at the expected policy rate path in the case of the US, nowadays it looks like this. So the markets still expect some hikes in the US, but a limited amount of hikes, and then they expect the Fed to quickly start undoing that. That's what this path is telling you. This is the expected policy rate path: what the market thinks now the policy rate will be at the next meeting, two meetings from now, three meetings from now, four meetings-- meetings of the FOMC-- from now. OK, well, if you look at the same picture in Europe, it looks like that. It's clear that there still is more ahead. And you see that. That's what the market perceives at this point. Whether that ends up being true or not doesn't matter. At any point in time the exchange rate is determined by what the markets think. So what actually happens is less important for an asset price.
An asset price is a lot about pricing today things that you expect to happen in the future. But what you expect is what matters, not what actually happens. And at this moment, the market expects the euro area to go through a more prolonged period of interest rate hiking. Japan hasn't had hikes in interest rates for three decades. But even now, you begin to see some-- the scale here is very small. These are a few basis points. But the point I'm trying to make is that certainly people expect interest rates in the US to go down relative to interest rates in Japan. Not to say that the interest rate in the US will be lower than the interest rate in Japan, but the direction of the change is that way. So relative to where we're at now, the direction of the change is towards the US loosening monetary policy before the rest of the world does. And that's what is leading to these big swings. As I said before, this is a period in which the US had to start tightening before the rest, and the currency appreciated a lot, especially with respect to the yen. Because again, the yen has been against the zero lower bound for a very long time. So nobody expected the yen to move, to follow the US. And with respect to Europe, well, Europe was having inflationary problems and so on as well. So people expected it to follow the US at some point. For Japan, there was nothing like that. And that's what led to a massive depreciation of the yen, an appreciation of the US dollar vis-a-vis the yen. So what the Mundell-Fleming model is about is, first, connecting these things-- trying to understand what moves the exchange rate, how the different monetary policies or other policies in different places of the world affect the exchange rate. And then it's about understanding how those exchange rate movements affect real activity in the short run. That's what the Mundell-Fleming model is. So really, we're going to go back to our old IS-LM model-- very short run. We're going to even fix nominal prices and so on. So back to that environment. But we're going to do it in an open economy. So we're going to have a new variable floating around, which is the exchange rate. And we need to understand how the exchange rate moves when different things happen in different countries, and what the impact of that is on aggregate demand, and hence on output-- we're talking about the very short run-- in the different parts of the world. So that's the plan. That's what we intend to do. So let's start with the Mundell-Fleming model. Remember, we wrote down the equilibrium in the goods market in the previous lecture. And I'm just reproducing what I wrote in the previous lecture. So it looks exactly like the closed economy. Output is being determined by aggregate demand. But it's aggregate demand for domestically produced goods. Demand for domestically produced goods is not the same as domestic demand for goods, which is this, because now there is a net export term. So part of the things that residents demand, they demand from the rest of the world, not from domestic producers. And at the same time, part of the demand perceived by domestic producers comes from the rest of the world, from exports. So that's the reason we got an extra term here, which is net exports. And we said net exports is a function of three things. First, it's a decreasing function of domestic output-- domestic income.
Why is it a decreasing function of domestic income? Why do net exports decline when domestic income rises? AUDIENCE: People might be more inclined to buy imports. RICARDO CABALLERO: They import more. They consume everything more. But part of that is imports. And so part of the energy of that extra demand goes to foreign goods. And that's what deteriorates net exports. And that's the reason, had we just stopped there and made the net export function just a function of output, we would not have needed all this extra apparatus that I'm about to build, because all that would have meant is that we have a smaller multiplier. It would have been exactly the same as what we did in the closed economy, but with a smaller multiplier, because every time output goes up, now part of that demand goes to foreign goods rather than domestic goods. But it's not so-- first, because we have another income that matters here, which is the income of the rest of the world. But more importantly, because we also have an exchange rate. But let's start from this side. So net exports is increasing in the income of the rest of the world. Why is that? That is, demand for domestically produced goods rises when foreign income goes up. Why is that? The symmetric argument, with imports. Well, our exports are the imports of the other country. So if the income in the other country goes up, then their imports will go up, which means our exports go up. That's the reason net exports go up. And the last term, remember, says that net exports is decreasing in the real exchange rate. Why is that? What happens when the real exchange rate goes up? AUDIENCE: It makes our goods more expensive relative to foreign goods. RICARDO CABALLERO: Exactly. Our goods become more expensive relative to foreign goods, and that affects us along two dimensions. First, our exports will tend to decline because our goods are more expensive. And also, our imports are going to tend to increase because foreign goods are cheaper. And so that's the reason this is decreasing with respect to the exchange rate. The big thing of the Mundell-Fleming model really comes from the fact that this guy is there. Had we not had the exchange rate there, again, we could have used exactly the same apparatus as we used earlier on. But we're going to have an exchange rate floating around. And that will require us to build a little more. We need an extra equation, because we have an extra endogenous variable. Now, what I'm going to assume here, as we did in the first part of the course, is that both domestic and foreign prices are completely fixed. So I'm going to ignore the Phillips curve, inflation, expected inflation, and all that, and assume all that is 0. Expected inflation, inflation: zero. When I do that, the same equation, the equilibrium in the goods market, changes a little bit. I mean, it's the same equation. But now I don't need to differentiate between the real interest rate and the nominal interest rate, because inflation is zero. So the nominal interest rate is equal to the real interest rate. So I'm going to stick the nominal interest rate in here. Second, I really don't need to differentiate between the real exchange rate and the nominal exchange rate, because the prices themselves are not changing. And so all that will move the real exchange rate is the nominal exchange rate.
So that's the reason I'm going to write the nominal exchange rate here: it's the only thing that will move this variable around, given that prices are fixed. So this is our equilibrium in the goods market. And this is the thing you need to compare with lecture three or something like that. And as I said, this part here only lowers the multiplier, so not a big change. This one here is an extra parameter that shifts aggregate demand up and down. So you can treat it almost like we treated [INAUDIBLE]. Remember, as consumer confidence goes up, aggregate demand goes up. Well, here, if the rest of the world's output goes up, it does exactly the same-- the same analysis. The problem we have, though, is that we have an extra variable here, which is the exchange rate. And that's an endogenous variable. So we're going to have to come up with some other equation to solve for that variable. In lecture 3 or 4, what we did is, OK, we said we have two endogenous variables, output and the interest rate. We need one more equation. Well, the other equation was just monetary policy that set the nominal interest rate. Here, that's not going to be enough, because we also have an exchange rate floating around. So we need to bring in another equation to deal with this new endogenous variable. What is that extra equation? Well, it's the uncovered interest parity condition. Remember, it's the last expression we had in the previous lecture, and it takes this form. Before I simplified lots of things, I wrote this down. And it says that the exchange rate is equal to that. Now, where does this equation come from? What is it trying to do? Remember, we talked about this in the context of-- well, when you open goods markets, you need a relative price to decide what you're going to buy. That's what the real exchange rate did. And then we opened the capital account. And then people need to decide where they're going to invest their money. And that equation is related to that. AUDIENCE: The expected rate of return has to be the same for domestic and foreign bonds. RICARDO CABALLERO: Exactly. It's what equalizes expected rates of return. In equilibrium, that has to happen. Again, in reality, there is risk adjustment. There are lots of other factors that we're removing from here. But absent those other factors, the expected returns have to be similar in both places, because if one asset is giving a higher expected return than the other, then people are going to invest all their portfolios in that asset. And what happens is that those flows trying to go to the asset that gives the highest return end up equalizing expected returns in equilibrium. And that's exactly what this equation does. How do I know that? Well, remember, I can divide both sides by the exchange rate. And then what you get is 1 equal to a ratio: in the numerator, the domestic interest rate times the expected appreciation of the currency, and in the denominator, the foreign interest rate. And so when you compare the two, you have to compare one base interest rate, the domestic or the foreign, plus the expected appreciation of that currency. And that's what this term here is doing, this divided by that. Good. So what do we get out of this? One thing we're going to do for quite a while, because it will simplify things a lot-- but it can sometimes also lead to confusion in the way we understand why currencies depreciate or appreciate, so we'll pause and I'll remind you of this repeatedly.
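To keep the pieces straight, here are the two relationships as I read them from the lecture, in standard textbook notation (E is the nominal exchange rate, foreign currency per unit of domestic currency):

```latex
% Goods market (fixed prices); NX falls with Y, rises with Y*, falls with E:
Y = C(Y - T) + I(Y, i) + G + NX(Y,\, Y^*,\, E)
% Uncovered interest parity, with the expected exchange rate taken as given:
E_t = \frac{1 + i_t}{1 + i^{*}_t}\;\bar{E}^{\,e}_{t+1}
```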
We're going to assume for now that the expected exchange rate for t plus 1 is fixed. And until I tell you otherwise, we're going to make this assumption. Now, that's a huge simplification, completely unrealistic, and so on. But it will help me explain the mechanism. I mean, one of the things that moves exchange rates a lot is that people have lots of expectations about future exchange rates. We'll get to that later. But for now, so you understand how the Mundell-Fleming model works, I'm going to assume that we all have a common expected exchange rate and it's fixed. We may move it as a parameter. But I'm not going to endogenize it. I'm going to take it as fixed. And I may move it around to show you what happens when it changes. But I'm not going to endogenize it. Otherwise, I need one more equation, and I want to stop this sequence of equations that I would have to build. Later we'll understand better what I just said. But for now, just take this as fixed. So if I take this as fixed, now I have an equation. Remember, I was looking for an equation for my exchange rate. Once I do that, I have what I want. I have an equation for my exchange rate today. It's just a function of the domestic interest rate, the international interest rate, and the expected exchange rate. So I know the following, for example. I know that an increase in the domestic interest rate, other things equal, appreciates the exchange rate. I can see it in the equation. If I move the domestic interest rate up, the exchange rate goes up. That's an appreciation. The dollar becomes more expensive. Even simpler: suppose we start with a situation in which the domestic and the international interest rates were the same. And now I increase the domestic interest rate. And I'm saying the exchange rate will appreciate. Well, first of all, let me start with something even simpler. Suppose that the domestic interest rate is equal to the international interest rate, before analyzing the change I'm about to analyze. Then from this equation, what do I know about the exchange rate? What is it equal to? If the domestic interest rate is equal to the international interest rate, what is the exchange rate today equal to? The expected exchange rate of next period. If I have the same interest rates, I cannot expect a capital gain or loss on the currency position, because I already have an equal interest rate on the two bonds. So I'm starting from a situation where the current exchange rate is equal to the expected exchange rate and these two interest rates are equal. And now I'm going to increase the domestic interest rate. And it's very easy for you to read from here that the exchange rate will go up. The currency will appreciate. Why? This is not an easy thing to answer unless you have read the book or something. AUDIENCE: The interest rate goes up, then money supply should go down, which would generally increase the value of money. RICARDO CABALLERO: Nope. No money here. Money is only related to the mechanism we use to set the interest rate. I'm saying, just use that equation and the logic behind that equation, the uncovered interest parity. Why is it that, if we start from a situation in which interest rates were the same, and now I increase the domestic interest rate, the exchange rate has to appreciate? AUDIENCE: [INAUDIBLE] RICARDO CABALLERO: No, no, but that's a description of-- yeah, we know that. The question is, what is the logic? Yeah, we know the result.
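A tiny numerical sketch of that comparative static, with made-up interest rates and a fixed expected exchange rate of 1:

```python
# Uncovered interest parity with a fixed expected future exchange rate (a sketch).
# E = (1 + i) / (1 + i_star) * E_expected  -- the lecture's UIP condition.
def uip_exchange_rate(i: float, i_star: float, e_expected: float) -> float:
    """Current exchange rate implied by UIP (foreign currency per dollar)."""
    return (1 + i) / (1 + i_star) * e_expected

E_e = 1.00                                  # fixed expected exchange rate (illustrative)
print(uip_exchange_rate(0.03, 0.03, E_e))   # equal rates  -> E = E_e = 1.0
print(uip_exchange_rate(0.05, 0.03, E_e))   # domestic hike -> E ~ 1.0194 (appreciation)
print(uip_exchange_rate(0.03, 0.05, E_e))   # foreign hike  -> E ~ 0.9810 (depreciation)
```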
What I'm asking for is an economic explanation of that result. AUDIENCE: More people will want to invest in the currency, so the demand goes up and [INAUDIBLE]. RICARDO CABALLERO: Well, if you go to Wall Street, they will explain it in those terms. It's not the right explanation, but they will explain it in those terms. And there is some logic behind that, because this equation assumes that the arbitrage happens instantaneously. Immediately things move. But before that happens, some people will start buying more of the one that has the higher return. But this equation already solves all that. And that's where this assumption matters. And it's a little annoying. It bothers me for a variety of reasons. But we're going to use it to understand the mechanism. You see, if I keep the expected exchange rate fixed: we start from a situation where the exchange rate was equal to the expected exchange rate. If I keep it fixed and the currency appreciates today, then what do I expect to happen to the dollar-- let's talk about the dollar-- from this period to the next one? If the exchange rate now moves above the fixed expected exchange rate, what do you expect the exchange rate to do? Exactly. It has to depreciate. So the reason the appreciation happens today is that you need to expect the dollar to depreciate from this period to the next one. Why do I need to expect the exchange rate to depreciate? I appreciate the currency today, and in equilibrium, I need to expect it to depreciate. That is, I need to expect to lose money on the currency part of the trade. Why is that? Confusion is good. You learn from that. And this can be very confusing, I know. What is this equation trying to do? We are trying to make the expected returns the same. That's the whole idea of this. So if I am now telling you that one bond is paying a higher interest rate than the other one, I need to offset that somehow. How do I offset it? By expecting a depreciation of the currency of the bond that pays the higher interest rate-- the bond denominated in the currency that is expected to depreciate. So what I need to do is compensate for the interest rate differential with an expected depreciation of the currency that is paying the higher interest rate. And in this model, when I fix the expected exchange rate, the only way I can do that is by appreciating the currency today, so I can expect it to depreciate in the future. That's the logic. Now, what is the connection with Wall Street? They will tell you, well, this may happen not instantaneously. It happens somewhat slowly. So traders immediately will go to the US dollar bond, because they see that it has a higher return. And that will be the case until the currency really appreciates. Once the currency appreciates enough, that advantage disappears. That's what this condition is doing. It's making the expected returns the same. But in the process of the exchange rate going from the initial exchange rate to the new equilibrium exchange rate, there may be an opportunity there. And that's when you start seeing these flows. That happens very, very fast. But that's when you can see some of those flows. I mean, in this market, that happens very, very quickly.
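To verify that logic with numbers (again, made-up rates): after the UIP appreciation today, the expected dollar return on the foreign bond exactly matches the domestic bond.

```python
# Check that UIP equalizes expected returns (a sketch with made-up rates).
i, i_star, E_e = 0.05, 0.03, 1.00             # domestic rate, foreign rate, expected rate
E = (1 + i) / (1 + i_star) * E_e              # today's UIP exchange rate, about 1.0194

home_return = 1 + i                           # $1 invested in the domestic bond
# $1 converts to E units of foreign currency, grows at i*, converts back at E_e:
foreign_return = E * (1 + i_star) / E_e
print(home_return, round(foreign_return, 4))  # both 1.05: the ~1.9% expected
                                              # depreciation offsets the 2-point differential
```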
So what is typically wrong is that an analyst then comes and explains the story of why the exchange rate is going to continue to appreciate. Well, that's just way too late. You're already in this environment. You lost the trade. What about an increase in the foreign interest rate, i star? So for an increase in the foreign interest rate, let's start from the same situation we had before. We start from the domestic interest rate equal to the international interest rate. Therefore, the exchange rate is equal to the expected exchange rate. And now the foreign interest rate goes up. That's what is going on now: the US is sort of stabilizing here, and Europe is beginning to hike a little more than the US. So we know from the equation that the exchange rate will fall. That is, it will drop here. So the dollar is depreciating. Why is the dollar depreciating? AUDIENCE: It's the same mechanism that you described previously, except with i star instead. RICARDO CABALLERO: Yeah, that's correct. I mean, the issue here in terms of the economics is that-- remember, the argument I'm giving you doesn't need to start with the same interest rates. It's just simpler to start from the same interest rates. But suppose we start with the same interest rates, and now we increase this one. That means the foreign bond is paying a higher interest rate than the domestic bond. I need to equalize the expected returns. The only way I can do that is by having an expected appreciation of the dollar. Since we fixed the expected exchange rate here, the only way I can give you an expected appreciation of the dollar is by depreciating the dollar today. So this is the same mechanism. The logic is symmetric. That's the mechanism. Now, it is true that in the very short run, when i star goes up and i doesn't move, lots of people go and buy foreign bonds. And that produces demand for euros and so on. But that's very quick. Machines do it for you now. So it happens very quickly. So this equation shows you what happens after all that mess has already cleared, which happens in milliseconds. OK, what if I change the expected exchange rate? So again, I'm fixing it, but I can move it around. I'm treating it as a parameter. When I say I fix it, I just don't want to endogenize it. I don't want to make it another endogenous variable. So we start with the same situation we had before. Now the expected exchange rate goes up. Well, from the equation it's very clear. The current exchange rate immediately rises-- one for one, in fact, if these two interest rates are the same. If I move the expected exchange rate up, then the current exchange rate immediately jumps. So if we expect the dollar to appreciate in the future, then it appreciates today. Why is that? Expectations are very powerful in financial assets in general. This is the first time you see this here, and we'll talk a lot more about it next week. But you can see it here. So I move the expected exchange rate up. We know today the dollar is at 0.9 euros per dollar. Well, suppose I expect 1 euro per dollar in the next period. What will happen to the exchange rate today? Well, it jumps today to 1. Why is that? AUDIENCE: The dollar will be more expensive to buy later. So people are-- RICARDO CABALLERO: OK. That's your friend, the trader there.
Yes, that's true. That's true. What does that mean, though? It is true. It's more expensive. But why did you want to buy it to start with? I mean, who cares that something is more expensive if you are not planning to buy it? AUDIENCE: Because the current price also has to take into account the future price. RICARDO CABALLERO: That's what the equation says, yes. AUDIENCE: Because its value at the present moment takes into account some of its value in the future. RICARDO CABALLERO: This is an arbitrage type relationship. And what I suggest is, whenever you come across an arbitrage type argument, you ask the question: well, suppose not. Suppose this didn't happen. What would then happen? What would look odd? For almost any arbitrage, that's a good way of thinking about it. The equation tells me that the exchange rate has to jump right away. But suppose not. What goes wrong? I think that's the easiest way to think about any of these asset prices in general, by the way. Suppose the expected exchange rate goes up. The interest rates haven't changed. And the exchange rate today doesn't move. What happens then? Remember, we're in a situation where both interest rates are the same. Now the expected exchange rate went up by 10%, say. And the current exchange rate hasn't moved. I'm sure between the two of you, you can design this trade. What do you do? AUDIENCE: Everyone would buy foreign bonds in the next period. And no one would in the first period. RICARDO CABALLERO: No, no. What do you do today? Suppose you're a trader. And now you see, whoops, the dollar will appreciate 10%. The interest rates are the same. And the exchange rate is not moving today. What do you do? Which bond do you buy? AUDIENCE: Buy a lot of American bonds. RICARDO CABALLERO: Of course, because you have a 10% expected capital gain from buying that bond if the exchange rate doesn't move today. The two bonds are paying the same interest rate. And now I tell you, well, one is going to appreciate by 10% relative to the other. So clearly, you go massively short the foreign bond. And you go very long the US bond. That's what you do. We all want to do the same. So it happens very quickly. And the exchange rate appreciates today up to the point at which that incentive is no longer there. And in this particular case, if the interest rates are the same, that will happen only if the exchange rate jumps today by exactly the same amount as the change in the expected future value of the dollar. Think about this. Play with these things. I know it can be confusing. And I always start with: let me move something. The equation tells me this is what has to happen to the exchange rate. Well, suppose that didn't happen to the exchange rate. Then you say, well, then I clearly invest in this bond. This dominates the other one. Well, that condition tells you: no, in equilibrium you have to be indifferent. So the only thing that can move is the exchange rate. And the exchange rate has to move until you are indifferent again, after whatever change you made on the right-hand side. That's the way you need to think about it. So here I'm just plotting this relationship in the space of the exchange rate on the x-axis and the domestic interest rate here. So that's an upward sloping relationship. You can see here that if I move the interest rate up, the exchange rate goes up.
So here I'm just plotting this relationship, with the exchange rate on the x-axis and the domestic interest rate on the y-axis. As I move the interest rate up, the exchange rate goes up: a positive, upward-sloping relationship. I can read it the other way around, too: as I move the exchange rate up, the domestic interest rate has to go up. I'm taking the foreign interest rate and the expected exchange rate as parameters; if I take those two as parameters, I have a positive relationship between the exchange rate and the domestic interest rate. So what I'm plotting is the UIP, the uncovered interest parity condition. Notice this point here is interesting. It tells you that when the domestic interest rate, i, is equal to the international interest rate, the exchange rate has to be equal to the expected exchange rate-- which is the question I asked before. Remember, I asked: suppose we start with an interest rate equal to the international interest rate; what is the exchange rate? And you said it has to be equal to the expected exchange rate. That's this point here. If the domestic interest rate is above the international interest rate, then the exchange rate today has to be above the expected exchange rate, because that gives you an expected depreciation of the currency, which compensates for the fact that the domestic bond is paying a higher interest rate than the international bond. Conversely, if the domestic bond is paying a lower interest rate, then the exchange rate today is very depreciated, because you have to expect it to appreciate in order to compensate for the interest rate differential. Is this intuitive yet? Probably not. But this requires practice, I tell you. OK, so now we have an equation for the exchange rate, at least. So I can go back to my IS equation in the open economy, and since I have an equation for the exchange rate, I replace it. This is nice because I get two new parameters, the expected exchange rate and the international interest rate. But the exchange rate is also a function of the interest rate. So at this moment, after I solve out for the exchange rate, I really have one equation and two unknowns. The two unknowns are output and the domestic interest rate; all the rest are parameters. That's the same situation we were in around lecture 3. So we need an extra equation. The extra equation was monetary policy, the LM. We're going to do exactly the same here. The LM is the same: the domestic central bank sets the interest rate. So now I'm set. Now we have the IS-LM model in the open economy. This is the Mundell-Fleming model. That's what the Mundell-Fleming model is: a more complicated IS with a UIP-driven exchange rate, together with an LM that is the same as in the closed economy.
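A compact way to write the model just described-- a sketch in my notation, where NX is net exports, the UIP-driven exchange rate has been substituted into the IS, and with sticky prices the nominal and real exchange rates move together:

$$\text{IS:}\quad Y = C(Y-T) + I(Y,i) + G + NX\!\left(Y,\;Y^*,\;\frac{1+i}{1+i^*}\,E^e\right)$$
$$\text{LM:}\quad i = \bar{i}$$

The interest rate now enters the IS twice: through investment, as before, and through the exchange rate inside net exports, which is why the open-economy IS is "thicker."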
One thing we knew already from the previous lecture is that we have a smaller multiplier in the open economy, because imports also respond to output. We have a new parameter. But now we also know that an increase in the interest rate-- monetary policy in the open economy-- has two effects. It used to have only one: remember, it affected domestic investment. An increase in the interest rate would lead to a reduction in aggregate demand because investment would fall. That's the way monetary policy works in the closed economy-- through that channel. Now we have a second channel. So when the interest rate goes up, it's contractionary for two reasons. One, for the reason we had before: investment falls. But there is a second reason it's contractionary. What is that second reason? Yeah. AUDIENCE: It raises the exchange rate. RICARDO CABALLERO: Because it appreciates the exchange rate. And when you appreciate the exchange rate, net exports decline: more domestic consumption is diverted to foreign goods, and less foreign demand is allocated to our exports. So that's the second channel. And the smaller the economy, the more important this term is and the more powerful that channel is. The US cares very little about this effect; most other economies care a lot about it, because the US is a relatively closed economy, believe it or not. So this is the key diagram of the Mundell-Fleming model. This thing here is our old IS-LM model. It's just that this IS is a little thicker now-- it has net exports in there and so on-- but it looks exactly the same. That is, it plots equilibrium in the financial and goods markets: the combinations of output and the domestic interest rate that are consistent with equilibrium in both markets. This is the IS, all the combinations of domestic output and the domestic interest rate consistent with equilibrium in the goods market. This is the interest rate consistent with equilibrium in financial markets-- that's what the Fed does in the US. That point is where both markets are in equilibrium. In the US, the interest rate is set by the Fed, not by the ECB. The Fed sets the interest rate; that gives us some equilibrium output. And then we can go to the UIP condition, which I'm plotting here, and figure out what the exchange rate is. Because for this interest rate, there's going to be some point on the UIP, and that tells me exactly what the exchange rate is. So with this set of diagrams, I can determine the interest rate, output, and the exchange rate. I can study the effects of different policies on output, on the interest rate-- that's the policy itself-- and on the exchange rate. That's the new thing I can explain: I can do a little bit of asset pricing here and explain the behavior of the exchange rate as well. So this diagram-- you need to control it very, very well, and I'm going to play with it quite a bit. Monetary policy. Let's do monetary policy. Suppose that for whatever reason the domestic central bank decides to hike the interest rate. Suppose the economy was overheating-- output too high relative to the natural rate of output, the typical reason why you need to raise interest rates. So the domestic interest rate goes up. Well, as before, that's going to be contractionary. What happens to the exchange rate? I know the interest rate went up. I look at my UIP: for the higher interest rate, the current exchange rate has to go up relative to where it was. When I increase the interest rate from here to there, my exchange rate has to appreciate. So a contractionary domestic monetary policy will lead to a contraction in output, which is what we expect from monetary policy, but it will also lead to an appreciation of the currency. Why is that? That's what we just discussed-- UIP.
If I move the domestic interest rate and the rest of the world does not follow me-- we move interest rates, they don't-- then I need to compensate for this increase in the interest rate differential. And the compensation will come through an expected capital loss on the currency. If I appreciate the currency more today, since I haven't moved the expected exchange rate, I expect a larger loss on the currency side. That's what has happened here. So that's what is behind the appreciation. And, of course, the appreciation is already built in here, which is what makes monetary policy more powerful than in the closed economy: you also get the net export channel. But that's built in here. OK, here all that I did is exactly the same as what we were doing in the last 30 minutes; I just used this UIP. For whatever domestic reason, I need to raise the interest rate and have contractionary monetary policy. Well, one of the effects you're going to get in an open economy is that your currency will tend to appreciate. OK, good. What about fiscal policy? Well, if the central bank doesn't follow, and you have an expansionary fiscal policy, that will increase output. It has no effect on the interest rate, and therefore it has absolutely no effect on the exchange rate. So an expansionary fiscal policy which is accommodated by the Fed-- meaning the interest rate is kept at the same level-- does not lead to an appreciation of the currency. It doesn't move the exchange rate; it has no implication for the exchange rate. Now, what about this change in output? Is it larger or smaller than the one we found in lecture 3 or 4? AUDIENCE: It's going to be smaller. RICARDO CABALLERO: Smaller. Why? AUDIENCE: Because part of the increase in [INAUDIBLE] falls on [INAUDIBLE]. RICARDO CABALLERO: Exactly-- it goes to imports. Perfect. OK, good. So this is smaller than it was in the closed economy, and it has no impact on the exchange rate. The UIP has nothing to do with government expenditure. It's all about financial markets-- about expected returns, things like that. So unless the fiscal policy somehow affects the interest rate, there is no effect. What may happen, for example, is that the treasury becomes very expansionary, and output becomes too large for what is consistent with a zero output gap or with no inflation. Then the Fed may react and raise the interest rate, and that will lead to an appreciation of the exchange rate and so on. That's the reason why, in practice, when countries have expansionary fiscal packages, the currency tends to appreciate: investors expect the central bank to react and raise interest rates. But if the Fed says, no, we need that fiscal expansion, I'm not going to move the interest rate, then the exchange rate won't move. So let's use this model a little more and look at other shocks. Suppose that we increase the expected exchange rate. What moves in this diagram? Does the LM move? No, the LM is controlled by the domestic central bank; it doesn't move. Does the IS move? When I ask whether a curve moves, you should always fix something. So you say, OK, let me fix the interest rate-- pick a point like this one. And now I ask: what happens to output now that I have moved the expected exchange rate? If I get the same output back, it means the IS doesn't move.
If I get a different equilibrium output, then I know that the IS did move. So what is the answer? If the domestic interest rate doesn't move, the foreign interest rate doesn't move, and the expected exchange rate goes up, what happens to the current exchange rate? It appreciates. And what happens when there is an appreciation? Net exports decline. So this movement will move the IS to the left. That's the first effect. What about the UIP condition-- will it move or not? Remember, I gave you a clue: we are taking these two as parameters, so if I move a parameter, most likely I will move the curve. But in which direction? To the right. Yes, because for the same interest rate, I now need the current exchange rate to move one for one with the expected exchange rate. This was the exchange rate before, and now the expected exchange rate has moved up. In order not to generate an expected capital gain or loss, I have to move the current exchange rate by the same amount. So that curve shifts to the right. What if I move foreign output down? What happens? Which curve moves? Well, foreign output is not a parameter in the LM, so the LM is not moving. It's not a parameter in the UIP either, so that one is not moving. Only one can move: the IS. Where? AUDIENCE: It would move to the left. RICARDO CABALLERO: It will move to the left, because net exports will decline. For any given level of the interest rate, we now have lower net exports, and therefore the IS moves to the left and output falls. But nothing else moves here, unless the central bank reacts to that. And it may well be the case that you want to react: if the whole world goes into recession, the US is very likely to lower interest rates, because a world recession is very contractionary. And when the US goes into recession, everyone in the rest of the world wants to cut interest rates, because the US is a big player-- it really drags everyone down. The last one-- and I'm going to repeat this in the next lecture-- is: what happens if i star, the foreign interest rate, moves up? Well, the LM doesn't move. The UIP curve will move, because i star was a parameter there. Which way? Think about what happens here. If the foreign interest rate goes up, then at any given domestic interest rate, the domestic bond is now doing worse than before. So I need to depreciate the exchange rate today in order to generate an expected appreciation. That means this curve moves to the left-- it moves to the left because I have to expect an appreciation to compensate for the interest rate differential. What about the IS curve? We'll solve that in the next lecture. Very good.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_19_The_Goods_Market_in_the_Open_Economy.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: So now we're going to go back to the first part of the course, in the sense that we're going to go back to the short term. We're going to essentially do the IS-LM model again, but now in the context of an open economy. But before I get into the first model of this part of the course, I want to finish the previous lecture, in which I was introducing the concept of openness and the key relative prices in an open economy. We stopped after discussing this point: one of the things that opening an economy means is that now you can buy goods both at home and abroad. So you need to be able to compare these two kinds of goods, controlling for quality and all those differences. At the end of the day, you want some sense of relative prices-- which good is more expensive than the other. And we said that for that, we use what is called the real exchange rate. We defined the real exchange rate as, essentially, the relative price of goods at home versus abroad. But we have to put them in a common currency. That's the reason we couldn't just directly compare prices at home with prices abroad: we have to convert the prices at home into the unit of account of the other country, and then we can compare the two. That's what we call the real exchange rate. When that thing goes up, we call it a real appreciation of the local currency, of the local economy. And it means that domestic goods become more expensive relative to international goods. When that epsilon goes down, we say the real exchange rate has depreciated, and that means domestic goods become cheaper relative to foreign goods. So that's a key concept of what it means to open an economy: this price is very important for deciding whether you, and foreigners, are going to buy goods abroad or domestically. The second concept of openness that we're going to explore in this course is openness in the capital account, in financial markets. And openness in financial markets means something very similar: now, when you have wealth to invest-- financially invest, not physical or real investment-- you can decide whether to invest in domestic assets or foreign assets. Later on we're going to talk about equity, but for now, bonds. So suppose you have a domestic bond and a foreign bond; you can decide whether to invest in one or the other. Now, to make that comparison, it's not enough to have the current exchange rate, because it doesn't mean much if I tell you that the British bond is more expensive than a domestic dollar bond. What you really need, in order to decide where to invest, is some sense of the expected relative return of these two things. Do I expect to make more money in the dollar bond or in the pound bond? That's the comparison I need to be able to make. There are also risk considerations and so on that we're not going to discuss in this course. But the very basic comparison, when you're talking about financial investment, is not the value of things, but the return you expect to get in one or the other. So this is what you need to do. Suppose you have a dollar to invest and you have two options. One is you buy a dollar bond. The dollar bond gives you an interest rate of i.
So you know-- say it's 5% nowadays, more or less-- that if you have a dollar today and you invest it in a dollar bond, then at the end of the year you're going to get $0.05 on that dollar. That's what you get if you invest in the dollar bond. Now, because we are open to financial markets, you're given the option to also invest in a British bond-- a pound bond that is as safe as the US bond, say. We're going to call that the UK bond. And suppose the UK bond is offering a 10% interest rate: i star is 10%. Then I ask you the question: does that mean that obviously, since you want to compare returns, you should be investing in the pound bond, the UK bond, rather than the US bond? So suppose this is 5% and that's 10%, and I ask you: where do you want to invest your money, in the US bond or in the UK bond? One pays you 10%, the other pays you 5%. AUDIENCE: [INAUDIBLE] as well? RICARDO CABALLERO: Exactly. Why do I need to know that? AUDIENCE: [INAUDIBLE] RICARDO CABALLERO: Excellent. So this is not enough information for me, because it may happen that what I gain in terms of return, I lose on the currency exposure. So let's see how that can happen. How would I do this? I have $1 of wealth that I want to invest, and suppose I want to go the UK bond route. The first thing I have to do is convert the dollar into pounds today, to buy my UK bond, which will be in pounds. So suppose I get 0.8 pounds per dollar; then with my dollar I can invest 0.8 pounds in UK bonds. That will give me 0.8 times (1 plus i star)-- 0.8 times 1.1-- tomorrow. But what are the units of this? What do I get next year? I invested 0.8 pounds and I get 0.8 times 1.1 pounds back. So what I get next year is pounds. I cannot compare a return in dollars-- which was $1.05-- with a return in pounds. I need to convert those future pounds into future dollars. And the best I can do here-- we're not going to open forward markets or anything-- is divide by the exchange rate next year to convert from pounds to dollars. I don't know next year's exchange rate, so the best I can do is use the expected exchange rate. So I divide by the expected exchange rate next year, and then I get my return in dollars from having gone the UK bond route. So what I need to compare, in order to decide where to put my money, is that return here versus that one there. They're in the same unit of account: I invest the same $1 today and I get dollars tomorrow. So now I can compare. And as you correctly pointed out, this requires that you think about what the exchange rate is likely to be tomorrow. So, for example, when I said suppose i is 5% and i star, the interest rate on the UK bond, is 10%-- is it obvious that I should invest in the UK bond? The answer is no, not so fast. Because I know I'm going to make 5% more in terms of the return on the bond, but when I convert it back to dollars, I may lose all that gain because the pound has depreciated vis-a-vis the dollar.
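A minimal numeric sketch of the two routes, using the lecture's numbers plus an assumed expected exchange rate of 0.84 pounds per dollar (roughly a 5% expected dollar appreciation):

```python
i, i_star = 0.05, 0.10        # US and UK bond interest rates
E_today = 0.80                # pounds per dollar today
E_expected = 0.84             # assumed expected rate next year

# Route 1: $1 into the US bond.
dollars_us = 1 + i                          # $1.05
# Route 2: $1 -> pounds -> UK bond -> back to dollars at the expected rate.
pounds_next_year = E_today * (1 + i_star)   # 0.88 pounds
dollars_uk = pounds_next_year / E_expected  # ~$1.048

print(dollars_us, round(dollars_uk, 4))
```

Both routes pay roughly 5% in dollars: the UK bond's extra 5% coupon is approximately eaten by the expected 5% appreciation of the dollar.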
In particular, if I expect the pound to depreciate relative to the dollar-- equivalently, the dollar to appreciate relative to the pound-- by 5%, then I'm indifferent. In one case, going the US bond route, I get 5% from this year to the next. Going the UK bond route, I get 10% in the bond's return minus 5% in the capital loss due to the appreciation of the dollar. So net, I get 5% as well. That's what appears here. So what I did here says: if markets are very integrated and function fairly well, those two returns should be more or less similar in equilibrium, because prices are going to adjust, exchange rates are going to adjust, and so on. These two ways of investing are more or less the same, and I'm going to take the extreme assumption that they are exactly the same, so that this holds: in equilibrium, these two things are going to be equal. That's called-- and it's a very important concept in international finance-- the uncovered interest parity condition. Don't ask me why it's "uncovered," but it's the interest parity condition; this one in particular is called the uncovered interest parity condition. It just tells you that in equilibrium you have to be indifferent between investing in one bond or the other. If you do a little algebra of the kind we have done in the past-- approximations like 1/(1 + i) being approximately equal to 1 minus i, and 1/(1 + i star) approximately equal to 1 minus i star, the kind of Taylor expansion that works when these terms are small-- then you can write this as the expression here. And it says exactly what I just said in words. It says: look, if the interest rate on the US bond is lower than the UK bond's interest rate, that's OK, in the sense that we can be indifferent between the two, as long as you're expecting an appreciation of the dollar equivalent to the difference in the two interest rates. So that's the interest parity condition: in equilibrium, the two returns are the same once you adjust for the expected appreciation or depreciation of the currency. In my example before, we had an interest rate of 5% here and 10% there. The only way that's an equilibrium is if we also expect the dollar to appreciate by 5%. And dollar appreciation, remember, is this guy going up. So if this is 5% and that's 10%, that's fine, because they both give me the same return-- in expectation, at least. I can do the comparison in dollars: I get 5% either way, directly in the US bond, or through the UK bond, where I get 10% and then lose 5% on the currency. Or I can do the comparison in pounds: I get 10% in pounds directly in the UK bond, and if I go the US route, I get 5% on the bond plus 5% from the currency appreciation, which gives me 10%. Key concept. Anyway, these are the two senses of openness we can have-- openness in the goods market and openness in financial markets-- and these are the key relative prices and conditions that are going to equilibrate both markets: the real exchange rate in the goods market, and the uncovered interest parity condition in financial markets.
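Written out, the condition and its approximation-- my reconstruction of the slide's algebra, using the convention that E is foreign currency per dollar:

$$1 + i_t = \frac{E_t}{E^e_{t+1}}\,(1 + i^*_t) \quad\Longrightarrow\quad i_t \approx i^*_t - \frac{E^e_{t+1} - E_t}{E_t}$$

The approximation uses 1/(1 + x) being approximately 1 minus x for small x. The right-hand side says the domestic rate can sit below the foreign rate only if the domestic currency is expected to appreciate by the difference.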
I want to shut down this financial part of openness for a lecture or so and focus now on the goods market opening only. Then I'll come back to this; I just wanted to show you the two senses of openness. So let's forget for a bit about financial openness and focus on opening the goods market to international trade. That means we're going to have imports and exports floating around. We'll go back to our IS-LM model-- actually, to our goods-market-only model, the very first model we saw in this course-- and bring back a couple of terms that we shut down there. Now, something we didn't need to worry about before, but which we'll have to worry about a lot here, is that there is a distinction between the demand for domestic goods and the domestic demand for goods. This can be tricky, but there is a difference. The latter is what residents-- US residents: households, firms, the government-- demand in terms of goods. The former is what those same agents, plus the rest of the world, demand of domestically-produced goods. That's the distinction. When the economy was closed, they were the same; now they're not. Domestic demand remains the same as before: whatever households demand-- consumption-- plus firms' investment, plus government expenditure. That's the same as in the closed economy; nothing has changed. It's a function of the same behavioral functions we had there, and the only two behavioral functions we had were the consumption function and the investment function, remember? So that remains the same. What does change is that this is no longer what determines the demand for domestically-produced goods. And remember, that's key in the short run, because this is a Keynesian model with very sticky prices: demand determines output. So if we're going to determine domestic production from demand, we'd better be very careful about what the demand for domestically-produced goods is. Domestic demand covers both domestically-produced and foreign-produced goods. Some of it will be satisfied by imports-- that's not demand for domestic production, and therefore it will not help determine equilibrium domestic output. So we have a new concept: demand for domestic goods. Demand for domestic goods is the domestic demand for goods-- the thing we had in the closed economy-- minus the part of demand that is satisfied by imports. So, minus imports, divided by the exchange rate, because imports may be priced in euros, say, and I have to convert them into dollars. That's the only reason for the division; don't worry about it for now. I have to subtract imports because that's demand by US residents that doesn't go to domestically-produced goods. It's demand for BMWs, whatever. It's not going to affect the demand for Ford cars, and therefore won't affect the production of Ford cars, because it's not demand for them. But against that, we also have a component of demand for domestically-produced goods that we didn't have before: what foreigners demand from the US-- probably not Fords so much, at least. Those are our exports.
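In symbols, the distinction is the following (my notation, following the lecture's terms, with epsilon the real exchange rate):

$$\underbrace{C + I + G}_{\text{domestic demand for goods}} \qquad\text{vs.}\qquad Z \;=\; \underbrace{C + I + G \;-\; \frac{IM}{\varepsilon} \;+\; X}_{\text{demand for domestic goods}}$$

Subtract the part of domestic demand satisfied by imports (converted into domestic units by dividing by epsilon), and add the foreign demand for our goods.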
So part of the demand that US production receives is not due to residents; it's due to foreigners importing US goods. Apple sells a lot of phones to the rest of the world-- that's foreign demand for domestically-produced goods. That's what I call X, exports. So this is our new key concept here, Z. And now we need to understand two more terms: exports, which will be a function, and imports, which will also be a function. So let me introduce that. Exports-- we're going to assume, to simplify things, but it's a sensible behavioral assumption-- are increasing in foreign output. That's what Y star means: the rest of the world. Emerging markets, the commodity-producing economies, are very excited today about the recovery in China. China is reopening, so there's a big boom domestically, and that's great news for commodity producers, because it will increase China's demand for goods produced around the world, in particular in commodity-producing economies. That's what this is capturing: if an important trading partner's output goes up, its income goes up, and it's going to consume more of everything-- its domestic goods, but also the goods it imports, which are our exports. That's why exports are increasing in Y star. Exports are decreasing in the real exchange rate. That's a sensible assumption. Why do you think US exports are decreasing in the real exchange rate? [INAUDIBLE]? AUDIENCE: Makes it [INAUDIBLE] for foreign customers to buy? RICARDO CABALLERO: Exactly. Because US goods become more expensive relative to foreign goods-- that's what a real exchange rate appreciation is-- and therefore there is less demand for US goods. What about imports? Well, imports are the dual of the export function: our imports are what the other country sees as its exports. Our imports will tend to go up when domestic output goes up, because if domestic income goes up, domestic consumers will consume more goods from home, but also more goods from abroad. They're going to scale up their consumption and consume from both places. So imports are an increasing function of domestic output. What about the real exchange rate here? Imports are an increasing function of the real exchange rate. Why? It's the same argument as for exports, seen from the other side. Remember why we use this epsilon to decide where to buy our goods: if epsilon goes up, our goods become more expensive. If our goods become more expensive, for any given level of domestic consumption, where do you think you'll buy your goods? You'll buy more abroad, because foreign goods are cheaper. So imports are an increasing function of epsilon. Good. Any question about that? Because these are the only new behavioral equations we're going to have for this model. No? What I'm going to do next is start from the same model we had in lecture 2 or 3, add these terms, and see how things change. Good? Good.
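The two new behavioral functions, with the signs just discussed-- a sketch of the notation, with plus and minus marking the direction of each effect:

$$X = X(\underset{+}{Y^*},\, \underset{-}{\varepsilon}), \qquad IM = IM(\underset{+}{Y},\, \underset{+}{\varepsilon})$$

Exports rise with foreign income and fall when an appreciation makes our goods expensive; imports rise with domestic income and rise when an appreciation makes foreign goods cheap.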
So let's do that. Remember, the very first diagram we had in this class was this one: just domestic demand, C plus I plus G. It's an increasing function here because consumption and investment are increasing functions of output. In the closed economy, what we did is we drew a 45-degree line and said: in equilibrium, output equals demand. The intersection of this curve with the 45-degree line gave us our equilibrium output. We need to change things a little bit. We're going to put the 45-degree line in the next slide, but first-- this is not the relevant demand for domestically-produced goods. We need to go from here to the demand that is relevant for domestic producers. The first thing we need to do is subtract imports, because part of that demand will go to foreign goods. That's what I'm doing here: from this domestic demand, I'm subtracting the part that goes to foreign goods, because that is not demand for domestically-produced goods. Obviously, this is a shift down, but there is also a rotation. Why is that? We're subtracting imports from domestic demand, so that moves us down here. But it's not a parallel shift-- this curve becomes flatter. In other words, the gap is larger for high levels of income, or output, than for low levels. Why is that? AUDIENCE: [INAUDIBLE] dependent on output? RICARDO CABALLERO: Exactly. It's because there is a positive marginal propensity to import, so you'll import more if output is higher. That's why we get this flatter curve. Notice this also means-- well, let me get to the end of these diagrams and then I'll come back to this. One step more. This is still not what I need to intersect with my 45-degree line, because it's not the demand that domestic producers will face. We still have to add the demand that comes from foreigners: exports. So to this AA function, I add exports. And exports is a parallel shift, because it doesn't depend on domestic output; it depends on foreign output. Foreign output is going to be a parameter of this curve, but it doesn't change the slope. So we went from the DD curve to the new curve that is relevant for equilibrium domestic output, the ZZ curve. OK? Now notice one thing about this ZZ curve relative to DD. What is the most obvious difference between these two curves? This is the one we used in lecture 2 or 3, and this is the one we're going to use now, the ZZ. It's flatter. Why does that matter? A flatter curve-- a lower slope-- means a lower multiplier. Why is the multiplier lower in an open economy? AUDIENCE: Because part of the demand falls on foreign goods. RICARDO CABALLERO: Exactly. Remember how we got the multiplier: income went up, consumption went up, that increased income again, and so on and so forth. But if part of that increase in consumption goes to foreign goods, it doesn't show up as demand for domestically-produced goods, and therefore there is less of a multiplier. That's one characteristic of an open economy: the multipliers are smaller.
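A minimal linear sketch of why the flatter ZZ means a smaller multiplier (parameter values assumed for illustration; c1 is the marginal propensity to consume, m1 the marginal propensity to import):

```python
c1, m1 = 0.6, 0.2   # assumed marginal propensities

closed_multiplier = 1 / (1 - c1)       # 2.5
open_multiplier = 1 / (1 - c1 + m1)    # ~1.67

print(closed_multiplier, round(open_multiplier, 2))
# Each round of extra income leaks partly into imports (m1), so less
# of it returns as demand for domestically produced goods.
```

The same logic holds if investment also responds to output; imports simply add a leakage term to the denominator.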
The other distinction is that-- you don't see it here, but-- we have more parameters. In particular, a very important parameter here is Y star. We didn't worry about what the income in Germany was when we looked at the IS-LM, the closed-economy model. Now we worry about what the income of our main trading partners is. There's an extra parameter there. OK, good. Now, we still haven't found equilibrium output, but there is a point here that is already interesting, which is this one. What do I know about this point? Well, at this point, the domestic demand for goods is the same as the demand for domestically-produced goods. And that also means the trade balance is zero: at that point, exports are exactly equal to imports, so net exports are equal to 0. That's what I'm plotting here, actually. This is the net export function-- simply exports minus imports divided by the exchange rate. It's a decreasing function of output. Why is that? Remember, this is exports minus imports divided by the exchange rate, and we're not moving the exchange rate for now. On this side, exports exceed imports, so you have a trade surplus; on that side, imports exceed exports, so you have a trade deficit. Why is it downward sloping? AUDIENCE: Because imports grow when output grows, but exports don't. RICARDO CABALLERO: Exactly. Exports are not a function of domestic output-- they're a function of foreign output-- while imports are an increasing function of domestic output, and net exports are exports minus imports. That's why this is decreasing. And this point here happens to be where the two are exactly balanced. The trade balance happens to be zero at the point where DD equals ZZ. But there's no reason why equilibrium output should be at that level, I'm saying-- that's just the point where that happens. OK, now we're going to find equilibrium output. To find it, I'm going to erase all these extra curves and just keep the ZZ, because that's the demand for domestically-produced goods. And I'm doing the short run here, so I know that domestic production-- that is, Y-- is going to be equal to the demand for domestic goods. It's a demand-determined model; that's what the short run is all about. So erase all the other curves and keep the ZZ curve. There you are. Now I add my 45-degree line, because in the short run, equilibrium output is equal to aggregate demand. Aggregate demand for what? For domestically-produced goods. That's the reason I'm using ZZ, not DD. Then you do exactly the same as before: that's our equilibrium output. And here you can do all sorts of experiments, and you're going to get the same types of results as before-- a multiplier, a smaller one, but still a multiplier, and so on. Now, in this example, it happens that at this equilibrium output the country has a trade deficit. I just made that up. The equilibrium condition is output equal to Z: output equal to domestic demand plus exports minus imports. And the net export line is just that last term plotted separately. Equilibrium is Y equal to Z; it is not net exports equal to 0. Is this clear? This is the key diagram of this part of the course, so you need to understand it. Go over it. Play with it.
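The two conditions on the slide, in symbols (my notation): equilibrium pins down Y, while net exports are read off separately:

$$Y = Z = C + I + G - \frac{IM}{\varepsilon} + X, \qquad NX = X - \frac{IM}{\varepsilon}$$

Nothing forces NX to be zero at the equilibrium Y; in the example drawn, equilibrium happens to land where NX is negative, a trade deficit.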
Think about what is a parameter in there, and so on. I'm going to do a bit of that now, but make sure you understand this. So let's do the things we did in the closed economy. Suppose you have a fiscal expansion. What did we do with a fiscal expansion in lecture 2 or 3? Well, it moves the ZZ curve up. Output goes up, and then there's a multiplier, so output goes up by more than the initial increase in government expenditure. That's what we had before. It goes up by more, but not by as much as it did in the closed economy. So the increase in output will be more than the increase in government expenditure, but less than it would have been had the economy been closed. Why is that last part true? Well, you can read it here: part of that extra demand for consumption goes to foreign goods, so it does not come back as demand for domestic production. And that's reflected in the trade deficit. In this particular example, we started from balanced trade-- net exports equal to zero-- and we ended up with a trade deficit. That trade deficit exists for exactly the same reason we got a smaller multiplier: part of the extra demand created by the aggregate demand effect of the additional expenditure went to foreign goods. Good. So do the same things we did in the closed economy and practice here: increase taxes, do things like that, increase [INAUDIBLE] and see what happens to equilibrium output. Qualitatively it will be exactly the same as in the closed economy, except the effects will be smaller; but you get something new, which is what happens to the trade deficit as a result. Here is a shock we couldn't do in the closed-economy case: what happens if foreign demand goes up? That's what I was saying-- everyone is jubilant in the emerging-market world because China's output is going up. What are all those economists thinking? They say: China's output is going up, which means China is going to import a lot more from us. That is, our exports are going to go up because Chinese consumption is going up. Well, exports going up means our ZZ curve moves up. So what do you get? An increase in exports at any given level of income means higher output immediately. And higher output also has a multiplier-- smaller, but a multiplier-- so at the end of the day you get higher equilibrium output. So it's great news. That's why they're so happy. It's great news that China is expanding, because that also leads to an expansion in the rest of the world. In that sense, if China decides to do an expansionary fiscal policy, it also expands US output, or-- more relevant for this example-- Chilean output. Chile could have done it with its own fiscal policy; that would also have expanded output. But it's wonderful that China does it instead, because that expands output as well, with one advantage-- two advantages, actually. There's one you can see here, and the sketch after this paragraph works it out. Why would they prefer that China does the expansion rather than doing it themselves? What looks better here?
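A minimal linear sketch contrasting the two expansions (all parameter values assumed; the model is Y = c0 + c1*Y + G + x1*Ystar - m1*Y, a stripped-down version of the lecture's ZZ):

```python
c0, c1, m1, x1 = 1.0, 0.6, 0.2, 0.3   # assumed parameters
G, Ystar = 2.0, 10.0

def solve(G, Ystar):
    Y = (c0 + G + x1 * Ystar) / (1 - c1 + m1)  # equilibrium output
    NX = x1 * Ystar - m1 * Y                   # net exports
    return Y, NX

Y0, NX0 = solve(G, Ystar)
Yg, NXg = solve(G + 1.0, Ystar)           # domestic fiscal expansion
Yf, NXf = solve(G, Ystar + 1.0 / x1)      # foreign boom with the same impulse

print(round(Yg - Y0, 2), round(NXg - NX0, 2))   # 1.67, -0.33: deficit worsens
print(round(Yf - Y0, 2), round(NXf - NX0, 2))   # 1.67, +0.67: surplus emerges
```

Same output gain either way, but the export-driven expansion improves the trade balance while the fiscal one worsens it-- which is exactly the comparison made next.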
Assume the two shocks are of comparable size in terms of their impact in the top diagram. Suppose you generate the same increase in output from either policy: a domestic expansion in G, which is what we did in the previous slide, or China going into a boom and importing a lot-- that is, us exporting a lot to them. Same increase in output; what looks better? Not a little better-- it can look a lot better. There are two things, and one of them is in this diagram. Remember, if I increase government expenditure, the net export function doesn't move, and I end up with higher output but a bigger trade deficit. In this case, it's export-driven, so it's the opposite: if I move Y star up, I move exports up, which means the net export function shifts up. I then lose some of that, because the increase in domestic output goes partly into imports, but at the end of the day, in this case I end up with a trade surplus rather than a trade deficit. That's the reason there's a lot of free riding when you open up the world: you want the other country to do the policies for you, because then you're a lot better off. You get the same increase in output, but you end up with a trade surplus rather than a trade deficit. And there is a second thing I'm not showing you here-- a second big difference between doing it domestically, by increasing government expenditure, and the other country doing it for you and pulling you through exports. What else looks better in the US in this case, relative to the previous slide? AUDIENCE: Do interest rates affect it? RICARDO CABALLERO: That's too sophisticated-- we're still keeping the interest rate constant. Aha? That's even more sophisticated. This is the short run, completely sticky prices; forget all that. The fiscal deficit. In the other case, I need to increase G, so I have a fiscal deficit. Here, I don't need to do that. In fact, in reality, taxes are typically indexed to domestic output, so this probably improves the fiscal deficit in the US. Anyway. The last point I want to talk about is another variable we didn't have in the closed economy: the role of the exchange rate-- what the exchange rate can do. For this, we need to look at the only term that depends on the exchange rate, the net export term: exports minus imports divided by the exchange rate. It's very clear what happens to net exports when we increase Y star-- we did that experiment before; exports increase, so net exports increase. We also know that net exports decrease if domestic output goes up, because imports increase. But from this expression, it's a little ambiguous what happens to net exports when the real exchange rate appreciates, for the following reason. The volume of imports is clearly increasing in the real exchange rate: if US goods become more expensive, you want to import more. That's what we discussed before. But the value may not move the same way, because if you're importing those goods in euros, and the euro is now cheaper for you, you are paying less for each unit you import. Now, we're going to assume from now on that this second, valuation effect is not as strong as the volume effect.
And that's a very realistic assumption, except in the very, very short run. So our assumption will be that net exports decrease when the currency appreciates: if your goods become more expensive, then on net you end up with lower net exports. It simply says that the numerator responds more strongly than the denominator to an appreciation of the exchange rate-- the quantity effect is much more important than the price effect. I'm not going to ask trick questions about this; I'm going to assume it from now on. That's your assumption. If I make a mistake and seem to trick you on this in the quiz, you can charge me the points-- I don't intend to do that. And again, this is a very realistic assumption. Good. So now let's see what happens when the exchange rate moves. Suppose you are in a situation where you want to reduce the trade deficit. The experiment I have here has two components, but let's talk about the first one. Suppose your country has a big trade deficit, you want to reduce it, and the only tool you have is the exchange rate. What would you do? Yes-- you depreciate. You make domestic goods cheaper relative to the rest of the world. Depreciate your currency; since prices are completely sticky, fixed, a nominal depreciation is also a real depreciation, and that will increase net exports. The exchange rate is also a parameter in this net export function, and given my assumption, when you depreciate the exchange rate, the net export function moves up. Now, the problem is that if you do that, it's also going to be expansionary: you had a certain equilibrium level of output, and now there's going to be expenditure switching all around the world towards your goods, so you end up producing more. Suppose you didn't want that extra production-- you just wanted to fix your trade balance. Then you have to offset it. And that's what I've done here. That's very typical: suppose you have a very large trade deficit, but you're OK with the equilibrium level of output you have. Then a typical package is to depreciate your currency but also reduce government expenditure. Because the depreciation of the currency is expansionary: it improves the trade deficit, but it also reallocates expenditure, of both residents and foreigners, towards your goods, which increases demand for your goods and increases output. If I don't want that, I have many ways of offsetting it; one of them is reducing government expenditure. So that's what I've done here. Again, this is a great package. Remember I told you there is one way of increasing output, through government expenditure, and another way, through rising exports, and that the export-driven one looks better because it improves the trade balance. Well, here I do the converse.
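A minimal sketch of the package in the same linear model as before (numbers assumed): a depreciation that shifts net exports up by s, paired with a cut in G of equal size, leaves output unchanged while the trade balance improves by the full s:

```python
c1, m1 = 0.6, 0.2
multiplier = 1 / (1 - c1 + m1)

s = 0.5            # assumed upward shift of net exports from the depreciation
dG = -s            # offsetting fiscal contraction

dY = multiplier * (s + dG)   # 0.0: output unchanged
dNX = s - m1 * dY            # 0.5: the whole depreciation effect survives
print(dY, dNX)
```

Because output doesn't move, imports don't get dragged up, and the depreciation passes one for one into net exports.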
Suppose I don't want to change output, but I want to increase net exports. These two charts, this one and the previous one, tell me exactly how to do it: I use the depreciation for the expansion of output and the improvement in net exports, and I use the other instrument, with the opposite sign-- a decline in G-- to offset the effect on output. That's exactly what I did here. So that's very tempting for a country to do: depreciate the currency, and if you think you need to cool off the economy, use some other domestic instrument to offset it. For a long time, China was accused of doing just this. They were called the mercantilist policies of China. Especially in the late '90s and 2000s, China had massive amounts of exports and the US had a huge trade deficit. It was called the time of the global imbalances: big deficits in the US, big surpluses in China. And the rest of the world kept accusing China of maintaining its currency at artificially low levels with that purpose. I'm not going to take sides on that-- I think the reason the Chinese renminbi was so depreciated was different, but that's another story. The result, though, was that they had very large trade surpluses, and they grew a lot because of it. But it was very export-driven. It was the rest of the world pulling. In fact, the domestic economy in China was saving a lot, so domestic consumption was very low; they had a massive amount of exports, and that's what was pulling their output up. So, open economy: you get new tools. OK, that's all I want to say for today. Summary. Very important: the demand for domestic goods is no longer equal to the domestic demand for goods, because part of the latter goes to foreign goods, and part of the former comes from foreign demand. That's what is new in this part: you have extra components. The other new thing in this part of the course is that these extra components, exports and imports, are functions of things we didn't have before-- foreign output and the exchange rate, in particular. Equilibrium output, again, is determined by domestic output equal to the demand for domestic goods, not to domestic demand. The difference between the two is reflected in the trade balance. So another way of thinking about the trade balance is simply as the difference between the demand for domestic goods and the domestic demand for goods: the trade balance is nothing other than the ZZ curve minus the DD curve. That's net exports. Let me write that down. Remember that we started from domestic demand, which was C plus I plus G. We went to Z equal to domestic demand plus net exports. So net exports are just equal to Z minus domestic demand, ZZ minus DD. That's the reason why, very early on, when I showed you this diagram, the distance between ZZ and DD was net exports-- and when the two are the same, net exports are equal to 0. Also a very important message from this part of the course: a depreciation improves the trade balance and increases the demand for domestic goods. That's why it's called the expenditure switching mechanism.
The expenditures of both domestic residents and foreigners switch towards domestic goods. And this is also very important: for a given exchange rate, changes in aggregate demand in one large country, induced by policy or by the private sector-- in this case, China reopening-- affect other countries through Y star, through exports. So I'm going to stop here. In the next lecture, we'll integrate this with financial openness, and that will get us to what I think is one of the most important models in this course, the Mundell-Fleming model.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_6_ISLM_continued.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: OK, so let's continue with this IS-LM model. Remember, in the previous lecture we set it up-- we built the IS-LM model. We'll go over that very quickly in this lecture, because I think it's very important for you, and then we're going to use it. Eventually, we're going to talk a little about the macroeconomic policy response during the COVID-19 shock, or recession-- all of the above. So the starting point: remember, the first thing we did was construct the IS relation. And the IS relation was just the same as lecture 3, but we spelled out what is inside that investment term we had taken as a constant there. We said, well, it's far more realistic to make investment itself increasing in output, because it's increasing in sales. That doesn't change the analysis we had in lecture 3; all it does is change the slope of the aggregate demand curve, and therefore the multiplier. We could have solved everything in terms of lecture 3. What made this a little different from lecture 3 is that we also said that investment-- real investment; remember, this has nothing to do with financial investment-- is also a decreasing function of the interest rate. And that led to the IS relationship, which essentially says: the IS curve traces all the combinations of output and the interest rate that are consistent with equilibrium in the goods market. That's the definition of the IS. Now, in lecture 3 we were able to determine equilibrium output. Here, we can't. Why can't we? AUDIENCE: Because we're tracing a curve [INAUDIBLE]. RICARDO J. CABALLERO: Yeah-- we have two unknowns, output and the interest rate, and we have only one relationship, the IS curve. So the reason for the LM curve is that we need to pin down the second variable, and that's what the LM will do. And it will be very brutal about it. In the past, you remember, it was some upward-sloping relationship [INAUDIBLE]-- but no, that's not what central banks do today. They just set the interest rate. And if the central bank sets the interest rate, then you can use lecture 3 to pin down equilibrium in the goods market, which is effectively what you're doing here. If you fix the interest rate at whatever level the central bank wants, then you have one curve for one unknown, which is output. And that's exactly what we saw in lecture 3. OK? Good.
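For reference, the IS relation being described (my notation, with the sign of each effect marked):

$$Y = C(Y - T) + I(\underset{+}{Y},\, \underset{-}{i}) + G$$

For a given i, this pins down goods-market equilibrium as in lecture 3; letting i vary traces out the downward-sloping IS curve.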
So, as I said: this is lecture 3, but now the ZZ curve is a little steeper, because investment also responds positively to increases in output. Importantly, we now have an interest rate, which is a shifter of this aggregate demand curve. In particular, if the interest rate goes up, what happens to that curve? I have two candidates here, down or up. AUDIENCE: Goes down? RICARDO J. CABALLERO: Down. Yeah, because investment drops. And you can tell me even more-- by how much. If I tell you the change in the interest rate, and I tell you the sensitivity of investment to the interest rate, you know exactly by how much this thing comes down: the change in the interest rate times the sensitivity of the investment function to the interest rate. That's not the end of the story, as you well know. That's just the initial shift in aggregate demand. The final decline in output will be larger than that initial decline in investment resulting from the higher interest rate. Why is that? Someone-- say the Fed-- raises the interest rate. That immediately reduces investment, because investment is negatively related to the interest rate. That immediately decreases aggregate demand, which immediately decreases output, because in this part of the course output is determined by aggregate demand. Does the adjustment stop there? No-- that's what the multiplier is about. With lower income there is lower consumption, and actually lower investment as a result, and we keep going. So the final decline in output is a lot larger. And by doing that kind of experiment-- moving the interest rate around and seeing what happens to equilibrium output-- we constructed the IS curve. This is exactly the experiment I just described: if you raise the interest rate, aggregate demand comes down, output declines by a lot more than the initial decline in investment because of the multiplier, and eventually we get to another equilibrium output, which is that one.
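In a linear version (my parameterization, not the slide's: C = c0 + c1(Y - T) and I = b0 + b1*Y - b2*i, with c1 + b1 < 1 assumed for stability), the two steps just described are:

$$\Delta Y \;=\; \underbrace{(-b_2\,\Delta i)}_{\text{initial shift}} \times \underbrace{\frac{1}{1 - c_1 - b_1}}_{\text{multiplier}}$$

The initial shift is the interest sensitivity of investment times the rate change; the multiplier then scales it up through the induced rounds of consumption and investment.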
And now for this higher level of taxes, I can play around with the interest rate. I can say, well, what happens if I cut interest rates? Well, if I cut the interest rate, we're going to find another equilibrium, say here-- if I cut the interest rate from here to here, I'm going to find another equilibrium level of output which is consistent with that very same IS. Why is it that very same IS? Well, because I haven't moved taxes again. So the reason I'm repeating this is because, I told you, it's very important to understand what is a movement along the IS versus what shifts the IS. OK? Good. Then, we move to the LM relation. And the LM relation is just equilibrium in the financial markets. This is combinations of output and interest rate that are consistent with equilibrium in financial markets. And we constructed it from money supply equal to money demand in nominal terms. Then, we divided by P, which is not very interesting in this part of the course because P is constant. We are assuming that P is not moving. That's the price of goods and services. And then we have this equilibrium here now stated in real terms. So real money supply is equal to real money demand. And as I said, had you taken this course a few years back, or perhaps in other places, that would have been an upward-sloping relationship. So the LM would have been an upward-sloping relationship. How do I know it's upward-sloping? Well, because if I don't change money supply and I increase output, then I need to bring L(i) down. And since L-prime is negative, the way to bring L down is by increasing the interest rate. So that's what it would have given you: an upward-sloping LM curve. I said, we don't do that now because central banks abandoned, a long time ago in most parts of the world-- not everywhere-- this idea of targeting M. They target directly the interest rate, and then they supply whatever M is needed for the equilibrium in financial markets to be consistent with the interest rate the central bank wants to set. So I said the modern LM curve really looks like that. The Fed in the US, and the central bank anywhere else, sets the interest rate. Turkey is a little different. [SOFT LAUGHS] But it sets the interest rate. And that's the LM. Now, this particular LM is a flat curve. It's not a function of output. The Fed sets the interest rate. That's it. That's the reason it's flat. It's not upward-sloping or anything. So I asked the question, what shifts the modern LM? Only the central bank. Because the central bank is the one that sets the interest rate, certainly in-- well, let me not complicate things. So it's the interest rate. If the central bank doesn't change its mind, then the interest rate is whatever it is, and the LM will remain there. OK? Good. So we put the two things together, and now we can pin down equilibrium output. Because remember, when we just looked at the IS, we had combinations of interest rate and output that were consistent with equilibrium in the goods market. Now we have an interest rate which is consistent with equilibrium in financial markets-- that's what the central bank is there to ensure. And so at that interest rate, we can look in the IS for the level of output that corresponds to it. That's what we get here. So now, we found an equilibrium. We found a combination of interest rate and output that is consistent with equilibrium in both goods markets and financial markets. And that's what the IS-LM model is about. It's about finding those combinations. Good. Is this very clear? Yes? OK, good.
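Collected in symbols, in the notation the course uses (M nominal money, P the fixed price level, Y L(i) real money demand), the two versions of the LM relation are:

$$\text{traditional LM:}\quad \frac{M}{P}=Y\,L(i),\;\;L'(i)<0 \qquad\qquad \text{modern LM:}\quad i=\bar{\imath},\;\;M\text{ adjusts so that }\frac{M}{P}=Y\,L(\bar{\imath}).$$

In the traditional version, with M/P fixed, a higher Y has to be offset by a higher i to keep money demand equal to money supply-- that is where the upward slope came from. In the modern version, the central bank picks the rate and lets M be whatever money demand requires, which is why the LM plots as a horizontal line.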
So now we can begin to play with this stuff. One of the main purposes of the IS-LM model is to understand policy, macroeconomic policies-- what you should do in certain environments or not. Well, before knowing what you should do in certain environments, you need to understand what it is that the different macroeconomic policies do to equilibrium output and the interest rate, and so on. And so that's what we began to do. And the first experiment was one of fiscal policy. So that's an example of a contractionary fiscal policy. A contractionary fiscal policy is essentially increasing taxes, like we illustrated before, or a reduction in government expenditure. Either of those will lead to a shift in the IS to the left. Remember from lecture three, if I increase taxes or reduce government expenditure, equilibrium output will fall. That's lecture three. Remember? And so using lecture three, I can tell you, well, the IS will shift to the left. Now, we just did that. But now we know more, because we know that the central bank is also pinning down the interest rate. And in this particular example here, the central bank did not go along with the Treasury Department. It said, OK, I'm going to keep the interest rate wherever it is. You do whatever you want with the fiscal policy. So this is an example of a situation where fiscal policy is contractionary and the central bank remains with its previous interest rate target. So as a result of that, a contractionary fiscal policy-- as the word says, it's a contraction in aggregate demand-- ends up also leading to a lower equilibrium output. And then I ask-- I already told you two things, T up or G down-- but what else would do something similar to this, which is not policy? AUDIENCE: Just any shock to aggregate demand, right? RICARDO J. CABALLERO: Exactly. I want anything that is a shock to aggregate demand different from the interest rate or anything like that. So, for example, consumer confidence, that thing that we put in C0. Or wealth-- something that wasn't in the model but clearly is behind C0-- that would lead to a shock like that, and that's contractionary. That's the reason central banks and financial markets are all the time looking at the releases of surveys of consumer confidence and things of that kind, because these are the implications of shocks to consumer confidence and so on. OK, good. So what is the mechanism here? Well, we have discussed it multiple times. The contraction in fiscal policy brings aggregate demand down. Then, we multiply, and you end up lowering output a lot more. And this happens for a given interest rate. I have the same interest rate here and there, because I'm looking at two points along a fixed LM, for a fixed interest rate. Good. So that's the contractionary fiscal policy. Needless to say, an expansionary fiscal policy is just a shift in the opposite direction. So what will an expansionary fiscal policy do to equilibrium output? Expansionary. Will it reduce output, like contractionary fiscal policy reduces output, or will it do the opposite? Obviously, it will increase output. So that's expansionary fiscal policy. And it's a very important tool to move output around when the economy is in a recession, and so on and so forth. The other canonical macroeconomic policy is monetary policy. And that's an example of an expansionary monetary policy.
So an expansionary monetary policy cuts the interest rate. Why is that expansionary? Well, look-- let me take this in steps. So, claim first: an expansionary monetary policy is a reduction in the interest rate. So the central bank now decides to set a lower interest rate than it used to. As a result of that-- if output didn't change, what would happen in the goods market? So suppose that the Fed cuts the interest rate and output doesn't change. Is there an equilibrium in the goods market? Is there an equilibrium in the goods market? Suppose that the Fed cuts the interest rate and now we say, OK, well, nothing will happen here. Output will stay where it is. We will have a lower interest rate-- that's nice. Why is that not the final outcome of the monetary policy expansion? AUDIENCE: Because then the [INAUDIBLE] market reacts and aggregate demand [INAUDIBLE]. RICARDO J. CABALLERO: Exactly, this is an imbalance. Because at the lower interest rate, investment will go up. This is physical investment, remember-- purchases of goods and services by firms for the purpose of building capital, structures, and things like that. So aggregate demand went up, so now we have a disequilibrium there. Output is less than aggregate demand. And we know that output is determined by aggregate demand, and then we go on through all the mechanism. So at this point, it's not an equilibrium. We're going to end up with a higher level of output. At that lower interest rate, we have a higher level of output. Therefore, it's not surprising that we call this an expansionary monetary policy. So when the Fed cuts the interest rate, that's an expansionary monetary policy. This will expand aggregate demand. So how does the Fed implement this? Sorry. AUDIENCE: They can do expansionary open market operations. RICARDO J. CABALLERO: Yeah, there you are. Perfect. So what they need to do is some expansionary open market operation. Again, nowadays it's a little more sophisticated than that, but let's stick with this. The first thing they'll do is shift the money supply. They'll go out there and start buying bonds and injecting money into the system, particularly through the banks. So that's the initial response. That's what will cut the interest rate. What happens next? This will allow me to illustrate the modern LM. Remember, the Fed's decision was not to increase the money supply by 35%. What the Fed communicated to the market was that it was going to cut interest rates by 50 basis points. That's the communication. So initially, the way it does that overnight is it goes out and does exactly that. So what it did is what we have there. We had some interest rate, i0. The Fed now wants to go to i1. So in order to do that, well, you have to look at this money demand and increase money supply to achieve the lower interest rate. The question I'm asking you now: does it stop there? So the Fed says, OK, I did my job. I wanted to lower the interest rate. I'm going to increase M. I increase M. And now, I manage to bring the interest rate down to that point. And they're intervening in the overnight market, so that happens very quickly. Do you think that the Fed now can sleep for a while? Why not? There are many reasons why the Fed cannot sleep for a long time, but in this particular case? AUDIENCE: [INAUDIBLE] RICARDO J. CABALLERO: OK, yes. Money demand will increase. AUDIENCE: [INAUDIBLE] RICARDO J. CABALLERO: Why?
AUDIENCE: Because now, [INAUDIBLE] increasing? RICARDO J. CABALLERO: No. So the first shock was an increase in money supply. The thing you want to say is that because the interest rate is now lower, equilibrium output will go up. But if equilibrium output goes up, then what happens in this diagram? Well, money demand goes up. Because remember, one of the parameters in this curve was output. Remember? This was output times L(i). That's that curve there. In this money demand, I had output fixed at Y0. But now equilibrium output is higher, so this curve will also shift out. Now you're going to have Y1 L(i) there. So what will the Fed do? See, if the Fed doesn't do anything and stops here, then the interest rate goes back up-- not necessarily to the old level, but it will go up. So what the Fed will have to do is keep expanding money-- sorry, it's an ugly diagram-- it will keep expanding money so it can preserve the interest rate. In the old analysis, you would have stopped at the first shot, but nowadays, that's not the case. The Fed says, look, I'm going to provide money, and I know it takes time for output to expand and all that, so I will accommodate all that comes. It will not come overnight, all this extra demand for money, but I know there will be more demand coming along if I'm successful at expanding economic activity. So the central bank knows that if this ends up happening, then it will have to provide more money than it did initially, just to preserve the interest rate at the lower level. Again, we don't have any concept of time in this course. And I don't think that-- well, we'll do a little bit later. But in reality, things in the financial markets happen very quickly; the real side is much slower. I mean, this expansion in output takes a couple of years, for example. It's slower. The reaction of interest rates, asset prices, and so on happens overnight, instantly. When people do analysis of the impact of monetary policy on financial asset prices, you look at small windows, minutes around an announcement or something like that, to understand what the impact is. When you look at the impact of monetary policy on real activity, you look over the span of quarters. That's your unit of time-- and you begin to see effects a quarter later. And you keep seeing effects, you know, eight quarters later. So, different time scales. In this course, we're not worrying about that. Everything happens at once. So really, what will happen in this course is that it won't be enough to increase money supply to this point. In order to have the interest rate at the final equilibrium level of output, at this level, I'm going to have to expand money supply a lot more. That's what I'm saying. Good. So again, I can always go back to my lecture three. Remember, I told you that that diagram in lecture three was going to be very important. The expansionary effects of an expansionary monetary policy can be analyzed in the lecture three diagram, because there, we take as given an interest rate. And now, we know that a lower interest rate will bring this aggregate demand up, and then we get the multiplier, and blah, blah, blah, blah, blah, blah. That's what happened.
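Here is the rate cut and the accommodation that follows it in one self-contained sketch. The linear forms and every number are illustrative assumptions; the linear money demand simply stands in for the Y L(i) curve on the slide.

# Self-contained sketch of the accommodation logic. Same linear IS as
# before; the money demand function and all numbers are illustrative.
c0, c1, b0, b1, b2, G, T, P = 200, 0.6, 150, 0.2, 1000, 250, 200, 1.0

def is_output(i):
    # goods-market equilibrium output at interest rate i
    return (c0 - c1 * T + b0 + G - b2 * i) / (1 - c1 - b1)

def money_demand(Y, i, k=0.25, h=2000):
    # illustrative linear real money demand, standing in for Y * L(i)
    return k * Y - h * i

i0, i1 = 0.05, 0.03                                 # the central bank cuts the rate
M_overnight = P * money_demand(is_output(i0), i1)   # before output has adjusted
M_final = P * money_demand(is_output(i1), i1)       # once output has expanded

print(f"money injected overnight: M = {M_overnight:.1f}")
print(f"money needed eventually:  M = {M_final:.1f}")   # larger: the Fed keeps supplying

The two prints make the point of the diagram: the overnight operation that achieves the new rate at the old level of output is not enough; as output expands to its new equilibrium, money demand shifts out and the central bank must keep supplying money to hold the lower rate.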
So this is a movement along the IS. When monetary policy changes-- that's another thing that is very important when you do IS-LM analysis. Whenever you ask a question, the first thing you need to think about is, which curve is this policy moving, or which curve is this shock moving? And what I know is that fiscal policy will always move the IS. Will it move the LM? No. It has nothing to do with things that happen in financial markets. That doesn't mean that the Fed may not wish to respond to the fiscal expansion or whatever, but that's a response that the Fed decides. It's not a direct consequence of the fiscal policy. Fiscal policy is not bundled with interventions in the financial market. Contrary to that is monetary policy. If I tell you the Fed decides to cut the interest rate, that's a movement of the LM. It has nothing to do with the IS. So anything that happens in the IS is going to be a movement along the IS, not a shift of the IS. So that's what we saw here. When the Fed cut the interest rate, we ended up with higher output, but that was the result of a movement along the IS. Because monetary policy is not an IS policy. It's an LM policy. Fiscal policy is an IS policy. That is something that shifts the IS and not the LM. So it's very important to understand, again, what moves what. OK, so let's look at the-- anyways, let me pause here. Because if you understand what I just said, it's 2/3 of your quiz. So make sure that you understand it. I mean, if you really understand it-- obviously we're not going to ask you exactly this, but small perturbations around what I just said. So now, we can use this stuff even more. Now we understand what basic monetary policy does and what basic fiscal policy does to the economy. Let's look at some scenarios. This one, I'm calling all in. What am I representing there, in that diagram? So I'm saying all that you see in that diagram is a result of policy decisions, macroeconomic policy decisions. AUDIENCE: Expansionary fiscal and monetary policy? RICARDO J. CABALLERO: Exactly. That's the reason I'm calling it all in. That's the case in which both want to be very expansionary. And so you see that the expansionary monetary policy already increases equilibrium output. But then you add to it expansionary fiscal policy, which moves the IS to the right, and you further increase output. So you end up with a big increase in output as a result of this powerful policy package. When do you think you may see situations like that? Sometimes you see it out of pure irresponsibility. I mean, if you go to Argentina, this happens all the time for the wrong reasons. But in normal environments, with sound macroeconomic policy, when do you think you would see something like this? AUDIENCE: Recessions? RICARDO J. CABALLERO: Recessions. During recessions, you need to get the economy out of the hole. And then you probably first try monetary policy, because that's the most direct and quick. I mean, that's a decision that can be made overnight. But often, when the recession is sufficiently deep, that's not enough and you need more. And that's what you do with fiscal policy. There are other differences between the two policies, because we're not looking under the hood here. But, for example, in COVID, a certain group of people were much more affected than others. I mean, people that work in restaurants-- those guys just lost their jobs and there was nothing they could do.
So there was a reason to target the transfers. When you use the interest rate, it's a very blunt policy-- it hits everyone. When you use fiscal policy, it's not only the amount you spend; you can also target the expenditure in certain directions. And so there are other reasons why you may want to use the two tools. But the main one, the first-order one, is that if you're in a deep recession, you need everything to try to lift the economy out of that. And so that's the kind of packages you see in big recessions. Now, there's a slide that I think I have pending from two lectures ago. And this is a good opportunity to bring it back. Remember when we looked at equilibrium in financial markets, we came up with this downward-sloping money demand. And then we said, well, if you lower the interest rate, there's more money demand, and so on, and so forth. And then we said, therefore, the way the Fed, or any central bank, lowers the interest rate is by increasing the money supply. The point of this picture is that there is a limit to that. And the limit is, more or less, when the interest rate reaches zero. Because when the nominal interest rate reaches zero, then there is no cost in holding money. The only reason for you not to hold all your wealth in the form of money is that you were giving up the return on bonds, which were inconvenient financial assets because you couldn't transact with them, but they paid you a higher interest rate. That's the reason you wanted to hold them. But once you reach a zero interest rate, then you're indifferent, and you might as well hold money. If the central bank goes out there and does an open market operation, you don't need to be compensated for that, because you're totally willing to hold your wealth in the form of money. And so monetary policy is no longer effective when you reach what is called the zero lower bound. And that's what we call the liquidity trap. Let me not get into why it's called that. But essentially, it's this: you can inject more and more liquidity, but you cannot move the interest rate, so you have lost a policy tool. This was the tragedy of Japan for many decades. They were stuck against the zero lower bound, the liquidity trap. And so they had to go through massive fiscal expansions, because they were in chronic recessions and they didn't have a very powerful monetary policy tool-- they were against the zero lower bound. So why did I use this opportunity to bring this up? For the reason I just described in the case of Japan. And I asked the question, what would you advise the government to do? Well, I already told you the answer. If you have an economy in a recession, and you have used all the conventional monetary policy that you have-- now we have unconventional monetary policy, but I'll tell you a little bit more about that later-- once you run out of this and you're still in a recession, what would you tell the government to do? AUDIENCE: Use fiscal policy? RICARDO J. CABALLERO: You use fiscal policy. That's the other tool that you have. So that's a typical situation you see. When countries' interest rates are already very low, they tend to use fiscal policy much more actively, because it's the only policy they have left.
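In the notation used earlier, one way to summarize the trap is:

$$i>0:\;\;\frac{M}{P}=Y\,L(i) \qquad\qquad i=0:\;\;\frac{M}{P}\ \ge\ Y\,L(0).$$

At $i=0$, any real balances beyond $Y\,L(0)$ are willingly held, so further open market operations change M without moving the interest rate-- which is why, at the bound, fiscal policy is the tool that is left.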
And that has been the case of Japan, again, since the crash of their financial bubble in the late '80s, early '90s. So look at the COVID-19 response. Something happened to my figure here. But anyways, this is zero, essentially. So this is COVID. The COVID shock happened. Clearly, the economy was imploding into a recession. The Fed immediately reacted and cut interest rates very, very aggressively to zero. And then, we were stuck there. This is effectively zero. I mean, there are technical things-- the thing moves a little, but this is effectively zero. So the US was, during that period, in a liquidity trap-- against the zero lower bound. There was no more power for the kind of monetary policy that we have described here. So that tells you that there was going to have to be lots of fiscal policy to get out of that. And I'll show you that later. There was a lot of fiscal policy. But before getting there, I'm going to show you something that you don't need to really know for the quiz, but it will help you understand what is going on in the newspapers a little better. Precisely because the situation in Japan was so chronic, people began to develop lots of alternative tools for central banks to use when the main interest rate is stuck against zero, against the zero lower bound. And that's what you may have heard called unconventional monetary policy-- QE, quantitative easing, all those kinds of things. They represent essentially policies that are like monetary policy, but they are not interventions in the very short-term bonds. They are interventions in other assets out there. In this course, we keep it very simple. We have only one interest rate. In reality, there are multiple bonds. There are risky bonds. There are spreads-- somebody asked about risky bonds a few lectures ago. There are lots of interest rates floating around. So in principle, a central bank could intervene in those other rates as well. In fact, in Japan, they have even intervened in the stock market, and that tells you how far they can go. So in a richer environment with more financial assets, in principle, the Fed could go beyond the standard short-term bonds that they use for their open market operations. And that's exactly what they have been doing. A way of thinking about that is-- remember, when we looked at the expansion in conventional monetary policy, we started with a balance sheet like that. Remember, we said the central bank has bonds and money. If it wants to have an expansionary monetary policy, it goes out there, it buys more bonds, and it gives the banks money. And that expands the balance sheet. The balance sheet of the central bank ends up with more bonds and also with more liabilities, because it gave more money to people, banks, and so on, so it owes more money. So expansionary monetary policy naturally leads to an expansion of the balance sheet. Now, for years, outside of Japan, nobody really cared too much about that, because this effect, relative to what you saw in the interest rate, was very small. I mean, yeah, the balance sheet was moving a little bit, but it was mild. And also here, we hit the zero lower bound. And essentially, the Fed went out and bought all sorts of things.
First of all, when you hear QE, quantitative easing, that means mostly that the Fed goes out there and buys not only short-term US Treasury bonds, but long-term bonds. Because there's something called the term spread. Typically, interest rates in the long run are higher than interest rates in the short run-- typically, controlling for a bunch of things-- and that's called the term premium. Well, they went and bought those kinds of bonds. They also bought bonds issued by Freddie and Fannie, mortgage-backed securities, a bunch of stuff, even loans. In fact, they created a facility to buy corporate bonds. And at some point, they created a facility to buy fallen angel bonds. Initially, it was only investment grade bonds, so all the companies that have the best possible rating. But that wasn't enough, so they went out there and created a facility to buy fallen angel bonds. Fallen angels were essentially companies that were prime companies before COVID, but after COVID, they didn't look so good: airlines, cruises, hotels, and stuff like that. So that was a massive expansion of the balance sheet. So in terms of this diagram, this side grew a lot. But the purpose-- that's like monetary policy. That's what we call unconventional. It's different from the standard kind, but they were trying to operate very much like monetary policy operates. Here, you see the balance sheet of the Fed. You see, before the global financial crisis, or the Great Recession of 2008, 2009, the balance sheet wasn't an interesting thing to look at for the central bank. Because, yeah, they did their regular open market operations for anti-cyclical policy, but you would see only small wiggles relative to the size of the balance sheet. In the global financial crisis, the US hit the zero lower bound for the first time. And so there, you saw a massive expansion of the balance sheet. This is the asset side. The same happened to liabilities. The other side of it is that they were injecting massive amounts of money into the economy. So there, you saw a big expansion. The recovery from the global financial crisis was hard, because the financial sector was very compromised, so it took them a while. And they kept doing these kinds of policies. Then they began to unwind the balance sheet. And then COVID came, and that's what I was showing you before. Massive expansion. They sent the interest rate to zero. That wasn't enough. And then they went out and bought lots of other financial assets, which works very much like monetary policy. Big thing. And now, they're unwinding the thing. Now we're in the opposite process. We have inflation. We want to get out of this situation, so they're unwinding. But you can see the size of that. It's huge. Huge. I mean, the balance sheet a couple of decades ago was of the order of $1 trillion, which was more or less the money that was circulating around. Now it's $9 trillion. Massive intervention. And all major central banks look like this. I mean, the ECB also looks like this. The Bank of Japan looks like this. But actually, you don't see these blips as much, because they began doing them earlier. So they have been accumulating for a long time. They have been using these kinds of policies. So coming back now to the course. What about fiscal policy? Well, I'm showing you different countries around the world. Massive fiscal expansions during the COVID episode. Massive.
I mean, in the US, the fiscal expansion, if you combine all the packages, is of the order of 20% of GDP. That's huge for fiscal policy-- and this happened almost everywhere. You don't see things like that outside of wars. This was really like a war, there's no doubt about that. The amount of expansionary fiscal policy we saw was comparable to what you see in a war. So there you have it. Big recession, huge recession, massive policy response, both monetary-- of the conventional and unconventional kind-- and fiscal. And again, this was not unique to the US. It happened essentially everywhere. China is a little different, for reasons I think I mentioned in the first lecture. But I may talk more about that later. Good. OK, so another policy mix. This is different. So what do we have there? That's another policy mix that we see fairly frequently. So what is that? The LM going down-- that's expansionary monetary policy. The IS going to the left-- that's contractionary fiscal policy. So when do you think you would do such a thing? Or countries would engage in things like this? Yeah? AUDIENCE: What if you wanted to reduce government spending but you wanted to ward off a recession? RICARDO J. CABALLERO: Exactly. Those are exactly the conditions when you want to do this. It's called consolidation of the fiscal deficit. Sometimes, you have a large fiscal deficit that is leading to an accumulation of public debt, and that doesn't look so good. So the government-- the Treasury, in the case of the US-- may decide that it wants to tighten fiscal policy but is afraid that in doing so, it's going to cause a recession. And there is no problem of output being over [INAUDIBLE]. It's just that the fiscal accounts look very weak. So if that's the situation-- that is, if the economy is not going through an overheating period and so on, and you want to reduce the fiscal deficit-- in some places the coordination will be explicit, in some places implicit. But the central bank has a goal to keep prices stable and output close to potential output. So even if there is no explicit coordination, if the government announced a massive fiscal consolidation package-- say it reduces government expenditure by 10%-- the central bank knows that that's going to cause a recession. And so the central bank naturally will respond by cutting the interest rate, because the recession is not needed. If the US announced today a fiscal contraction of 5%, I'm not sure the Fed would do anything. It would just stay put, because we have an economy that's overheating. So that's what you would do in a situation in which you want to fix the fiscal accounts and the economy is more or less in a normal time. It's not overheating. When will you do the opposite? Well, it's not when would you do the opposite-- it's when are you likely to see the opposite. So first of all, what is the opposite? The opposite is a combination of a fiscal expansion with a monetary contraction. When do you think you would see such a thing? AUDIENCE: Either maybe when the government has a budget [INAUDIBLE] and wants to increase spending, or maybe when they-- the interest rates are too high. They want to reduce interest rates. RICARDO J. CABALLERO: Yeah. OK, that's true. But I'm not sure that that's-- yeah, that requires a concerted decision and so on. It's true. A valid answer. It's not the one I wanted. I wanted something more interesting, more exciting, but those are valid answers. AUDIENCE: War.
RICARDO J. CABALLERO: War? No, war typically is all in. No. OK, let me note-- I know it's a strange question, but I know where I'm heading. Suppose that the government decides to spend, for whatever reason. And the central bank says, whoa, we don't need that expenditure now. We don't need this fiscal expansion now, because we're on the margin of overheating, and now we're going to get this big fiscal expansion. Then, the Fed, the central bank, is likely to react to that and hike the interest rate. That will make the government very upset. It always happens. The government gets very upset. And there's a guy that says, look, I'm trying to expand the economy, and you're fighting me. But that's the nature of the thing-- that's the reason central banks are meant to be independent. That's how they can offset the government. And the reason I wanted to highlight that example is I think some of that-- and somebody asked that question, I think, in the previous lecture-- happened to the US economy. One of the reasons we are in an overheating situation right now is that the US had a big fiscal expansion early in 2021. And that fiscal expansion came at a time in which there wasn't much spare capacity in the economy. We were very close to full employment. The supply side was very constrained and so on. And so there may have been good reasons for the fiscal package-- transfers to people that you need to transfer to, and so on. But the macroeconomic consequence of that, very naturally, was going to be overheating. And the Fed did not respond to that. And I think that's one of the reasons people sometimes think that the Fed-- well, there's no doubt, ex post, that the Fed was behind the curve. But one of the reasons they were behind the curve is that there was this big fiscal expansion, which naturally was going to expand output, and they did not react to it. Eventually, they reacted, but it took them a long time. And by then, we had inflation and all that. So that's a situation in which we should have seen a picture like the opposite of this, but we didn't. We didn't see the monetary part, and that's the reason we ended up with an economy that is overheating. Oh, yeah? AUDIENCE: Did the Fed ever elaborate on why they didn't do it? Or was it just for [INAUDIBLE]? RICARDO J. CABALLERO: I mean, it's always a very uncertain environment here. Yeah, they thought this was going to be very transitory, that there were enough disinflationary dynamics that would offset all that. Ex post, it obviously was a mistake, but that's ex post. I mean, there was a lot of noise and so on. Then came the Russian war, which increased the price of oil dramatically, and that created lots of bad dynamics. So they were unlucky. Again, they thought we were going through a temporary situation. They didn't think that it was going to be strong enough. They thought the supply side was going to expand a lot faster than it did. So they may have been right in not fighting it, but over a horizon of three years. And they found everything compressed into three months, and that led to a problem. So the last thing I want to show you is how this model works in practice. I mean, obviously, you're not going to estimate exactly the model I showed you. A real model will have dynamics and many more things.
But many people have estimated the more complete version of the IS-LM I showed you-- for example, the response of the economy to monetary shocks, or to fiscal expansions, and so on. And they trace out the dynamics of different variables and check whether that's consistent with the IS-LM framework or not. And the point of this figure is that it is very consistent with that. But let me show you a little bit about time. So this is the effect on different variables of a surprise increase in the federal funds rate. That's the monetary policy rate. The federal funds rate is the interest rate that the Fed sets. And what you see in practice-- this is the impact on retail sales; on output, really, more or less. And, yeah, in practice, output doesn't respond immediately. It takes a while. It takes several quarters. But eventually, it hits you. And that's one of the big issues with monetary policy today: clearly, inflation is not under control, but they have done a lot. And we know that it takes time for the economy to really perceive the full impact of a monetary policy. And so that's the tension now, because lots of people are pushing the Fed to do more, because we still have 6% inflation-- but they have done a lot. And they know that monetary policy works with lags. "With long and variable lags" is a famous sentence. And so it takes about six quarters to really see how much mess has been caused. It will take a while, so we have to see. You see, output-- well, it's more like sales, it's the same thing-- initially it trends slowly, it takes a while, but it does have a very large effect. This is employment. Same thing. And this is unemployment. Naturally, the other side of it is that unemployment also will build up slowly. So unemployment is very low now. But we don't know when the economy will really feel the impact of all the monetary policy that has been done in the last eight months or so. Where will unemployment end up? And the big problem for the Fed today-- something that you don't need to understand until the second part of the course-- is that prices do decline eventually, but it takes a long time. So controlling inflation with monetary policy takes a while, a long time. Let's see whether the economy, consumers, and so on have the patience to hang in there. OK.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_13_The_Facts_of_Growth.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: OK, so today we're going to start talking about the long run. We've been talking about business cycles, and today we're going to start talking about things that happen over decades. But before I do that, before we finish with the short run and medium run, I don't want to give you the impression that once you understand the IS-LM-PC you can start managing monetary policy immediately. There is a lot of noise and all sorts of complexity in the real world, of course, that can make policies very hard to manage in practice. Macroeconomic policies. And one fundamental principle, I would say, is that policymakers understand that speed can kill. That's very obvious during a financial crisis. There, we all understand that the response needs to be large. It has to be a response with overwhelming force. Essentially, because things are happening so fast, very few corporations-- even healthy corporations-- can adjust quickly enough to the pace at which things are changing. Prices become non-informative, fire sales take place, and obviously it's very difficult to make economic decisions in that context. And so that's the reason the speed of the policy response goes very clearly in one direction: do it quickly and very large. Now, on the other hand, when you're going through a period in which you're hiking interest rates, for example, like we're going through now, the tendency is towards gradualism, to do it very slowly, because something can break along the path. And it's often the case that for sufficiently large adjustments, something breaks. And so here you have examples of major hiking episodes in the US and things that have happened around those major episodes. This one, actually, I have a personal attachment to, because I was studying in Chile around then. Everything was going wonderfully. Massive capital flows to Chile. Emerging markets were very popular. We all felt very wealthy, rich, and so on. And right after I finished college, I was not planning to come to the US. Why? Things were going very well in Chile. But the US decided to hike interest rates very aggressively. All of a sudden, capital flows to emerging markets disappeared. We went into an enormous financial crisis. I had no opportunity cost, and I had to come to study in the US. So I know that aggressive hikes can matter, can make differences to people. And the decade that followed that episode is called the lost decade of Latin America, essentially. So things did break. And one of the main reasons they broke is that at the time-- that's not what happens today-- most of the capital flows were really being managed by global banks. And banks can get very distressed when interest rates rise very quickly. And so it was essentially a problem with the major global banks, the US banks in particular, that triggered an emerging market crisis. This one also had huge consequences, actually. And it's interesting, because this episode is similar to what is going on right now, or what may happen soon. This episode of hikes ended in what is called the savings and loan crisis. And those-- the best parallel to today are the small regional banks, if you will. And they weren't able to withstand that sharp rise in interest rates, which is very much what is going on right now in the US. This one ended up with also another problem, which is the bubble burst in Japan-- an episode of hiking in the US.
And there we had a major crisis in Japan. The price of real estate collapsed, and since then, they have never been able to grow as they used to before that episode. This one is sometimes called the tequila crisis. This is the Mexican bond crisis, and it was, again, the result of a hiking episode in the US. Conditions tightened for emerging markets. Their bond market essentially exploded. This is the global financial crisis, the Great Recession. It was again preceded by an episode of aggressive hikes, which eventually led to a turnaround. House prices had been rising steadily throughout the episode, and a lot of financial assets were created around the housing wealth that was being created. The hike in interest rates eventually put a stop, an end, to that appreciation of house prices. In fact, they turned around, and it led to a very significant financial crisis. And this is where we're at right now. And so we are already seeing some tremors and so on. So the point is, when people say, well, why isn't the Fed more aggressive if we have high inflation-- why not go very quickly at it? Well, it's because things can go wrong. And it typically happens that things do go wrong. You don't know exactly what will blow up, but something may blow up. And typically it's associated with some financial market that is very hot, and the banks are always involved in that, because banks are very levered. They have little capital relative to the assets they have, and that means a small variation in the price of assets can lead to very large changes in the value of their capital. Anyways. So just a warning. If you get a job at the Fed, please be careful. OK, now let me switch gears. We're going to talk about something a little different from what we have been discussing up to now. So this is growth projections for different regions of the world. This is something that is published by the IMF. It's called the World Economic Outlook. I think I mentioned it before. And here you have some forecasts. Well, this is actually what happened: growth in the global economy was about 3.4% in 2022. Advanced economies grew at 2.7%, emerging markets and developing economies at 3.9%. And then you see forecasts. And the further out you go, the less related it is to the current cycle. It's more related to what is structural-- the structural growth of the different parts of the world. And you see that for 2024, the global economy is projected, expected, to grow at around 3.1%. These forecasts were made before the financial mess that is going on right now, so probably the next World Economic Outlook will downgrade growth, at least for 2023. Probably not for 2024, but yes, for 2023. Anyways, advanced economies are expected to grow at 1.4%, emerging markets at 4.2%. So these forecasts are based on a combination of cyclical factors-- fluctuations of the short run, medium run, the kind of things we have been discussing up to now. Some economies will have to go through recessions. Some economies are going through booms. That probably dominates the forecast for 2023. But as I said before, the further out you go, the less relevant is the current business cycle and the more relevant is the structural trend of the different regions of the world. So I would conjecture that the further-out forecasts are very much based on longer-run growth models, the kind of models we are going to discuss now, while the near-term ones are probably totally dominated by the kind of things we discussed up to now.
There are several things that are interesting here, aside from the fluctuations from year to year. One thing you can see, for example, is that regardless of the year, on average, emerging markets tend to grow faster than developed, advanced economies. So one of the things we want to understand is why that is the case. But it's very clear here. The first model we're going to look at, probably on Wednesday, will try to explain essentially that-- why is it that these guys tend to grow faster than the advanced economies? So growth is important. Understanding economic growth is hugely important for the world, for understanding the health of an economy. Here you see-- this comes from the textbook-- US GDP in 2012 dollars from 1890 to 2017, I think is the end year. The important thing to notice here is how large the change in GDP is during this period. GDP here, measured in the same prices, so 2012 prices, is 50 times that in 1890. That's a big thing. When we talk about business cycle fluctuations, we're talking about, in an economy like the US, 2%, 2.5%, 3% up and down. This is 50 times. So over longer periods of time, you can almost ignore the business cycle, and it's all about that long-run trend. Here, what is this episode? So here, if you look at this picture, especially the further back you are in the room, what dominates is clearly the trend. The only really significant action you see different from the trend is around here. What happened there? It's the Great Depression. So even the Great Depression doesn't look that big relative to what the trend can do. So of course, it's very difficult to affect the trend of a country, but the trend makes a huge difference for the welfare, for the economic well-being, of a country. Good. Now, a lot of that is also because the US population grew a lot during this period. So often when you look at long-run trends, rather than looking at the level of GDP, you tend to look at the level of GDP per person, per capita, or something like that. And that picture is exactly the same picture as the previous one, but divided by population at each point in time. OK. And that's important. At the business cycle frequency, you can almost ignore changes in population-- well, changes in population you cannot ignore completely unless you are in a war; you worry about other things, labor force participation and stuff like that. But population growth is irrelevant at the business cycle frequency, and very relevant over long periods of time. In this period here, population in the US increased from 63 million to 320 million. So that's a lot more workers, in principle, that you have for that economy. So a lot of the trend in this picture is explained by population growth. That's one of the reasons we're in a tricky time in the global economy, because there are many important regions of the world where population is no longer growing. We got used to a period in which population growth was very steady and high, and now, in many important parts of the world, there is negative population growth-- Japan, Korea, China, most of continental Europe, even places in Latin America, and so on. So this is a big change for the world. But anyways, during that period there was a lot of population growth.
And the US in particular, again, as I said before, went from 63 to 320 million. So if you really want to measure the welfare of the economy, the well-being of individuals in the US, the previous picture is misleading, because yes, the final pie is 50 times larger than the pie at the beginning, but you have 320 million people to split it among, as opposed to 63 million. So this picture captures that. A statistic that is often used when you talk about long-run growth is GDP per person. And you still see that what dominates this picture is the trend, but the difference between GDP per person in the US at the end of the sample versus the beginning of the sample is 10 to 1, not 50 to 1. So population makes a difference. It's still big-- what dominates this picture is still the trend. Of course, the Great Depression looks bigger now, because you're comparing it with a number that grows by a factor of 10, not by a factor of 50, so it looks bigger, naturally. The same 30% decline in output is a lot bigger when you're comparing it with a factor of 10 than when you compare it with a factor of 50. But still, it looks bigger. But the picture is dominated by the trend. So all this is to say that what we're going to study now is very important. It's not what dominates the day-to-day news, because it happens slowly and over time, but it is very important. So how do we measure these things? When you're looking within a country, you do reasonably well-- not perfect, but reasonably well, and perhaps not over periods as long as the one I showed you-- by looking at GDP per capita. That's fine. You measure real GDP per capita, and that's about fine. But when you compare across different regions of the world and so on, those comparisons can be very misleading. So to say that the US has-- I don't know, what is US GDP per capita today? Somebody should check it. But it's maybe about $70,000, something like that. And then you see another country that has-- say, Italy-- $50,000 per capita. That comparison is not that meaningful. It's indicative of something, but it's not completely meaningful. And I'm going to show you an example which is much more extreme than that. But the reason it's not very meaningful is essentially that prices are not the same across different parts of the world. So we have a method to be able to compare across countries. And again, even for a given country over long periods of time, we make a correction to the GDP numbers we have, and we correct them by what is called PPP, Purchasing Power Parity. And I'll explain what that is. So whenever you see comparisons of GDP per capita across countries, when somebody is doing a growth analysis, it's going to be PPP adjusted. Now let me explain the logic of PPP. And again, as I said, within the same country, over periods of perhaps not 300 years, but over periods of 40 years, it's reasonable to use just real GDP. But when you start comparing Botswana versus the US, it gets a lot trickier, because there are a lot of goods that are a lot cheaper in poorer countries, in particular food. OK. And so you have to be careful with those comparisons. So I'm going to give you this example, which is somewhat hypothetical, but the numbers are not crazy. So suppose you have two economies, the US and Russia-- well, anyways. And suppose that in both economies, households and firms consume cars and food.
And suppose that the average consumer in the US buys one car a year for $10,000 and a bundle of food for $10,000 as well, so the total expenditure on consumption for this household is about $20,000 a year. That's what a US household consumes. These numbers are fantasy numbers, but the big picture is not that much of a fantasy. In Russia, the average consumer buys 0.07 cars a year for 40,000 rubles, and the same bundle of food as in the US-- assume that-- the same bundle of food, for 80,000 rubles. So the total expenditure of this average household in Russia is 120,000 rubles. Suppose that the exchange rate is 60 rubles per dollar. This thing has moved a lot in recent times, but suppose that's the number of rubles per dollar. So you take the 120,000 rubles, and to convert them into dollars, you divide by the 60 rubles per dollar, and then you get how much a Russian household on average spends on consumption in a year: it's $2,000 a year. So here you have 120,000 divided by 60-- it's 2,000. That's the number of dollars that an average household in Russia consumes. So the question is, you have a US household that spends $20,000 a year and a Russian household that spends $2,000 a year. And the question then is, is Russia 10 times poorer than the US? OK. If you were to compare real GDP, that would be the answer. Yeah. And it's true-- if you look, in this example, at the real GDP numbers for the same year, converted into dollars, that answer is correct. But the point is that it doesn't really represent the well-being of the average household in Russia, for this reason at least. Why not? Because what ultimately matters is how many real goods the household consumes. That's what really matters. If you live in a country where the price of everything is zero, your consumption expenditure will be zero. But that doesn't mean that you are as unhappy as somebody that consumes zero. You're consuming whatever it is. It just happens that the prices tend to be very low. And that's essentially the story here. As I said before, it tends to be the case that in poorer countries, a lot of things are cheaper. There are certain very high-tech things that are not even consumed in poorer countries, so you have to adjust for that as well. But a lot of the regular things, the bulk of the purchases, tend to be a lot cheaper in poorer countries. And that's exactly what is behind the reason why, in this example, the answer is no. It's not true that the Russian household in this example is 10 times poorer than the US one. Let's check it. So that's our example. And as I said, not so fast. Let's assume that the goods are the same, so the cars that the Russians buy are the same as the cars that US households buy. That was truer a few months ago than now, but assume that's the case. It's just that the Russians change their cars less frequently. In this example, the US household is changing the car once a year, while the Russians are changing the car about once every 14 or 15 years. Let's assume also that the bundle of food is exactly the same in both places. So since the car is the same and the bundle of food is the same, I can use US prices to measure Russian consumption. And that's comparable to US consumption, because I'm trying to convert the goods they're consuming into something that's comparable to what the US consumes.
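Here is the whole example in a short script, before walking through the numbers. All quantities, prices, and the 60 rubles-per-dollar rate are the lecture's illustrative numbers, not actual data.

# PPP adjustment, using the lecture's hypothetical US/Russia numbers.
us_prices = {"car": 10_000, "food": 10_000}   # USD per car / per food bundle
us_bundle = {"car": 1.0, "food": 1.0}         # annual quantities, average US household
ru_bundle = {"car": 0.07, "food": 1.0}        # annual quantities, average Russian household
ru_spending_rub = 40_000 + 80_000             # annual ruble spending: cars + food
fx = 60.0                                     # rubles per dollar

# Naive comparison: convert ruble spending at the market exchange rate.
ru_naive_usd = ru_spending_rub / fx                               # 120,000 / 60 = 2,000
us_usd = sum(us_bundle[g] * us_prices[g] for g in us_bundle)      # 20,000

# PPP comparison: value the Russian bundle at US prices (same goods, common prices).
ru_ppp_usd = sum(ru_bundle[g] * us_prices[g] for g in ru_bundle)  # 700 + 10,000 = 10,700

print(f"market-rate ratio: {ru_naive_usd / us_usd:.1%}")   # 10.0% -> '10 times poorer'
print(f"PPP-adjusted ratio: {ru_ppp_usd / us_usd:.1%}")    # 53.5% -> about half, not a tenth

The verbal version of the same arithmetic follows.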
Since the goods themselves are the same, if I value them at the same prices-- either of the two sets of prices, but the same prices-- then I'm going to be able to make the comparison that I really want. That's what PPP adjustment means. So look at our particular example. Here, the Russian household would be consuming 0.07 cars times $10,000, which is the US price of a car, plus 1 unit of the bundle of food, and the US price for that is $10,000. So the total consumption of the Russian household, PPP adjusted, is $10,700. OK. That's not 1/10. It's 53% of US consumption. So true, the Russian household is poorer than an average US household, but it's not 10 times poorer. It's 53% as rich as the US household. And so this is big. And all the numbers I'm going to show you next, especially when we compare across countries that are very different in terms of level of development and so on, have these kinds of corrections built in. If you need the data for these kinds of things, for whatever reason, you find it in what are called the Penn Tables. The Penn Tables essentially collect the national accounts of all places and make these corrections. The problem is they don't update them very frequently. But if you look in FRED, for example, which we use in one of the psets, there will be numbers for a few countries that have this PPP adjustment. OK, so that's going to remain in the background now, but I just wanted to tell you how you construct the numbers when you want to talk about the long run and comparisons across countries. First set of numbers here. These are all, obviously-- today, at least-- developed economies. Look at the growth between 1950 and 2017. Obviously, the war created a big mess before that, so let's start from 1950. And what you see here is that, on average during this period, France grew-- 2017, I think, is the last year; yes, it's the last year. I think the Penn Tables were recently updated, but at least when the book was published, that was the last year they had. France grew, on average, 2.6% per year. They also had a business cycle and so on, but on average, 2.6% per year. Japan during this period grew by 4.1%. The UK, 2.1%. The US, 2%. So the developed world essentially grew around 2.7% on average during this period. Look at the effect that this has on the level of GDP per person. And all of these are PPP adjusted. For the case of France, 5.6 times. They started with $7,000, and they were close to $40,000 in 2017, so the ratio is 5.6. Look at the US. The US is 2%, and it is still richer than France per person in 2017, but its ratio is smaller than France's. So over a long period of time-- that's what the trend in the picture captures-- a small difference in the rate of growth, if it is sustained for a long period of time, can make quite a bit of difference for the change in GDP.
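The arithmetic connecting the growth-rate column and the multiple column of that table is just compounding. Taking France's 2.6% average over the 67 years from 1950 to 2017:

$$\frac{Y_{2017}}{Y_{1950}} \;=\; (1+g)^{T} \;=\; (1.026)^{67} \;\approx\; 5.6,$$

which is the 5.6 multiple in the table. The same arithmetic at the US's 2% gives $(1.02)^{67}\approx 3.8$-- compounding is why a sustained gap of half a percentage point or so in growth rates matters so much over decades.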
There's an economic pattern there. Let me simplify. A higher number in that last column simply means that you had a higher rate of growth -- that's the math fact you described. So ignore that column. What I suggest is that you just look at these two columns: the initial level of GDP per person and the growth rate. The other one, in a sense, just repeats information that is already here, for the reason you described. Just look at these two columns. Is there a pattern there? Exactly. Very important: richer countries tend to grow slower. The richest country here is the US, and it had the lowest average rate of growth. The poorest was Japan, and it had the highest rate of growth. So that's a very important correlation, and the first model of economic growth we're going to see is going to explain that correlation -- why it is that we see that. Those were just five economies; you could say that's anecdotal. But look at this. This is rich countries in general since 1950. On this axis you have the average annual rate of growth, and on this one, GDP per person in 1950. So at the beginning of the sample, 1950, these countries had this level of GDP per capita, and here is the average rate of growth from 1950 to 1987. And it's very clear that there's a downward-sloping pattern, no? So that's the same fact, now for many more countries: a downward-sloping relationship. The countries that were richest at the beginning of the sample tend to grow much slower than the countries that were poorer. There are some interesting outliers, like Mexico, which is interesting in itself. I'm not going to say a lot about why that's the case, but let me for now stick to the dominant pattern, which is the downward-sloping relationship. Here's another way of seeing it, for a bigger variety of countries -- I have Botswana, China, Thailand, and so on. You see GDP at the beginning, in 1950, and GDP by 2018. And the pattern here, which is essentially a repetition of the pattern I showed you before, is that there is much more compression here than here. How can you have more compression in 2018 than in 1950? Well, because there is some sort of convergence. Those that were poorer tend to grow a little faster than those that were richer, and therefore they tend to converge to each other. So that's the point I'm highlighting here: a lot of dispersion in 1950, much less dispersion in 2018. That means that on average, the poorer countries are growing faster than the richer countries. And again, all this is per capita, PPP adjusted, and all that. This next picture makes the same point, but now with many more countries. The point of this picture -- it's in the book, and it's a little messy -- is to highlight that if you look within regions, the pattern holds. For the OECD, the major economies -- if you isolate only the blue squares -- you tend to see that negative relationship. If you look within Asia, it's a little noisier, but you also tend to see a negative relationship. If you look at Africa, the relationship is lost completely. So when you look at the world as a whole, the picture is not as neat as the one I showed you, because there are certain pockets of the world that are not behaving according to the kind of models I want to discuss in the next few lectures.
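A minimal sketch of how you would check that convergence pattern in data: compute the correlation between initial GDP per person and the subsequent growth rate. The growth rates below are the ones quoted in the lecture; the 1950 levels are illustrative placeholders except France's $7,000, chosen only to match the lecture's ordering (Japan poorest, US richest).

```python
import statistics  # statistics.correlation requires Python 3.10+

# country: (gdp_per_person_1950, avg_growth_pct_1950_2017)
data = {
    "France": (7_000, 2.6),
    "Japan":  (3_000, 4.1),   # hypothetical 1950 level
    "UK":     (9_000, 2.1),   # hypothetical 1950 level
    "US":     (13_000, 2.0),  # hypothetical 1950 level
}

levels = [v[0] for v in data.values()]
growth = [v[1] for v in data.values()]

# A negative correlation is the convergence pattern:
# richer in 1950, slower subsequent growth.
print(statistics.correlation(levels, growth))  # about -0.9 for these numbers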
And the reason they're not behaving that way is almost entirely outside economics. It's political conflicts, wars, and things of that nature which continuously disrupt the economic forces I'm going to highlight in the next few lectures. So that's a different issue. All the models I'll show you next are about the blue and green squares and triangles there, not about the red ones. So I've shown you what happens across countries over a certain period of time, which is long but not that long. Here you see what happens over a longer history. There are two patterns I'd like to highlight. First, for a while you didn't see much, but then you see a big acceleration, in the Western world especially, around the 1950s or so. So clearly the Western world was growing faster than the rest of the world. The Western hemisphere -- this is a broader grouping -- shows a very fast acceleration in growth in this episode here. Western Europe was also flattish and then picked up very strongly. And you see the different regions of the world, and again, the sub-Saharan Africa region that hasn't really picked up. Much longer history: that's the way it looks for the world as a whole. Exponential pictures tend to look like that, but this is even more dramatic than exponential. And what happened in the early part is going to be very different from the kind of models I'll describe next. That period is mostly dominated by what's called the Malthusian era, in which population grew or shrank depending on how good the harvest was that year. Population growth was the main driver, but there wasn't enough food to sustain a larger population, so income per person stayed flat. There was a fight between food and people, and not much room for anything else -- most people were in agriculture, and there wasn't much to build on. Nowadays there are pockets in the world, and we had severe situations during COVID, but food is not really a constraint on growth for the world as a whole. So, in other words, had you taken this course in the year 1000 or in the Renaissance, nobody would have talked about growth. It's not something that happened, really. It's a very modern thing to think about these pictures with their long trends. You would have talked about a lot of other interesting things, but not about growth, that's for sure. And the last point I want to make about growth is that it makes a big difference. I don't know if you can read that -- I can't -- but what I have here is GDP per capita in 1950 versus GDP per capita in 2016. And you have these reference lines, each one a constant multiple of the 1950 level. This line here is the 45-degree line: if you are on it, your GDP per capita in 1950 is the same as your GDP per capita in 2016, which means on average you didn't grow at all during this period. As I move these lines up, it means you grew faster and faster. And if you are below the 45-degree line, you had negative growth on average during that period. So each of these lines represents a multiple. For example, this top line here is 30 times richer -- these are the ones that grew very fast. I cannot read it either, but I sort of know who is in each place. This here is Taiwan, and this is Singapore.
These countries have a name -- what do we call them? The Asian Tigers. They grew very strongly for a long time, since the '60s or so. And there you can compare: if you could see it, you would see that Taiwan and the Democratic Republic of Congo had the same GDP per capita in 1950. Now the Democratic Republic of Congo has less than it had in 1950 -- it had $1,700 then and $800 today -- while Taiwan has 30 times what it used to have. Today, Taiwan is one of the richest economies in the world, close to $50,000 per capita, while the DR Congo has $800 per capita. So growth makes a big difference, and these are not that many years -- just about 70 years. And I can assure you that these people have other concerns, but their standard of living is a lot higher than these people's. At some point, they were the same. The big difference is that some countries grew and some countries got stuck. Where is Argentina here? Somewhere here, probably -- I cannot see it. OK, good. So growth does make a difference, and it has made a huge one: the countries we think of as rich or poor today are not the same countries you would have thought of in those terms in 1950. Asia is one of the most prominent differences -- massive growth through the '60s starting with Japan, and then the rest, again, the famous Tigers: Hong Kong, Taiwan, Singapore, and South Korea. Good. So let's start building some models of what we have just seen. Remember, when we looked at the short run, we really didn't care about the supply side of the economy. It was all about demand: look at what consumers, firms, and governments demand, and that determines output. And how does output get produced? Well, it just happens -- we didn't really care too much about it. Then when we talked about the medium run, we said, OK, no, we do have to care, because to produce, you need workers, and workers are not going to work for just any wage, so we had to begin to talk about the supply side of the economy. But we made it very simple. We just looked at the problem of wage bargaining and price setting. The production function itself wasn't that interesting: output was equal to labor. And I told you it's very unrealistic, but it was convenient for that part of the course, because capital doesn't change that fast. A typical production function will have both capital and labor, but at business cycle frequency, even though investment -- the change in capital -- can be large, the stock of capital doesn't move that much, so you can ignore it for business-cycle-type fluctuations. But if we want to look at the long run, capital accumulation plays a huge role, and so we have to be explicit about the role of capital in the production function. And now we're going to forget about aggregate demand. We're going to focus on aggregate supply, and demand will do whatever it needs to do to deliver what the supply side says. So output now will be an increasing function of both capital and labor. Now, this function will have a bunch of properties which, at a broad level, are empirically validated, but which are also very convenient from a modeling point of view. The first and most important property is constant returns to scale. We're going to use this property a lot, so please get the concept.
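To see how the multiples in that scatter map into annual growth rates, a quick sketch using the lecture's rough Taiwan and DR Congo numbers over roughly 1950-2016:

```python
def annual_rate(multiple, years):
    """Average annual growth rate implied by a total growth multiple."""
    return multiple ** (1 / years) - 1

years = 2016 - 1950  # 66 years

# Lecture's rough numbers: Taiwan ~30x its 1950 level; DR Congo fell
# from ~$1,700 to ~$800 per person.
print(f"Taiwan:   {annual_rate(30, years):.1%} per year")            # ~5.3%
print(f"DR Congo: {annual_rate(800 / 1_700, years):.1%} per year")   # ~-1.1%
```

A sustained gap of about six percentage points per year is enough to turn two countries that started at the same level into one of the richest and one of the poorest in the world.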
Constant returns to scale means simply that if you scale the factors of production, you also scale output: F(xK, xN) = xY. So if x is 1.1 -- if you increase capital and labor by 10% -- you get 10% more output. That's constant returns to scale: if I scale all the factors of production by the same proportion, output grows by the same proportion. It's scalable. Very important is what comes next: decreasing returns to capital. Constant returns to scale is a property of scaling everything up; the property I'm describing now is what happens to output if we increase only K, fixing N. In other words, hold labor fixed and start moving capital up. You're not going to get proportional growth; you're going to get something less. What this tells you is that, yes, you're going to get more output, but less and less the more capital you already have. So suppose, for example, that you start with 100 workers and 100 units of capital, and this produces 100 units of goods. If you now add 10 units of capital, you're going to get, say, seven more units of output. Not 10 -- seven -- because you didn't increase labor. Had I increased labor by 10 as well, I would have gotten 10 more units of output; but since I'm increasing only capital and keeping labor fixed, output increases by less than 10. And what decreasing returns to capital says is that if you increase capital again, from 110 to 120 units, you're going to get less than seven additional units of output -- say, five. And if you increase again from 120 to 130, you're going to get less than five -- say, three. And so on and so forth. That's decreasing returns to capital. And the economic reason is that more and more capital is working with a fixed number of workers, so labor becomes very scarce relative to capital. These factors of production are complementary -- they need each other, labor and capital. If you fix one and keep increasing only the other, it's harder and harder for each extra unit of the growing factor to be productive, because it has less and less of the other factor to work with. The same principle applies to labor: if you fix capital and only increase labor, initially you get a big jump in output, but the gains get smaller and smaller the more labor you keep adding. Now, one value of x that we're going to use throughout -- one of our favorites -- is x = 1/N. You see what I'm trying to do. When we set x = 1/N, what I get is output per person, Y/N. Using constant returns to scale, Y/N = F(K/N, N/N) = F(K/N, 1). The second argument is just 1, so it doesn't move, and I now have that output per person is an increasing function of capital per worker. Workers and population are the same in this part of the course. Forget unemployment: all of the population is employed, and the labor force is everyone. This is not the place to worry about unemployment.
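A minimal numerical sketch of both properties, assuming a Cobb-Douglas form F(K, N) = K^a N^(1-a). The lecture doesn't commit to a functional form; this is just the standard textbook example, and the step sizes won't match the stylized 7/5/3 numbers above.

```python
ALPHA = 0.3  # capital share; a standard illustrative value

def F(K, N, alpha=ALPHA):
    """Cobb-Douglas production function: constant returns to scale."""
    return K ** alpha * N ** (1 - alpha)

# Constant returns to scale: scale both inputs by x, output scales by x.
x = 1.1
print(F(x * 100, x * 100), x * F(100, 100))  # both 110.0

# Decreasing returns to capital: fix N = 100 and add capital in steps
# of 10; each step adds less output than the previous one.
N = 100
for K in (100, 110, 120, 130):
    print(K, round(F(K + 10, N) - F(K, N), 2))  # increments shrink
```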
OK, so remember that all the plots I showed you, the different figures, were about this variable -- output per person: how it changed over time, how it differed across countries, how it grew at different rates in different countries. But from this very simple model, you see that in order to explain growth in Y/N, and why one country grows more than another, you have only two options. So if I tell you that country A grew more per person than country B over some period of time, there are only two possibilities. The first is that in country A there was more capital accumulation per worker, so K/N went up. If K/N goes up more in one country than in the other, Y/N will go up more in that country than in the other. The other option is that the function itself shifted up, so that for any given amount of K/N, you can now produce more Y/N. That's what we call technological progress. So if the difference in growth of output per person is due to an increase in K/N, we call that the capital accumulation mechanism. If it is because the function f shifts up, that's technological progress. In the next lecture, we're going to talk about the capital accumulation channel, and in the lecture after spring break, we're going to talk about shifts in the function f. So, in figures: fixing the technology -- the function f is fixed -- and just moving K/N, this is the picture you have. That's the production function, plotted as a function of K/N for fixed f. I have capital per worker on this axis and output per worker on that one. You see that, obviously, it's an increasing function: the more capital per worker you have, the more output per worker you produce. But it's also concave. Why is it concave? That's decreasing returns to capital. When you have very little capital per worker, a change in capital per worker gives you a big jump in output per worker, because capital scarcity was the problem with that economy. When the economy has more and more capital, the same change in capital leads to a much smaller change in output. Here, when capital per worker was very low, the economy was very poor, and this change in capital led to this big change in output per capita. At a higher level of capital per worker -- a capital-richer economy, as economists would say -- the same-sized change in capital leads to a much smaller change in output. That's the result of decreasing returns to capital. And the other option, shifts of the production function -- technological progress -- is what we're going to talk about two lectures from now. Again, the next lecture covers the capital accumulation channel. OK. Very good. See you on Wednesday.
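A minimal sketch of the two channels just described, again with a Cobb-Douglas stand-in for f (an assumption; the lecture doesn't commit to a functional form): output per worker can rise either because capital per worker rises (a movement along the curve) or because the function shifts up (technological progress).

```python
ALPHA = 0.3  # illustrative capital share

def y_per_worker(k, A=1.0, alpha=ALPHA):
    """Output per worker y = A * f(k) = A * k^alpha, with k = K/N."""
    return A * k ** alpha

base = y_per_worker(k=4.0)

# Channel 1: capital accumulation -- k rises, f fixed.
more_capital = y_per_worker(k=5.0)

# Channel 2: technological progress -- f shifts up (A rises), k fixed.
better_tech = y_per_worker(k=4.0, A=1.25)

print(base, more_capital, better_tech)  # both channels raise y
```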
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_7_An_Extended_ISLM_Model.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: OK, let's start. So by now you know the IS-LM model, and if you don't fully command it yet, please spend a lot of time on it. As I said, 2/3 of your quiz will be about that. It's a very basic model, but we can still squeeze a lot of insight from it, and there are some very natural extensions that we should cover because, again, they have a high return in knowledge acquired for the effort invested. Today, I want to extend the IS-LM model along two realistic dimensions. The first one is to make a distinction between nominal and real interest rates. Up to now, since we assumed in the model that prices were completely fixed, there was no inflation, and hence no distinction between nominal and real interest rates. But needless to say, we live in an environment where inflation is typically positive -- not always, but typically. And in fact, nowadays we're having very high inflation; that's one of the big macroeconomic headaches of this moment. Now, we're not going to talk about the determination of inflation until later in the course -- I'm going to start talking about that next week -- and it will not be part of quiz 1. It will be a very important part of quiz 2, but not of quiz 1. But we can still say a few things about what happens to the framework we have, taking inflation as a parameter. We're not going to determine inflation in equilibrium, but we can ask: what happens if inflation is not zero? And more importantly, what happens if people don't expect inflation to be zero? We'll see how that modifies the analysis. The second extension: we had simplified financial markets enormously, and we customized the simplification to what is closest to what central banks do in setting monetary policy -- the trade-off between cash or deposits at the central bank and bonds. The bonds we had in mind were US government bonds of very short maturity, and that's how we determined the interest rate. Now, needless to say, there are many, many interest rates in the economy, of different durations: one-year rates, two-year rates, three-, ten-, 30-year rates. Some countries have 100-year rates. But there is also another dimension, which is very important, and it's the one I highlighted there: riskiness. US Treasury bonds, especially of short duration, are riskless assets -- there's no risk associated with them. Now we have a little event with the debt ceiling fight that may happen in August or September, but nobody's really concerned that something major will happen, except for a few disruptions for a few days. Let's hope that's true. Up to now, if you look at risk markets, they are behaving as if nothing will happen there. But corporations don't typically borrow at those rates. Corporations issue their own bonds or take loans from banks, and those bonds often carry a risk premium: their rate is equal to the safe interest rate -- the Treasury rate, if you want -- plus something else.
And you can anticipate that this will be important, because interest rates enter our IS-LM analysis precisely through the borrowing costs of firms in the investment function. So if there is a wedge -- a spread -- between the rates we've been talking about and the rate at which firms can actually borrow, then that wedge will matter. So that's what we want to do: I want to explain what these things are, and then I'm going to modify our IS-LM model to take these extensions into consideration. So what is the nominal interest rate? The nominal interest rate, which we typically denote by little i, is the interest rate in terms of dollars. So if you buy a one-year bond today and the nominal interest rate is 10%, that bond will give you 10% of whatever amount of money you invest in it at the end of the year. If you buy $100 in bonds today and the nominal interest rate is 10%, you receive $10 of interest payments one year from now. A real interest rate is the interest rate in terms of a basket of goods -- so the CPI or something like that will be important there. Ex ante -- that is, at the moment you are deciding whether to invest in the real bond or the nominal bond -- the main difference between the two is expected inflation. (There are other issues having to do with risk that I'm not going to talk about.) In other words, if you expect no inflation -- if you expect P to remain constant -- then the distinction between an interest rate in dollars and an interest rate in goods doesn't exist; they're the same. But if you expect inflation, that's not the case, because goods are going to become more expensive over time, and a payment fixed in dollars will buy fewer goods. So if the real rate r were equal to the nominal rate i and you expected inflation of 10%, you would really be expecting the real instrument to pay you 10% more than the nominal one. That cannot happen in equilibrium, but that's what it would mean, because one pays you in dollars and the other pays you in goods, and the goods will be 10% more expensive next year. OK, good. So why do we care about the distinction between nominal and real interest rates? Because important private sector decisions -- like the purchase of durable goods by consumers, which we're not modeling in this course, and physical (not financial) investment by firms -- depend on real rates, not nominal rates. What determines whether the opportunity cost of a real investment is high or low is the real interest rate, not the nominal interest rate. Why do you think that's the case? Why is it the real, not the nominal, interest rate that matters? AUDIENCE: Because when they're borrowing, they're going to be [INAUDIBLE]. RICARDO CABALLERO: Not really. I mean, most of the borrowing in the US is done at nominal rates, so it has to come from something else. Why do you invest? You invest to produce more goods in the future. So if those goods are going to be more expensive in the future because of inflation, then what matters to you is the difference between the cost of borrowing and what you'll get for those goods.
And the goods are going to be 10% more expensive, so what really matters is the net for you. In other words, if the real interest rate remains constant, and you now give me interest rates that are 10% higher but also tell me that the goods I'll be selling will be 10% more expensive, I don't change my decision. If it was a good project with zero inflation, it's also a good project with 10% inflation -- nothing has changed. If you tell me 30%, same thing, because I'm investing now in order to produce things that will be 30% more expensive a year from now. The decision doesn't depend on that. So that's the reason the real interest rate is what you really care about in the case of real investment. And remember, we're talking about real investment at the aggregate level. This can make a difference at the level of individual goods, because when inflation goes up, not every price rises by the same amount -- some go up by more, some by less. But on average, it's what I just said. So let's look at this equivalence more formally -- how to derive the real interest rate. As I said, not in the US, but in many places you do borrow in real terms. For example, in Chile we have a unit of account, created because we had very high inflation many years back, called the Unidad de Fomento, and that unit is indexed to inflation. So you borrow, say, 10 million pesos equivalent in Unidades de Fomento, and the interest rate is indexed to inflation. In the US, that happens very rarely. The US government does do it -- those bonds are called TIPS. The great majority of US Treasury bonds are nominal bonds, but there are also some real bonds, indexed to inflation. Firms, however, can very rarely issue bonds in real terms in the US. The reason I make that clarification is that I'm going to derive the real interest rate, but that doesn't mean the instrument exists. I'm asking: given a nominal rate that I see out there, how do I construct a real interest rate from it? That's what I want to do here. It doesn't mean there is an instrument traded in real terms. But when I go to the bank as a firm and borrow at 10% nominal, I need to calculate what that implies in real terms. That's what I'm going to illustrate now. So what we want to pin down is this real interest rate, r. The real interest rate in terms of goods means: if I spend one unit of the aggregate good on a bond and receive 1 + r_t units of goods one year from now, then r_t is the real interest rate -- an interest rate in terms of goods. Now, suppose I go the other route instead, through the only instrument I actually have: the nominal bond. If I put one unit of goods into that bond today, I'm really investing P_t dollars, where P_t is the price deflator we have. Well, P_t dollars invested in a nominal bond will give me (1 + i_t) times those dollars, where i_t is the nominal interest rate. Say the price index is 2 and the nominal interest rate is 10%; then next period I get 2 times 1.1 dollars. That's the number of dollars I get.
Now, I still cannot compare that with the goods route, because at this point I have dollars, and I want to go from goods to goods. So how do I convert dollars into goods? I divide by the price of goods -- but not today's price. I divide by the price of goods at t + 1, because I'm going to receive these dollars at t + 1, one year from now, so I have to divide by P_{t+1} to get the number of goods I'm getting at t + 1. But the problem is that at time t, I don't know what P_{t+1} will be. The best I can do -- and here is where I'm simplifying things a lot -- is to use an expectation of what the price level will be one year from now, P^e_{t+1}. So these two strategies are equivalent in the sense that they require exactly the same investment, and, in expectation at least, they both go from goods to goods: one directly through the real bond, the other through the nominal bond -- ignoring all the uncertainty around the expectation. If two things give me the same payoff, they have to be priced equally. So by indifference, the two returns have to be the same. Is this diagram clear? OK, good. Because what I'm going to do now is take this expression and play with it a little. We arrived at the conclusion that 1 plus the real interest rate equals 1 plus the nominal interest rate, times P_t over expected P_{t+1}: 1 + r_t = (1 + i_t) P_t / P^e_{t+1}. Now denote expected inflation -- the rate of change of the price level we expect from year t to year t + 1 -- by pi^e_{t+1} = (P^e_{t+1} - P_t) / P_t. With a little algebra, P_t / P^e_{t+1} can be rewritten as 1 / (1 + pi^e_{t+1}), so the relationship becomes 1 + r_t = (1 + i_t) / (1 + pi^e_{t+1}). And if the nominal interest rate and expected inflation are not too large -- as is the case in most countries, with a few exceptions -- this implies, approximately, that the real interest rate equals the nominal interest rate minus expected inflation: r_t is approximately i_t - pi^e_{t+1}. That is an intuitive expression. So if the nominal interest rate is 6% and expected inflation is 3%, the real interest rate is only 3%. In terms of goods, you're going to get 3% less, because that's the inflation rate. Or, if you're borrowing, it's going to cost you 3% less effectively, because the goods you'll be selling out of your investment are going to be 3% more expensive. OK, good. So, look, this is what happened.
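A quick numerical check of the exact relation 1 + r = (1 + i)/(1 + pi^e) against the approximation r = i - pi^e, using the lecture's 6%/3% example:

```python
def real_rate_exact(i, pi_e):
    """Exact Fisher relation: 1 + r = (1 + i) / (1 + pi_e)."""
    return (1 + i) / (1 + pi_e) - 1

def real_rate_approx(i, pi_e):
    """Approximation, valid when i and pi_e are small: r ~= i - pi_e."""
    return i - pi_e

# Lecture's example: 6% nominal, 3% expected inflation.
print(real_rate_exact(0.06, 0.03))   # ~0.0291 -> about 2.9%
print(real_rate_approx(0.06, 0.03))  # 0.03 exactly

# The approximation deteriorates when inflation is high:
print(real_rate_exact(0.60, 0.50))   # ~0.0667
print(real_rate_approx(0.60, 0.50))  # 0.10
```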
I'm showing you what happened around the years of the Great Recession -- remember, the Great Recession happened at the end of 2008, 2009, 2010. You can see several things in this picture. The white line is the nominal interest rate and the yellow line is the real interest rate in the US. And since in the US you can actually trade both real and nominal bonds, the difference between the two is expected inflation as priced by financial markets -- in the US these are called inflation breakevens, measured from inflation swaps. Anyway, several things to see here. The first is that typically -- unless you're in Japan, probably -- the white line, the nominal rate, is above the yellow line, the real interest rate. Why do you think that's the case? What does it tell you that, on average, the nominal interest rate is above the real interest rate? Yeah: on average, in most advanced economies, and even more so in emerging markets, inflation is positive, and therefore people expect inflation to be positive. Japan went through long periods of deflation, but that was a rarity, an anomaly. But you see something else here. There is an episode where the opposite clearly held, when the real interest rate went much higher than the nominal interest rate -- they even crossed, moving in opposite directions. Here there was a sharp decline in the nominal interest rate and a sharp rise in the real interest rate. What was happening there? First of all, forget the picture: what was happening around 2008, 2009? The Great Recession. So that's one observation: in modern recessions, certainly in recessions caused by financial crises, as this one was, real interest rates can go above nominal interest rates. What does that mean in terms of inflation? Remember, what the Fed is setting is, more or less, this one -- I think this is a one-year rate, so not exactly what the Fed sets, but close. Why did the Fed cut interest rates so aggressively there? Yeah: we were in the middle of a big financial crisis, so they wanted to boost the economy, so they cut the interest rate. And when you map this into the very short rate, this is effectively hitting the zero lower bound: they lowered it as much as they could, and that was it. So what must have happened for the real interest rate to jump up like that? How can it be that the Fed brings down the nominal interest rate, and the real rate, boom, jumps up? Expected inflation went down a lot. As I was saying, expected inflation is typically positive in developed economies, around 2%, 2.5% -- those are the typical numbers. But in deep recessions, it can even go negative. And that's what happened. There is expected inflation as extracted from inflation breakevens, from the swaps. Typically it's around 2%, because that's more or less the Fed's inflation target in the US. But during this episode, we entered a very deflationary phase, with expected inflation close to minus 4%. It was very scary -- deflations can be very complicated things to deal with, and we'll say more about that later. But that's what happened there. Good. So that's nominal versus real interest rates.
Now let me talk about credit spreads, and then we'll put everything together. Most bonds issued by corporations are risky. US Treasuries are as safe as it gets -- they're considered the safest assets in the world, together with German and Swiss government bonds. There are a few others, but the US, in terms of liquidity and everything else, is the premier safe asset in the world. Most corporations don't borrow at those rates; they have to pay a premium because they're not as safe as Treasury instruments. So let me write the real interest rate paid, on average, on the bonds issued by firms as the safe real interest rate plus a premium: r_t + x_t. Now, the important point is that this risk premium moves a lot over the business cycle, especially when you have a financial crisis -- people really want to run away from risk -- so it tends to be higher in recessions, particularly recessions caused by financial crises and things of that kind. Why do we care about the risk premium? Again, because important private sector decisions depend on that risk-adjusted real interest rate. If a firm has lots of credibility problems and is considered very risky, its cost of borrowing is going to be very high, and therefore it will have a higher threshold for any physical investment -- it's more costly for that firm to borrow. So that's the reason to worry. The risk premium -- that x there -- is determined by essentially two things in the case of bonds (there are also risk premiums in equity). One is the probability of default: it may be that the firm doesn't honor those bonds and defaults on them. The other is the degree of risk aversion of bondholders. There are times when you say: look, I don't want to hold any risk here, or very little, because everything looks very complicated to me; I'd rather go safe, into Treasury bonds. Those two forces make the spread grow. The second one is, to me, on average the more important reason, but it's easier to model all of this as a probability of default, so that's what I'm going to assume. I'm going to ignore the degree of risk aversion of bondholders and concentrate on the probability of default. In a sense, you can model both the same way, because you can think of risk aversion as somebody exaggerating the probability of default. There is some true probability of default that some agency is calculating out there, but if I'm very nervous about investing in risky stuff, I may mark it up: the agencies may think the probability this bond defaults during the next year is 5%, but I'm going to treat it as 10%, because I want to be compensated for the risk I'm bearing. So think of this p as a probability of default as perceived by investors. You don't know the true probability of default -- that's an abstract concept -- but whatever probability you actually use in your investment decisions is what I'm modeling here.
So, by the same principle we used before for nominal and real bonds, in equilibrium I need to be indifferent between investing in Treasury bonds -- the safe bonds that pay an interest rate r_t -- and investing in risky bonds that pay a rate r_t + x_t, which is greater than r_t. The spread has to adjust so that I'm indeed indifferent between the two. It's obvious that if the probability of default is greater than zero, the risky rate has to be greater than the safe rate; otherwise I wouldn't invest in a bond that pays me the same as the safe one and, on top of that, can occasionally default so that I don't get my money back. The indifference condition says: during the next year, there is a probability of default p. With probability 1 - p, I get my money back plus the high interest rate r_t + x_t. Against that, with probability p, the bond defaults. In practice there is some recovery on a defaulted bond, much less than the principal, but I'm going to assume it's zero. So the condition is (1 - p)(1 + r_t + x_t) = 1 + r_t, and you can solve this for the risk premium: x_t = (1 + r_t) p / (1 - p), an increasing function of p. Naturally, if I perceive bonds as more likely to default, I require higher compensation in the states where the bond doesn't default. Now, what happens during severe recessions is that actual defaults go up, so the probability of default objectively rises, and people also get much more scared that it will happen, so p tends to go up a lot -- almost always in recessions, but especially in severe ones. The safe rate r may fall or not, we shall see, but the x term dominates: x can move up a lot during recessions. In fact, if I show you what happened during the Great Recession -- same episode as before -- there you have it. This is our x, really. Look how it jumped in 2008. This is, I think, a weighted average of high-yield bonds -- not quite junk, but high yield -- so think of it as the median corporate bond out there. It had to pay 20% more than a Treasury bond: a big difference between what the private sector paid to borrow and what the government paid. This was a big issue. And notice the typical level: these are high-yield issuers, not the prime companies, so there is default risk out there, and they typically have to pay a spread of 3%, 4%, things like that. But during severe events, it can go very, very high. So if you're a corporation trying to borrow here, it's going to be pretty difficult -- not a good time to invest in that sense; it's pretty expensive. So that takes me to the IS-LM model. I now want to bring in these two ingredients. The two modifications I introduce are relevant for the IS.
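The indifference condition just described, (1 - p)(1 + r + x) = 1 + r with zero recovery in default, pins down the spread. A minimal sketch with an illustrative safe rate:

```python
def spread(r, p):
    """Credit spread x solving (1 - p) * (1 + r + x) = 1 + r
    (zero recovery in default): x = (1 + r) * p / (1 - p)."""
    return (1 + r) * p / (1 - p)

r = 0.02  # safe real rate, illustrative
for p in (0.01, 0.05, 0.10, 0.20):
    print(f"p = {p:.0%}  ->  x = {spread(r, p):.2%}")
# The spread is increasing in p, and rises faster than p itself
# as default becomes more likely.
```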
The LM doesn't change. The central bank keeps setting the nominal interest rate; that's what it does and that's its target. The central bank may decide to react to things that happen to expected inflation and credit spreads, but the LM is the same as it used to be. At some point the book makes a simplification and starts setting policy in terms of the real interest rate. I think that's a bad idea, so I'm not going to do that. I'm going to keep our LM as it was, but with the extensions, we do have to modify the only place where the interest rate enters for us: the investment function. The investment function is now not a function of the nominal interest rate but of the real interest rate adjusted for credit risk, I(Y, i - pi^e + x), because that's the real cost of borrowing, if you will, for firms when they want to invest. That's the modification. For this part of the course, I'm going to take expected inflation and the spread as two new parameters; we're not going to model their determination. In the next part of the course we'll have more to say about expected inflation, though not much about the spread. For now, these are just two new parameters. So in our lecture 3 goods-market equilibrium -- remember the ZZ curve, where g, the interest rate, and all those things were held constant? -- we now have two more parameters: expected inflation and the credit spread. That's it; that's lecture 3 again. So what I'm showing you here is what happens in that framework. If the credit spread comes down, or expected inflation rises, for any given nominal interest rate, that shifts the ZZ curve up. Why is that? And when aggregate demand goes up, the multiplier kicks in and we end up with an expansion in output. So I'm saying: for a given nominal interest rate, if expected inflation goes up or the credit spread goes down, it acts almost like an expansionary monetary policy. You get an expansion in aggregate demand. Yes. AUDIENCE: Increase the [INAUDIBLE] because for increased inflation, the firms will be expected to reduce cost more, so they would be more inclined to invest? And for a risk premium declining, they can take and borrow [INAUDIBLE]. RICARDO CABALLERO: Exactly. That's it. The cost of borrowing for firms went down. So that's what I'm saying: those two things operate almost like monetary policy, even though nothing has been done by the Fed, by the central bank. They have the same effect because of where they enter -- they enter exactly like the interest rate. Saying that expected inflation goes up, or that the spread goes down, leads to the same analysis as when we lower i, because they enter in exactly the same place. No? I had drawn diagrams like this before -- that's what you get when you lower the interest rate. Well, the two shocks I described are effectively like lowering the interest rate that is relevant for firms, because lower credit spreads or higher expected inflation mean a lower real borrowing rate. Now, the episode during the global financial crisis that I described was the exact opposite of this. In the global financial crisis, x jumped -- boom -- and, as I showed you before, expected inflation came down a lot.
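To see the mechanics, here is a minimal linear goods-market sketch with made-up parameters, where investment depends on the firms' borrowing rate i - pi^e + x. Raising x (or cutting pi^e) works exactly like raising i; the functional forms and numbers are purely illustrative, not from the lecture.

```python
# Linear sketch: C = c0 + c1*(Y - T), I = b0 + b1*Y - b2*(i - pi_e + x),
# demand Z = C + I + G. All parameters are made up for illustration.
c0, c1 = 100, 0.6
b0, b1, b2 = 100, 0.1, 1000
G, T = 200, 200

def equilibrium_Y(i, pi_e, x):
    """Solve Y = Z(Y) for equilibrium output."""
    autonomous = c0 - c1 * T + b0 - b2 * (i - pi_e + x) + G
    multiplier = 1 / (1 - c1 - b1)
    return multiplier * autonomous

baseline = equilibrium_Y(i=0.04, pi_e=0.02, x=0.02)
# GFC-style shock: i cut to zero, pi_e turns negative, x spikes.
crisis = equilibrium_Y(i=0.00, pi_e=-0.04, x=0.20)
print(baseline, crisis)  # output falls despite the rate cut
```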
Remember, expected inflation came down a lot and went negative, from around 2% to minus 4%. That's a big shock to the real cost of borrowing for firms. And x went up like crazy. That's why, in the global financial crisis, we got exactly the opposite of the expansion I just described: a massive downward shift in the ZZ curve -- a massive increase in x and a massive decline in expected inflation, on a very large scale. It was a massive shock. By the way, when I say the global financial crisis or the Great Recession, those are the same episode: it started as a financial crisis and ended up being a recession everywhere, and a financial crisis everywhere as well. Anyway, what I just described is this: in the IS-LM space, the IS shifts inward a lot. For any given nominal interest rate, if x goes up a lot, there is less investment, and the IS shifts to the left. The same happens if there is a fall in expected inflation. So in the Great Recession, we had two reasons why this curve moved inward a lot: one, expected inflation came down; and two, x went up a lot -- a massive movement to the left. Now, what do you think a central bank should do faced with a situation like this? AUDIENCE: Drop interest rate. RICARDO CABALLERO: Drop the interest rate. Why? Because these shocks operate like shocks to the interest rate -- effectively, it's as if the interest rate relevant for firms had increased a lot -- and so the central bank will try to offset that by lowering its rate. What problem may the central bank face in doing this? AUDIENCE: Reaching low-liquidity trap. RICARDO CABALLERO: Yeah, reaching the zero lower bound -- effectively a liquidity trap -- exactly. There's a limit to how much you can do. And I showed you that that's what happened here: the reason the nominal rate looks so flat, the reason it doesn't move, is that it's against the lower bound. It cannot move. Let me tell you a little bit about what's happening now. So this is now -- remember, the other picture was for the period from 2008 to 2013; now I'm shifting everything forward by 10 years. Still you see that, on average, the white line, the nominal interest rate, is above the yellow line, the real interest rate. Why is that? Yep. AUDIENCE: Positive inflation? RICARDO CABALLERO: Positive expected inflation -- but they're correlated: when inflation is on average positive, expected inflation is also, on average, positive. There's an exception there. When did that happen? There's one point where the real interest rate went above the nominal interest rate. AUDIENCE: COVID recession. RICARDO CABALLERO: Sorry? AUDIENCE: COVID recession? RICARDO CABALLERO: Yeah, exactly. The COVID recession. As I said before, that was a massive, scary shock, and the initial reaction of expected inflation was to come down enormously. That's what we saw.
And you also see this big step down here in the nominal interest rate, and then it's flat. What do you think happened there? Yep -- again, they went all the way down to the maximum they could do. They set the short-term interest rate effectively to zero -- not exactly zero, but essentially -- and they stayed there for a very long time. This, I think, helped the recovery of the US economy a lot, and it's also a big reason for the rally you saw in the equity market into 2021, which you can see in this picture. Notice that the real interest rate went very, very low, and that's the reason equity markets were flying: you had effectively very low real interest rates. So how did that happen? What must have happened during this episode? Yes, the central bank was injecting everything it could, even beyond conventional monetary policy. But let me put it this way: this wedge reflects what? What is that wedge, as a matter of accounting? Expected inflation. The nominal interest rate was at 0 and the real interest rate was at minus 4 here, which means expected inflation must have been 4%. So we had a combination in which the nominal interest rate remained at 0 but inflation was high -- which is not the typical combination we get in recessions like the previous one, demand recessions and financial crises, where inflation goes down in a recession. This was a different shock. After the initial shock, we got lots of bottlenecks on the supply side of the economy, for which we don't have a good model yet -- we'll have to build one later. And when you have problems on the supply side, you can get a situation that feels recessionary, because activity is low, but inflation is high. That's exactly what we had here: inflation was high. For a while we tolerated this high inflation, thinking it was going to be a transitory phenomenon. But then it began to last too long, and when it lasted too long, the Fed reacted. That's when you see them begin to hike interest rates. And as they hiked, initially it didn't do much to real rates, because expected inflation kept rising. Eventually they convinced everyone that they were serious about this, and real interest rates began to rise a lot -- and that's when the equity market collapsed, by the way. You don't know this yet; I'm going to talk about the equity market later on. But believe me, that's essentially what brought down the NASDAQ, primarily, and all these meme stocks and all that. What about today? Well, Houston, we have a problem, because the Fed keeps raising interest rates and inflation is not coming down as much as we expected. In fact, expected inflation initially looked like it was going to decline, and now it's beginning to pick up again. So you have a situation where the Fed wants to be restrictive, but the real interest rate is declining, not rising. That's a problem, and it's happening at this very moment. The Fed is trying to tighten, but financial conditions are relaxing, in a sense, because of an increase in expected inflation -- and, in fact, credit spreads were declining too.
So here is what I just said in terms of expected inflation. You see the big collapse early in COVID, but then it recovered very strongly and went very high. In the middle of 2022, it really shot up, and that's when the Fed got scared and began to increase interest rates in 75-basis-point steps, in a hurry. OK, good. And this is what you see recently. I told you we have a problem now with expected inflation. There is a famous conference that happens in Jackson Hole -- famous mostly because the chairs and governors of central banks around the world meet there for a few days. But there is one speech everyone looks at: the speech of the Chair of the Fed, the US central bank. That conference happened around here, and they were very worried, because expected inflation was just exploding -- 6% or so, unheard-of numbers for the US since the '80s. So the Chair gave a very tough, very hawkish speech, saying: look, this is unacceptable; we are going to do whatever it takes to bring this down. And they were very successful in persuading people. Expected inflation began to decline quickly, which is one reason you see real rates rising very fast -- faster than the nominal rate, in fact, because nominal rates were rising and on top of that expected inflation was plummeting. That led to a very sharp rise in real interest rates, and the collapse in the stock market as a result. What about credit spreads in this episode? Here you see that during the COVID shock we again got a big spike. It was not as large as in the global financial crisis, which was a financial crisis per se, but it was very large. Then eventually it came down, and it came down a lot -- again, that's when you saw all the markets rallying. But then we began to have a problem again, because the Fed wanted to tighten and these credit spreads were coming down. I updated this chart yesterday -- this pickup here is very recent, from last week -- but the point stands: declining credit spreads go against what the Fed wants to do, which is to tighten financial conditions for firms. Now, as I said before, monetary policy typically involves only very short-duration government bonds -- central banks intervene in their own government's bonds, in most places. But this shock was so disconcerting and so large, and it affected corporations so much -- imagine you're in the airline industry and suddenly you get COVID -- that they went beyond traditional, conventional monetary policy. They did something they had already done in the global financial crisis: they began to buy very long-duration US Treasury bonds, 10-year bonds and so on. But they went beyond that and created a facility to buy corporate bonds. That facility was meant to deal with x: you're getting a huge x shock, and they went directly at it to try to bring it down. Why did they want to do that? Because of the reasons we've explained here: the x shock, together with expected inflation coming down, amounted to a big inward shift of the IS.
They did all they could with conventional monetary policy -- they brought the policy rate down. Their further interventions are called large-scale asset purchases; that's the generic name. What they were really trying to do is act on the interest rates that do not show up in the LM. They show up here, in x, which is a parameter of the IS. If I go out there and buy corporate bonds, I'm reducing x, which is a way of shifting the IS back. Corporations can borrow more cheaply if the government is buying their bonds -- that's the whole idea. In Japan, they even bought equity directly -- interventions in the equity market. The same happened in Hong Kong in 1997: there was a massive intervention in the equity market. Typically, central banks don't do that, but when situations get desperate and you are against the zero lower bound -- so you've lost your conventional monetary tool -- they tend to get a little more creative, and that's what they've been doing. OK. Any questions? That's it for today. Yeah, you have a question. AUDIENCE: Could you put x into more tangible terms? I think I'm still sort of trying to figure out what a-- RICARDO CABALLERO: The credit spread. For example, take Boeing -- I don't think Boeing is high yield, but let's say Boeing; it's OK. I'm showing the spread here in terms of 10-year rates. The 10-year rate for the US at this moment is close to 4%. If Boeing wants to borrow for 10 years, it's not going to be able to borrow at 4%; it's going to have to borrow at, say, 7%. That 3% difference is x. AUDIENCE: And that's the credit-- is that credit? RICARDO CABALLERO: That's x, the credit spread, which is linked to the perceived probability of default. I say perceived because -- is it the actual probability of default? Who knows? Who can measure that? There are agencies that try to estimate it, plus whatever extra risk premium you want to put on top of that. AUDIENCE: And so this is tracking the relative reliability of [INAUDIBLE]? RICARDO CABALLERO: Yeah, how unattractive it looks to lend to a corporation versus lending to the US government. And when this line is very high, it looks very unattractive to lend to corporations, and therefore you need to be compensated a lot for it. OK, good.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_3_The_Goods_Market.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: OK, let's start. So what you have there in that picture is the result of a survey of a bunch of economists, who are asked to assess the probability that there is a recession within the next 12 months. Recession means, essentially, a decline in aggregate output. And well, the first thing to notice here is that it's not very good news. There are very high chances that, at least according to these experts, the US enters a recession within the next 12 months or so-- pretty high probability. You can see that that number typically is very, very low, and it goes very high next to recessions. And now we're not in a recession, but there is a very high perceived probability that we may go into a recession in the near future. So how is it that these people come up with this forecast? Well, at some level, either explicitly or implicitly, they must have some model of the determination of equilibrium output. And they need to see certain things that suggest, once you go through a model, that output will decline. So that's what we're going to start doing today. And in fact, that's what we're going to do throughout this course. We're going to try to find ever more complex, perhaps, or richer models of aggregate output determination. So that's essentially what this course is about. We're going to try to understand what it is that drives equilibrium output, and how it is that we get to one specific level of output. That's what it means to find the equilibrium level of output. And we're going to do it in three stages. In the first part of the course-- that is, up to quiz 1-- we're going to focus on the very short run, how output is determined in the very short run, say within a year or so-- or a little more even, but that type of time frame. Then we're going to focus on the medium run. That's at the beginning of the second part of the course. And by the medium run, we're simply going to mean the horizon at which prices begin to adjust sufficiently, OK? Before that, most of the action happens in quantities. There is little movement in goods prices. There are lots of movements in asset prices, but little movement in goods prices. And in the last part of the course, we're going to look at how output is determined over the long run, which is quite different from how output is determined in the short run. The determination of output in the short run is what we mostly mean by business cycle analysis. And the short or medium run, the way we're going to define it here, is what we mean by the business cycle-- the country is in a recession, in a boom, in an expansion. Those are all terminologies of the short or medium run. The determination of equilibrium output in the long run is what we think about when we think about growth. When we talk about why China grows faster than the US today, well, that's a question not about the business cycle. It's a question about the long-run determinants of output growth, OK? And they are even a different class of models. In more advanced treatments, if you were doing a PhD, those things are a lot closer to each other. But in this course, they're going to be very different types of models. It's easier to analyze these things with different types of models than trying to integrate it all in one big machine, OK? But let's start with the simple part.
In the short run, the key mechanism-- something that will keep showing up in all the models and submodels we analyze in the next seven or eight lectures-- is this one. In the very short run, output-- that is, equilibrium output, the thing that these economists are forecasting will decline within the next 12 months-- is determined primarily by what we call demand. So demand will determine output. Thus, if there's a change in demand, that will change production. But when production changes, that will also change income. That you know from national accounts. Remember, we said that we could measure output from the production side, but we also measured it from the income side. And they're exactly the same. More production-- somebody has to receive the proceeds of that: workers and capital owners, the government, whatever. So the second step is that changes in production that were brought about by the changes in demand will lead to a change in income. But when income changes, that will change demand again, and so on and so forth, OK? So that's quintessential short-run macro: to try to understand this aggregate demand, because that's the main driver, and then how it gets multiplied in the short run, OK? In this lecture, we're going to talk primarily about that. So when I say short-run macro, that's the structure I have in mind. And that's what most people have in mind, something where aggregate demand will determine output. That's the reason why, in the short run, you worry a lot about whether consumer confidence is high or low. That's demand. If consumers are very depressed, they tend to reduce demand. If consumers are very bullish, that will tend to increase demand. And since, in the short run, output is determined by demand, the business cycle, whether we have a recession or not, depends a lot on how demand feels. So if somebody's forecasting a recession within the next 12 months, they're really forecasting that demand will decline within the next 12 months, OK? Why they're forecasting that-- that's something we're going to learn in steps as we go through the course. What are the kinds of things they may be thinking about? What are the drags on aggregate demand that are likely to depress demand, and so on and so forth? But we'll get there, OK? Anyway, first, let me tell you about the components of aggregate demand. The first and largest component of aggregate demand is consumption. And what I mean by aggregate demand-- OK, I'll pause for a slide. Let me go over the definitions. Consumption-- which we're going to denote by C-- is the goods and services purchased by consumers, households, and so on. Investment, which we're going to denote by I, is the sum of nonresidential and residential investment-- so equipment and factories on one side, and then residential investment is houses, apartment buildings, stuff like that. These are capital goods, but they are goods and services as well. Government spending-- that's what we're going to denote by G-- is purchases of goods and services by the federal, state, and local governments, excluding-- that's important-- government transfers. What is a government transfer? Many of you may have received one during COVID. The government sent you a check, for example, OK? Well, that check is not part of government expenditure.
That check is like a negative tax and is going to enter somewhere else. When we say government expenditure, we mean things the government purchases, services the government acquires, and so on, OK? And then exports, X, which will play no role until about 10 lectures from now, is purchases of US goods and services-- that is, goods produced by US factories-- by foreigners. IM is the other side of the story, imports. It's the purchase of foreign goods and services by US consumers, US firms, and the US government. So when you buy something that is produced in Germany, well, that's an import. When the Germans buy something that is produced in the US, that's an export, OK? And then the last component is something we're not going to pay any attention to whatsoever in this course, which is inventory investment. Inventory investment is almost accidental. There is some planning in it, but a lot of it is just the difference between sales and production. And over the very short run, there are lots of differences. I mean, unless you're in a bakery, you're not producing and selling immediately. There are certain lags. That's a small thing. It's volatile, but it's a small thing. So we're going to ignore it for this course. We're going to assume, actually, unless we explicitly say the contrary-- and that could show up in a pset; it would never show up in a quiz, because it's not that important-- that this inventory investment is equal to 0. Also, for this part of the course, until further notice, we're going to assume that exports and imports are equal to 0 as well. That's not realistic, but it's easier to analyze what we call a closed economy, an economy that is not interacting with the rest of the world. Again, 10 lectures from now, we're going to open the economy to the rest of the world, and then we're going to have to talk about things like imports, exports, exchange rates, things of that kind. But for now, let's keep it simple, OK? So now, so you get a sense, this is for 2018. The totals change, but the composition doesn't change very much-- of GDP, of output, of aggregate demand. They're all the same in equilibrium, but we'll get there. In GDP, you see that consumption accounts for a big chunk, close to 70% of aggregate demand. That's the reason people worry so much about consumer sentiment and so on. The University of Michigan has many claims to fame, but one of them is that it produces the Index of Consumer Sentiment. And everyone is watching that thing. Anybody that worries about macro or finance is watching that thing, because it tells you a lot about one of the main drivers of equilibrium output. Then you see investment is substantially smaller, but it's large-- in particular, nonresidential investment. Government expenditure is a big component of aggregate demand. And I'm not going to worry too much about the openness part-- for a country like the US, it's relatively small. A small economy typically will have very large exports relative to GDP and so on. But that's not the case for the US. And there you see why we're going to set inventory investment to 0. It's a small thing. It moves a lot more than its size, so it can account for fluctuations in the monthly level of GDP. But it's not that important over a slightly longer period of time, OK? So that's more or less the story. So now this is the model. Please stop me.
Is there anything here you don't understand? Because everything that we build from here to quiz 1 will build on understanding what I'm about to say. Very simple, but if you miss a step here, everything is going to be confusing in the next few lectures. And you're not supposed to understand it on the first run, so it's OK to ask me. But let's make sure that you understand what is going on here. OK, so that's aggregate demand. First, a definition-- we're going to denote aggregate demand by this letter Z. And remember what the exercise is that we're trying to do. Ultimately, what we want to determine is the output, the production of the US economy, say. So when we talk about aggregate demand, we're trying to determine the demand for domestically produced goods, for goods produced in the US. That's what we're trying to pin down. And so that's the reason aggregate demand looks like that: consumption, plus investment, plus G, plus exports. If foreigners demand US goods, that also increases US production-- minus imports, because imports are goods and services that consumers, firms, and the government buy from foreigners, but they're not produced by US companies. So they do not affect the determination of equilibrium output in the US, OK? That's the reason you subtract them. Now, that distinction is not going to matter until 10 lectures from now, because we're going to set X and IM equal to 0 from the point of view of modeling. So all demand is demand for domestically produced goods in this part of the course, OK? So aggregate demand for the US will be this C plus I plus G, as written out just below. So we need to understand what determines C plus I plus G. And at least initially, we're going to keep it very, very simple. We're not going to think too much about what determines investment. In fact, we're going to assume that it's a constant, it's given. So it's determined somewhere else, not in the model I'm about to solve. Government expenditure, the same-- I'm going to assume it's determined by some other priorities, green agendas and stuff like that. It has very little to do with what we're doing here. And then taxes-- something that doesn't show up there, but will show up very shortly-- we're also going to assume have been determined somewhere else. In psets and later on in the course, we're going to endogenize all that, but not now. I'm trying to come up with the simplest possible model of aggregate demand. And I'm making two of these terms trivial, just constants, OK? And I'm going to focus all my effort here on this component, which I already told you is the most important component of aggregate demand, which is consumption, OK? So here we're going to have a function. Something has to move for the model to be interesting. We're going to assume that consumption is an increasing function of disposable income. I'm about to define what disposable income is, but you can imagine what it is. It's something you can use to consume and so on. So very naturally, if you have higher disposable income, you're going to consume more. That's what this says, OK? In reality, that consumption function is a lot more complex. There are lots of things that enter there that we're not modeling for now. But let's start from the basics, OK?
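In symbols, the definition just given is the standard identity, with the closed-economy simplification that holds until the open-economy lectures:

    Z \equiv C + I + G + X - IM, \qquad X = IM = 0 \;\Rightarrow\; Z = C + I + G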
So that's going to be the only behavioral assumption we're going to make for a while: that consumers consume more when they have more disposable income. And then we're going to get even simpler. I'm going to assume that consumption is a linear function of this disposable income. So there's going to be some constant C0, which captures lots of things that we're not modeling here-- for example, the fact that, for any given level of disposable income, if you're richer-- suppose you have some shares, and now the shares double in value-- you probably are going to consume more as well. There are lots of other things that affect consumption aside from your disposable income. But we're not going to model that. So we're going to call it autonomous-- autonomous in the sense that we're not going to determine it here. We're going to take it as a parameter that comes from somewhere else. We may do some experiments moving that variable around, but it's not going to be part of what we model. C1 is a more interesting parameter for this part of the course. And it's what we call the marginal propensity to consume-- out of disposable income, in this case. That is, C1 tells you the share: if you get an extra dollar of disposable income, how much of that do you spend on consumption? So say you get an extra dollar of income. If you spend $0.60 of that extra dollar on the things you normally consume, well, then your C1 is 0.6, OK? That's the marginal propensity to consume. And that's what gives us our increasing function. If you get an extra dollar, you're going to save part of it, but some of it you're going to spend. That part you're going to spend is the C1 that we have there, OK? Good. Now let me tell you how we define disposable income. Disposable income is just equal to income, which is equal to production, minus taxes. That's disposable income. It's whatever you earn as a worker or as a capital owner, less what the government takes out of it. That's your disposable income, and that's what you have to decide how much of to save and how much to consume. So that means that our consumption function can be written that way after all these assumptions I've made: equal to its autonomous component, plus C1, the marginal propensity to consume, times income minus taxes. Is it clear? Yes? So all these are assumptions. Now, they're not crazy assumptions, in the sense that we know that there is a relationship between these two things. Again, the consumption function in practice is much richer than that, and there is lots of randomness, random terms around and so on. But that's not what we're about here. If you want to start with a consumption function, this is a pretty reasonable one to start with, OK? OK, so in the space of disposable income-- or income; I could have put income there, not disposable income-- that's going to look like that. OK? So C0 is that autonomous consumption. It's something you're going to consume regardless of your level of disposable income. There is a minimum consumption you have to have. [CHUCKLES] And then the slope of that is the marginal propensity to consume, which is C1, which is a number between 0 and 1. OK, so let's determine equilibrium output. So we have aggregate demand, which is C plus I plus G, OK? There we are. That was our definition of aggregate demand. I'm going to stick in now the functional forms. Well, these guys are very boring. They're constants.
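Here is the consumption block by itself as a minimal Python sketch; c0, c1, Y, and T are made-up numbers for illustration, not the ones on the slides.

    # Linear consumption function: C = c0 + c1 * (Y - T)
    c0 = 200.0           # autonomous consumption (illustrative)
    c1 = 0.6             # marginal propensity to consume, between 0 and 1
    Y, T = 1000.0, 100.0 # income (= output) and taxes (illustrative)

    Yd = Y - T           # disposable income
    C = c0 + c1 * Yd     # 200 + 0.6 * 900 = 740.0
    print(C)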
And I'm plugging in here the consumption function. So what we have here is that aggregate demand is an increasing function of output, or income, OK? It's also a function of taxes, investment, and so on. But it's an increasing function of output. And this is important because, remember, the goal of this is to find equilibrium output. So here I have, on the right-hand side of my aggregate demand, output. That's good. I have one equation in which output shows up, [CHUCKLES] OK? Now, I cannot find equilibrium output just from this equation. Why is that? So remember, we're trying to build a model to find equilibrium output. That's our goal. That's what will tell us whether we're in a recession or not-- output is low, recession; output is high, we're in a boom. Obviously, I cannot solve it from this. I have two unknowns. What are my two unknowns? Two unknowns, one equation. What is my second unknown there? AUDIENCE: The aggregate demand. RICARDO J. CABALLERO: Aggregate demand, of course. We have to determine Z and Y. So how are we going to do that? Well, using a second equation, which is the equilibrium condition. It's not a function. This is a function. This is not a function. This is an equilibrium condition. It says, in equilibrium-- not outside equilibrium, in equilibrium-- output is equal to aggregate demand. OK? That's what this equilibrium condition tells us. Off equilibrium, this doesn't hold. That's the reason this is not a function. This one holds everywhere; it's a function. This is an equilibrium condition. It says, at equilibrium, aggregate demand is equal to output. So now we're done, because we have two equations, two unknowns, OK? Good. And the reason I pause on this is that I see that mistake made often, that this is interpreted as a function. It's not. It's an equilibrium condition. At equilibrium, it holds. And I'm going to illustrate the same point in the diagram. So let me keep going. So this is just a summary of what we had in the previous slides. And this is the new thing here, which is, in equilibrium, output is equal to aggregate demand. And again, that's what makes this really a short-run model. You see, I'm saying output, in the short run, is whatever demand wants it to be, which is different from the long run, which says, no, no, hold on a second-- how much output you can produce is a function of the capital you have, of the workers you have. Yeah, that's true in the long run. But in the short run, you have lots of flexibility, because you have lots of unused capacity and so on, OK? So this is a big assumption. And there are schools of thought within macroeconomics that are split by this assumption, whether you believe that, in the short run, output is aggregate-demand-determined or not. Ultimately, we tend to believe that in the short run it is-- in the long run, no. Now, sometimes the long run gets to you very quickly. And at this point, we're in a situation like that. That's the reason we're seeing inflation and so on. But that's something you'll understand later on, OK? But for now, this is important. We're saying here, I don't need another equation. I could have done aggregate demand like this and then output as a function of capital, labor, lots of things. I'm not going to do that. I'm going to say, no, no, output will be whatever demand wants it to be, OK? And that means, in equilibrium, they have to be equal. [CHUCKLES] Good.
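Written out, the two equations in the two unknowns Z and Y are the demand function and the equilibrium condition; solving the pair gives the equilibrium-output formula that comes next:

    Z = c_0 + c_1 (Y - T) + \bar{I} + \bar{G}    \quad\text{(function: holds everywhere)}
    Y = Z                                         \quad\text{(equilibrium condition)}
    \Rightarrow\; Y = \frac{1}{1 - c_1}\left(c_0 + \bar{I} + \bar{G} - c_1 T\right)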
You had a question. AUDIENCE: [INAUDIBLE] can you [INAUDIBLE] output and GDP [INAUDIBLE]? RICARDO J. CABALLERO: No, they are the same for us. AUDIENCE: They are [INAUDIBLE]? RICARDO J. CABALLERO: That's our definition. GDP, for us, is output. So when I say aggregate output, I mean GDP. Remember? Real GDP-- we're talking all about real GDP, OK? And it's also equal to income-- not disposable income, but it's equal to income. Remember when we did those little tables where we looked [INAUDIBLE], the three different ways of doing it? Well, the first two were output. And the last one was income. And they had to be the same, OK? So Y is real GDP for us. That's real GDP. What happened is, in the table I showed you, I already used the fact that real GDP is equal to aggregate demand. And that's the reason I showed you the different components of Z-- I showed you that, that, and that, [CHUCKLES] OK? But in equilibrium, they are equal. There's really a figure that will clarify, I think, a lot of that. But let me keep solving this. So what I'm going to do next is just solve it. We have this equilibrium condition. I'm going to plug in my aggregate demand function here, and so I can solve out for equilibrium output. And here we have, for the first time in this course, an equation for equilibrium output. There you are. That's your equilibrium output in this economy. OK? Now, this term here is very famous, and it's very macro. It doesn't happen in micro. It happens in macro only, this term here. It's called the multiplier. And it's a very important macro concept. It's a huge concept in macro. Now, why do you think it's called a multiplier? Well, obviously, it multiplies something. A multiplier sounds like it makes something bigger. [CHUCKLES] So what happens if C1 is greater than 0? Remember, it's between 0 and 1. But what happens if it's greater than 0? What happens with that number there, 1 over 1 minus C1? AUDIENCE: Bigger than 1. RICARDO J. CABALLERO: It's greater than 1. It multiplies. So the reason we call it a multiplier is-- there's nothing deep there. So this thing here is the autonomous stuff. It's what the government spends, what firms are spending on capital. This is autonomous consumption. And this actually is a typo there. There should be a C1 in front of that-- typo. It comes from there, C1 times T. So fix that typo, please. I'm going to upload the slides again with the typo fixed, OK? It comes from here, C1 times T. OK, so that's what this does. It multiplies. So whatever is happening here, whatever it is that the government is spending or so on, this term is multiplied. And that's a huge thing. There was a big debate-- almost always, when you're trying to get out of a recession, and the government starts spending, a big question is, well, how big is the multiplier? If the multiplier is small, you're going to have to spend a lot to get the economy out of the recession. If the multiplier is large, then you're going to have to spend very little, and the multiplier will take you away from that recession. So what is it that makes a multiplier large or small? Well, mechanically, when is that multiplier large? AUDIENCE: When C1 is closer to 1, so when people are spending more of their income. RICARDO J. CABALLERO: Exactly, when C1 is large. And that gives you the logic. And what's very important in macro is why there is a big multiplier. Well, think about what happens in macro.
The government spends-- that increases output. But now output increases income. And if consumers spend a big share of their extra income on consumption again, then that increases output again, which increases income again, and you keep going, OK? So that's the sequence. On the contrary, if consumers are very scared, and they don't want to spend anything of the extra dollar they receive, then you don't get any multiplier, because this initial increase in output that comes from the government expansion does lead to an increase in income, but if consumers don't spend it, it doesn't recirculate into the economy, and then you don't get a multiplier. So that's the reason we call it the multiplier. So that diagram is an important diagram. In that diagram, I'm plotting the aggregate demand function and then this equilibrium condition, output equal to aggregate demand, in the space of aggregate demand and output-- production, and income here. But remember, income is equal to production, OK? So there's your aggregate demand, and there's your 45-degree line, because whatever is on this axis equal to that axis, that's the 45-degree line, OK? So that's your equilibrium condition. It says, at equilibrium, these guys here, aggregate demand, Z, will have to be equal to Y. That's traced there. This is aggregate demand. Why is this line flatter than that? Why is aggregate demand flatter than-- AUDIENCE: Because people don't spend their entire dollar. RICARDO J. CABALLERO: Exactly, because C1 is less than 1. So the slope of the aggregate demand in this space is C1, the marginal propensity to consume. How much more do they demand if they get an extra dollar? Well, they don't demand one extra unit. They demand C1 units, and C1 is less than 1. So that's the reason for this. So if C1 is very small, this line is going to be very flat. If C1 is very large-- a very high propensity to consume-- this red line is going to be very steep. The other one doesn't change, the 45-degree line. And what I said is that-- you see, if I take an off-equilibrium level of output, say, this one, aggregate demand is different from output. It's only at equilibrium that these two things are equal. This function, I can plot everywhere. But this one holds only at equilibrium, OK? That's when these two things are equal. So what I solved here-- here, I just found this point. So the parameters here are C0, C1 times T, and G. They're all shifters of this aggregate demand, up and down. And that point there is exactly that. And all those things are parameters in my aggregate demand. I really want you to internalize this diagram. Any questions about it? Just stare at it a little, because it's going to show up repeatedly. And later, it's not going to show up, but whenever you get confused, the way to get yourself out of that confusion is going to be to go back to the diagram. You'll see. I'll remind you when that's likely to happen. So you'd better understand this diagram. Play with it. Here, the only thing you can move around is the ZZ, the aggregate demand curve, OK? The other thing is an equilibrium condition. [CHUCKLES] You can't move that 45-degree line. But ZZ, you can move around. So let's do a few exercises. One, the most obvious: suppose that C0 increases by $1 billion.
And that could be because we're simply in a better mood. Disposable income is whatever it is today, but there is great expectation that the economy will enter a boom next year. And so you feel richer, and you may decide not to wait until next year-- you may decide to consume more today. That kind of thought experiment can be captured by a C0-type shift, up. And that's what I mean when I talk about consumer sentiment. Consumer sentiment is a lot about C0-- for any given level of income, whether consumers are likely to consume more than they would otherwise, or less. That's what C0 captures. So let's go-- there are no dynamics in this simple model. So if you just were to solve the equation, and I tell you that C0 goes up by $1 billion, what happens to output? Let's keep it simple. Just staring at that equation, if I tell you autonomous consumption goes up by $1 billion, what happens to equilibrium output? Does it go up by more or less than $1 billion? Or exactly $1 billion? AUDIENCE: [INAUDIBLE] RICARDO J. CABALLERO: Exactly. And the multiplier is greater than 1. So we know that output will increase by more than $1 billion-- it will increase by $1 billion times the multiplier. If C1 is 0.5, then equilibrium output will increase by $2 billion. Now, I'm going to get you from the $1 billion to the $2 billion in steps, using the diagram. That's what I intend to do next. OK? So we're starting from this equilibrium output here. This shift here, boom, is increasing C0 by $1 billion. So the distance A to B is $1 billion. That's because what I did is, for any given level of output, I shifted this aggregate demand up by $1 billion-- that is, autonomous consumption up, OK? Well, because output is whatever demand wants, that immediately increases output by $1 billion. So the distance between B and C is also $1 billion. Demand increased by $1 billion-- boom, output immediately catches up. So output increases by $1 billion. But if output increases by $1 billion, what has happened to income? It also increased by $1 billion. Income is the same as output. So income has increased by $1 billion. Well, if income has increased by $1 billion, and C1 is different from 0, that means part of that extra billion is going to be spent on consumption-- second round. So say C1 is 0.5. Then now you get $500 million more of consumption. And that's C to D. That's $500 million. Obviously, this C1 in the figure is less than 0.5, because otherwise this distance would be half of that, but it's not drawn that way. But anyway, you get $500 million more. But if now there's $500 million more of demand, since production does whatever demand wants, then you get $500 million more of production. And if you have $500 million more of production, then you have $500 million more of income. And if you have $500 million more of income, and C1 is greater than 0-- say, 0.5-- you're going to spend $250 million more. But $250 million more of demand will generate $250 million more of production, which also will generate $250 million more of income, which will generate $125 million more of consumption, and so on and so forth. OK? And that's what is happening here. [WHISTLING] Boom. Yeah? AUDIENCE: [INAUDIBLE] the movement from C to D [INAUDIBLE]? RICARDO J. CABALLERO: From C to D.
OK, so this is the initial shift in aggregate demand, up $1 billion. That leads to $1 billion more of production as well, which means $1 billion more of income, OK? But now these consumers not only have this $1 billion higher C0, they also have $1 billion more of income. And since they have $1 billion more of income, they're going to spend part of it-- C1 times that, and I assumed C1 was 0.5. That's what gives me C to D. That's the extra $500 million. And then that segment there is also $500 million. And then it's $250 million, $250 million, $125 million, $125 million, $62.5 million. That's the way you get there. There is an alternative way of finding equilibrium output, which is entirely equivalent. And it's the way it was initially done, by the way. And you'll see later on that a very important curve in this course will be the IS curve, which is a curve that describes all the equilibria in the goods market. We'll get there. But the reason it's called IS is because of this alternative way of deriving the same thing I have derived, which is-- you can arrive at the same equilibrium by saying, look, equilibrium output is that output at which investment is equal to savings. That's the reason that curve is going to be called IS: investment equal to savings, S. So let me very quickly do it for you and then make a point and connect the two things. So private saving is what consumers and firms do. It's just disposable income minus consumption. That's your saving, OK? So it's equal to Y minus T-- that's disposable income-- minus C. Government saving is taxes minus government expenditure. So if the government has a deficit, that thing is negative. Governments often have negative saving, OK? If it has a surplus-- if taxes are greater than G-- then you have a fiscal surplus. That rarely happens in the US, or in the Americas in general. It happens a lot in Asia, but it doesn't happen very much in this part of the world. But there we are. So in equilibrium, investment, I, has to be equal to saving. That's what you are going to use the saving for, to invest, OK? So investment is equal to the sum of savings. I can replace all that in here, and you see that I get exactly the same equilibrium condition I had before-- output equal to aggregate demand. So this is an entirely equivalent way of deriving this. And I just want to show you this because it's the way it was originally done, and you'll understand better the terminology we use later on if you see that this is an equivalent way. This is also a nice way of illustrating something: why macro can be counterintuitive sometimes. Microeconomics is very intuitive. Things make sense. It's like physics. It makes sense. Macro can be confusing. For example, there is a well-known paradox of savings-- in the short run, not in the long run. In the short run, you have the paradox of saving. So we all think that saving more is a good thing. Our parents teach us that it is a good thing to save more and so on. And in general, that is true. You'll do better in life if you save a little more and so on. But it's not true for the macroeconomy in the short run-- unless you are in an overheated economy, where it could help. Otherwise, it's not very good for equilibrium output. And let me show you that very quickly with the expression I just showed you. Remember I said equilibrium output is pinned down by investment equal to saving. And private saving here is an increasing function of output, OK?
The function has a slope of 1 minus C1. C1 is the share of income that you spend on consumption. Therefore, 1 minus C1 is the share of your income that you save, OK? So this function is increasing, with a slope of 1 minus C1. So suppose I tell you now that we all decided to learn the lessons of our parents and say, OK, we should all save more. That means, for any given level of income, we now all decide to save more. That means the S function shifts up. For any given level of income, we save more. But we have a problem there, because now we have more savings than investment. So how do we restore equilibrium? That's not an equilibrium. So now we've all decided to be more prudent and save a little more. At the level of the economy as a whole, now we have more saving than investment. That can't happen. It's not an equilibrium. What restores equilibrium? Well, in this very simple model, investment is fixed. So nothing can adjust on the investment side, because it's fixed. Later on, it's going to move, but for now it's fixed. Nothing can adjust in the public savings part, because we assumed that it's exogenous. So something has to happen endogenously that reverses the increase in savings. And the only thing that can happen endogenously here is a decline in output-- output declines, saving declines. So here you end up in a situation in which we all decided to be better people, save a little more, and we ended up sinking the economy into a recession-- output declined. That's the reason it's called the paradox of saving. That's not going to happen to you individually, but for an economy as a whole, it can happen. That's the reason I said macro can be counterintuitive. So look, if you don't like this way of finding equilibrium output-- and it's not the main way we're going to use-- just ignore it. I just wanted you to know it. The thing you really need to understand is not this. It's this, that, that. That you need to understand. So let me illustrate the paradox of saving in the model we're using, in the one I want you to really remember. Well, the paradox of saving I can capture by a decline in C0. For any given level of income, we now decide to consume less. If we consume less for any given level of income, that means we're saving more. So I can capture the fact that we have all become more prudent in this diagram by a decline in aggregate demand. But if aggregate demand declines-- so suppose we start at this equilibrium level of output, and then, all of a sudden, we say, OK, enough is enough, we need to start saving more. Then what happens? Well, aggregate demand declines. For any given level of income, if you're going to save more, that means you're going to consume less. So aggregate demand declines. But what happens when aggregate demand declines? AUDIENCE: Output declines. RICARDO J. CABALLERO: Output declines. What happens when output declines? AUDIENCE: [INAUDIBLE] RICARDO J. CABALLERO: Income declines. What happens when income declines? Well, part of that income you consume. So you're going to consume less-- C1 times that. And then you get the multiplier working against you. So if we all now decide to save more, not only does output fall by the same amount that we increased savings, it actually declines by more than that, because you get the multiplier working against you, OK?
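The round-by-round story can be checked numerically. This sketch uses the $1 billion shift and C1 = 0.5 from the earlier exercise; summing the rounds converges to the multiplier answer of $2 billion.

    # Multiplier as a geometric series: dY = dC0 * (1 + c1 + c1**2 + ...) = dC0 / (1 - c1)
    c1 = 0.5
    dC0 = 1.0                  # initial $1 billion shift in autonomous consumption

    total, round_spend = 0.0, dC0
    for _ in range(30):        # 30 rounds is plenty for convergence here
        total += round_spend   # each round's extra demand becomes extra output/income...
        round_spend *= c1      # ...of which a share c1 is re-spent next round
    print(total)               # ~2.0, i.e. about $2 billion
    print(dC0 / (1.0 - c1))    # closed form: exactly 2.0

The same series run with a negative dC0 is the paradox of saving: a $1 billion attempt to save more ends up cutting output by $2 billion.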
That's the reason a big role of policymakers in recessions, really, is to try to maintain calm, because you can get into these kinds of things. Everybody gets scared, and we all get scared. The economy can implode just out of bad sentiment and so on. Now we're on the opposite side of the cycle. We would want output to decline a little, because we are having other problems-- inflation and so on; again, something we'll discuss later. So now you may want to scare consumers a little. And in fact, the governors of the Federal Reserve-- and the same is happening in other places in the world-- are doing just that. When they go out there and say the economy is too hot, we're going to have to mess up this economy a little, [CHUCKLES] they're telling us that. And the first to listen to these things is the financial markets. So every time they come out and make a speech of that kind, equity markets decline. Equity markets capture early the mood that consumers will have in the future. But that's the message, OK? So at this moment, really, policymakers-- at least the central banks-- are trying to do just that: depress consumers a little bit so we can cool off the economy a bit. OK. Any questions? Again, a very important lecture, because we're going to build on this, and later on, this is going to be always in the background. Until we actually get to the third part of the course, the key model will be this. This will be in the background. More things will be happening on top. But whenever I ask you a question-- ah, one example-- what else would produce a situation like this? What kind of policy would generate that movement? Well, at this point, we haven't introduced monetary policy, so you cannot talk about monetary policy. But we do have another kind of policy we could talk about. Remember? Fiscal policy. G and T-- those are fiscal parameters. When G goes down or T goes up, we call that contractionary fiscal policy. Why contractionary? Because it contracts aggregate demand. If G goes down, clearly aggregate demand goes down immediately. If T goes up, well, disposable income for any given level of income goes down, and therefore consumption goes down. So we call a decline in G or an increase in T contractionary fiscal policy. The opposite-- if G goes up, or T goes down-- we call expansionary fiscal policy. So I take you back to this diagram here, and I ask you the question again: what kind of fiscal policy will generate this picture? Contractionary or expansionary? AUDIENCE: Contractionary. RICARDO J. CABALLERO: Contractionary. A good mnemonic: the output declines. So contractionary-- that is, a reduction in G, in government expenditure, or an increase in taxes, will shift that curve down. And then the multiplier will make it even more contractionary than the initial fiscal impulse. OK? Very good, see you on Wednesday.
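As a last numerical sketch of the fiscal-policy point (illustrative numbers, with C1 = 0.5 as before): a cut in G is fully multiplied, while a tax increase works through C1 first.

    # Fiscal policy in the multiplier model (illustrative numbers).
    c1 = 0.5
    multiplier = 1.0 / (1.0 - c1)        # = 2.0

    dG = -100.0                          # contractionary: G falls by 100
    dT = +100.0                          # contractionary: T rises by 100

    dY_from_G = multiplier * dG          # = -200: the spending cut, fully multiplied
    dY_from_T = multiplier * (-c1 * dT)  # = -100: the tax hike enters through -c1*T
    print(dY_from_G, dY_from_T)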
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_23_Asset_Pricing.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: OK, so let me continue with the topic of the previous lecture, which is asset pricing. And we said the tricky thing with asset pricing is that the payoff from holding an asset comes in the future. And that implies at least two things. The first one is that we need to have a method to value future returns as of today, OK? After all, if you want to buy a financial asset, you need to pay for it today with dollars of today, and you expect to receive some payoff in the future. You need to be able to compare these two things. And the second, related point is that, because this payoff is in the future, you need to have expectations about it. So those are the two concepts we play with. And there is a third related concept, which is that, because the payoff comes in the future, many things can happen in between. And so there is also a concept of risk. Those are the three elements we discussed. And remember-- I'm going to go very quickly over what we did in the previous lecture, because I could see some faces. So let me go quickly over that and then continue with equity, which was the next step. So the first step says, OK, ignore the expectations part for now, and risk and so on. Assume that you know the future. And we ask the question, well, how do we value $1 next year? In particular, we ask, is it equivalent to having $1 today? And the answer quickly became no, because imagine that you had the dollar today. Then you can invest it for a year, and you get the one-year interest rate as a return. So with $1 today, you can do more than with $1 in the future. In fact, that calculation gave us the exact recipe for valuing $1 in the future, because in order to get $1 in the future, I don't need $1 today. I need 1 over 1 plus the interest rate. I invest this in the one-year bond, and I get a return of the interest rate on that amount. That gives me exactly $1 in the future, OK? So that gives us a very natural way of valuing $1 next year. It's just 1 over 1 plus the interest rate. And by the same logic, if I have $1 today, and I want to invest it for two years, well, I'm going to earn that interest rate for the first year. And then I'm going to earn the second year's interest rate on the full proceeds, not on the original $1-- on the 1 plus i t dollars, I'm going to earn 1 plus i t plus 1. And so I can generate a lot-- [CHUCKLES] if the interest rate is 10% on average, $1 today generates $1.21 two years from now. So that tells you, by the same logic, that 1 over 1.21 dollars today is equivalent to $1 two years in the future. So then we said, let's pick a very general asset, an asset that pays Zt dollars this year, then Zt plus 1 dollars one year from now, Zt plus 2 dollars two years from now, and so on and so forth, up to n years ahead. What is the value of that asset today? Well, you apply exactly the same logic that we applied here for every single year in the future. And you get what's called the present discounted value of those cash flows. That gives you the value today-- present, discounted; those are the discount factors, 1 over 1 plus the interest rates. And that's the value that you get out of that. So that asset has that present discounted value of future cash flows. And that should be more or less the price that you are willing to pay for that asset, OK? And then we introduced expectations. OK, but we're talking about cash flows in the future. In many cases, we don't know them. Well, we don't know two things.
First, we don't know what the cash flows may be. For a very safe bond, you do know the cash flows. [CHUCKLES] But for almost any other asset, you don't know exactly the cash flows you'll receive. And you don't know what the future one-year interest rate will be. So that took us to the concept of expected present discounted value, in which you just replace all the things we don't know today with the expectations of those things. So we don't know the cash flows in the future-- we have an expectation. That's what you put there. And we know the current interest rate, but we don't know the future one-year interest rate, two-year interest rate, n-year interest rate, and so on. So we put an expectation there, OK? So that's that. And then I gave you some special cases. That's the case in which the interest rate is constant. That's the case in which the payoffs in the future, the coupons, are constant. And that's the case in which both are constant, the interest rate and the payoffs in the future, for a financial asset that matures in n years. And that's the case of an asset that does not mature, that goes on forever under these conditions. And then I said, well, suppose that we look at the ex-dividend price. That means after this year's dividend, or this year's payment, that's the value. And I made the point there-- something that will be present in almost any asset price-- that as interest rates go down, asset values tend to go up, OK? And the reason for that is that assets pay something in the future. And if the interest rate falls, then you discount the future less. So whatever you're going to receive is valued more, because you discount the future less, OK? In fact, in this particular case, if the interest rate goes to 0, the value of that asset goes to infinity, because you don't discount the future, and you're going to receive infinite payments in the future, OK? Then we looked at specific financial assets. And we first looked at bonds. And we said one important distinction-- whenever you buy a bond, one of the things you want to know is the maturity of that bond. Is it a one-year bond, a two-year bond, five, 10, 30? Those are the typical US Treasury bonds, at least. You want to know the maturity of the bond. So then we went into pricing different bonds. The simplest bond is a one-year bond-- suppose we have a bond that pays $100 at the end of the year, and then it matures. Well, the price of that bond should be the present discounted value of the flows of that bond, which is $100 divided by 1 plus the one-year interest rate at time t, OK? And notice again here, by the same logic as before, that if the one-year interest rate goes up, the price of that bond will decline, because that bond pays you one year from now. And one year from now, when the interest rate is higher, that payment is worth a little less than it was when the interest rate was a little lower, OK? And then we looked at a two-year bond, a bond that pays nothing until two years from now. And we said, well, two years from now, it pays $100 and then matures. Well, the price of that bond will be this, OK? And notice that, in this case, the price of a two-year bond at time t goes down if either of the one-year rates goes up, OK?
It can be the first one-- this year's one-year rate-- or maybe the expectation that the one-year rate next year will go up, OK? Good. Then I introduced an important concept, which is this concept of arbitrage pricing: two instruments should give you the same return-- we're leaving risk considerations aside-- when you compare them over the same holding period, OK? So in this particular example, I said, look, a one-year bond, and a two-year bond that you hold for only one year, should give you more or less the same return, OK? So this is the return you get from $1 invested in a one-year bond. It should be equal to the return you get by investing in a two-year bond and selling that bond after one year. And that's the expression we had here. This is what you pay for a two-year bond. And this is what you expect to be paid for that bond when you sell it one year from now. Notice that, one year from now, the two-year bond will be a one-year bond, because one year will have expired. That's the reason we have the subscript P1t plus 1 here. So that means that we can solve from here that P2t is simply that. And there is an expression like the one we had for the one-year bond at time t; there is one for t plus 1. We put expectations because we don't know the actual interest rate in the future. And then I stuck this into there. And we got exactly the same price that we got with the expected present discounted value approach, OK? And so this arbitrage way of pricing things is an incredibly powerful tool that is used very extensively in finance. These are simple calculations. But when an asset gets to be tricky, much more complicated, this is very useful. Then we talked about bond yields. And bond yields are defined as the constant interest rate that is consistent with the current price of that particular bond. So in the case of the two-year bond, we call the two-year rate that interest rate which is constant over the two periods. That's the reason I square it. It's not i1t times 1 plus i1t plus 1. I square it. It's not that rates are constant over time-- the two-year interest rate may be moving a lot. I mean, the Fed just hiked by 25 basis points. I'm sure all the rates are moving at this moment. So the rates can be moving at all points in time. But what we define as the yield is: at one point in time, you tell me the price of the bond, you tell me the payoff of the bond, then what is the constant interest rate that makes this expression equal to the actual price? That's the way we define the two-year rate. And if you have a bond that pays 100 n years from now, then there will be a constant interest rate, i n, that gives you $100 divided by 1 plus i nt to the nth power. When you set that equal to the actual price of the bond-- the one you get out of the expected present discounted value, or out of arbitrage-- then you have found the yield, or the yield to maturity. We already got the price from the previous slide. We know that the price of this two-year bond is going to be 100 divided by this product of 1 plus the one-year interest rates. So this has to be equal to that. That's the way, actually, you calculate the two-year yield. And the numerators are the same. That means the denominators have to be the same.
And this implies, approximately, that the two-year rate is a sort of average of the expected one-year rates, OK? That means that, when you expect the one-year rate to be rising over time, the two-year rate will be above the one-year rate today. That's when we say the yield curve is steep. Remember I showed you this? When the curve looks like that, steep, it means that the two-year rate is higher than the one-year rate, the three-year rate is higher than the two-year rate, and so on and so forth. That happens when the market expects the one-year rate to be rising over time, because remember, the two-year rate is the average of the current one-year rate and the expected one-year rate one year from now. For that average to be higher than the current one-year rate, it has to be the case that the expected one-year rate one year from now is higher than the current one-year rate, OK? So that's when you get an upward-sloping term structure. And you get a downward-sloping term structure-- which is the way it looks right now; actually, right now, it looks very downward-sloping. There you are. It looks very downward-sloping-- when people expect that we're getting to the peak of the current policy rate, of short-term interest rates. And so people expect the interest rate to decline going forward. And that's the reason the two-year rate now is lower than the one-year rate, and the five-year rate is lower than the two-year rate, and so on. And as you can see here, it's very steeply inverted. Then we said, well, let's add risk, because-- yeah, sure. Here we assumed that you were indifferent between investing in a completely safe one-year bond and a two-year bond for which you had to form an expectation about the price. But that price could move around. So there is risk in that price-- the price of the bond as a one-year bond, one year from now. So we added risk. And there are two types of risk in bonds. One is default risk: they had promised to pay you $100, but it may happen that they cannot pay you the $100-- the corporation or the government and so on. Argentina defaults on its bonds regularly, for example. Many of the regional banks that have gone under will default on their bonds as well-- so that kind of risk. But we'll remove that risk, and I'm going to focus mostly on the price risk, because I'm going to be talking mostly about US Treasury bonds. US Treasury bonds have no default risk, we think. I mean, there could be an event a few weeks from now, but no one expects that to be a lasting event. If it is, there is a real mess. But in any event, there is also a price risk, because you have to hold this and then sell it after one year. And you don't know exactly what the price will be. There is a risk associated with that. So that means that, really, you shouldn't equalize the return on the one-year bond to the return you expect to get on the two-year bond. You should add a little compensation for holding the two-year bond, for going the two-year-bond route. And so rather than expecting to make 1 plus i1t on the two-year bond after one year, you should expect to earn a little more. And that's what this xb being positive reflects.
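The pricing and yield relations just reviewed, in a minimal Python sketch. The rates are assumed for illustration, and the risk premium is left out here (it gets added next).

    # One- and two-year bonds paying $100 at maturity, and the two-year yield.
    i1t = 0.03      # current one-year rate (assumed)
    i1t_e = 0.05    # expected one-year rate next year (assumed)

    P1 = 100 / (1 + i1t)                  # one-year bond price
    P2 = 100 / ((1 + i1t) * (1 + i1t_e))  # two-year bond price

    i2t = (100 / P2) ** 0.5 - 1           # yield: (1 + i2t)**2 = (1 + i1t)(1 + i1t_e)
    print(P1, P2, i2t)                    # i2t is about 0.04, roughly the average of 3% and 5%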
And so in that case, the price of the two-year bond is a little different from what we had. In fact, it's a little lower than what we had, because that's the way you compensate for risk. I sell you an instrument a little cheaper than it would have been in the absence of risk, so you expect to get a slightly higher return out of it. So this price is lower than the price without the risk premium here, no? But it is still promising you $100. So that's exactly how you get more return out of it-- because you're buying something at a lower price, OK? So I can do the same logic now and see what the two-year rate is. And now that I take this risk into account, you have that the two-year rate is the average not only of the expected one-year rates, but also includes a risk premium. And it tends to be the case that the further out on the curve you are, the larger is the risk premium. It's called the term premium, because term is the same as maturity, OK? Actually, sometimes that premium is negative. Now it's positive, but until very recently, that xb was negative. And the reason for that-- you don't need to understand this now-- is that long-term bonds were great hedges against any major event, a financial crisis or something like that, because in a financial crisis or a major disaster, interest rates tend to fall. And when interest rates fall, the price of bonds goes up, [CHUCKLES] OK? And so that was a good hedge. If you wanted to protect your portfolio of equities and so on against a major catastrophic event like a financial crisis or a war or something like that, it was not a bad idea to have some long-term US Treasury bonds in your portfolio, because they would tend to go up precisely when everything else was going to be losing money. And so, as a result, the term premium tended to be negative. Now that's not the case, because now one of the biggest risks is inflation. And so if there is an inflationary spike, then interest rates will go up, not down. And that means the price of bonds will decline. So they will decline at the wrong time, OK? The price of long-term bonds now will tend to decline when everything else is also plummeting. I mean, if we get an inflation surprise, and inflation is a lot higher than people expected, asset prices are going to decline-- all of them, including long-term bonds. And that's the reason this xb is now positive. OK, so I think that's where we were at in the previous lecture. Any questions about that? The next step is to talk about equity. No? Yeah? AUDIENCE: Why don't we add the risk premium to the interest rate on your note? RICARDO J. CABALLERO: Well, because next year, that one-- for this particular bond, that bond will have no risk, because it will have one year to go. And at the end of that year, you're going to get the $100. So there is no risk in that $100. If it was a three-year bond, then you would have a risk premium in two of those years, but you wouldn't have it in the last one, because in the last one, you're just going to receive the $100. [CHUCKLES] If the bond could default-- because I'm only looking at price risk in the bond-- then I would add an extra term there for default risk. But here I'm just looking at price risk. And I'm assuming the unit of time is one year.
So just one year before it expires, there is no more risk, because there is no price in between, and you're going to receive $100 at the end of the year. In reality, time is continuous. So every second, there is a little bit of risk. So you have a little bit of that risk all the time except for the last second. But I'm looking at a simple example, where things happen every one year only. In the book, I think they mess up, actually. They put the risk premium in the wrong place. There was another question. No? OK. So let's look at stock prices now. So stock prices have two key differences with respect to bonds -- well, several differences, but two that I want to highlight. The first is that they don't pay coupons, a fixed amount. They don't promise to pay you $100 two years from now or anything like that. They pay dividends. They tell you, we have a policy of paying dividends. And different companies even differentiate themselves by how much they promise to give you on average in dividends. But it's a promise that, if everything goes as planned, they'll pay you those dividends. [CHUCKLES] It's not a commitment to pay you a dividend. It's very different from a bond. A bond says, I'll pay you a coupon of this amount every six months. And if you don't pay that coupon, that's a default. There's nothing like that in equity. With equity, you buy shares of Apple, and you look at the history of dividends, what the CEO told you in the last release. And you think, OK, these are more or less going to be my dividends. But there's no commitment. They will always tell you what their plan is. But it's a plan. It's not a commitment. So that's the first thing. It doesn't have fixed coupons or anything like that. There's no commitment. And in that sense, there is no such thing as default, because there was no commitment. So there is no default. If a company has to cut dividends to 0, that's not a default. Conditions changed. That's it. There was no commitment. The second feature is that they don't have a fixed terminal date. 99.9999999% of bonds do have a terminal date. They have a maturity. There are a few exceptions, which are called perpetuities -- I think the US has none, for example. But most bonds have a maturity, OK? Equity doesn't come that way. You don't buy shares of Apple that will be retired 30 years from now, OK? They will be there as long as Apple exists. Of course, if you had shares of First Republic Bank, you have nothing now [CHUCKLES] -- but that was not the original plan. If First Republic Bank had been more successful, the shares would have survived for a very long period of time. So there's no sense of maturity. In principle, equity can last forever. So I'm going to use arbitrage to price equity. So let me set up the following portfolio of options here. One is our old one-year bond. So you can invest your dollar today in a one-year bond. The alternative -- I'm going to say there is some equity out there. And I'm going to call the price of that equity Q and the dividend of that equity D, OK? So this prices the stock by arbitrage. Now, equity is risky. I mean, it's much riskier than bonds, unless you are into Argentinian bonds or things like that. So there is always a risk premium -- equity should trade at a risk premium, the equity risk premium. So I'm going to put an xs here.
So what do you expect to get from holding it? Arbitrage means the same holding period. So I'm going to compare investing in a one-year safe bond versus buying equity today -- buying a stock, holding it for a year, and then selling it -- because it has to be the same holding period. I cannot do arbitrage across different holding periods. That's a one-year holding period. So I'm saying, this is what I'm going to get from the bond. I'm going to require some risk compensation on top of that, because equity is risky. So I'm going to want that. And this is what I'm going to get -- that's my return on equity. This is what I'm going to pay today for the stock -- say, for a share of Apple. This is the dividend I expect to get at the end of the year. And this is the price at which I expect to sell that share one year from now, OK? So that's the return I'm expecting to get from holding the share of Apple for one period. And that's what I need to compare with holding the one-year bond for one year. But I also want to be compensated for risk, OK? Good. Is this clear? OK. I don't know whether silence means yes or no. But we did something like this with the two-year bond, except that we didn't have a dividend there, because there was no coupon. We only had a final payment of 100. But we did this already when we compared the one-year bond with holding the two-year bond for one period. We had exactly that, except that the expected dividend there was 0, because there was no payment at the intermediate date, OK? Good. So we know this concept already. The only difference here is, again, that there is an expected dividend and, second, that we have a risk premium here, which we also added for bonds. But for equities, I said, it's typically much larger than for bonds, especially if you're talking about Treasury bonds. So I'm going to reorganize this to solve for the price, this Qt here. That's what I want to figure out -- what is the price of the share of Apple, OK? Well, I can reorganize this, which means dividing these two guys here by 1 plus i1t plus xs, and I get this. So the price is equal to the discounted expected dividend -- I have to discount it because I expect to receive it one year from now, and I also want compensation for risk -- plus the discounted value of the money I'm going to get from selling the share of Apple one year from now, which I also discount by the interest rate, plus a risk premium, because that's a risky investment. So that's what we have. Now notice that, at t plus 1, I will have an expression like that again. When we did the two-year bond, we didn't have an expression like that, because after one year, the two-year bond was going to be a one-year bond. And so we didn't need to put a price there. We just put the $100, OK? Here it's different, because we said this equity never expires. Unless the company goes bankrupt, it's there. So at the next date, I'm going to have an expression exactly like that -- an expected dividend at t plus 2 and an expected price at t plus 2 and so on and so forth. That means I can replace this expression here by an expression like this, shifted by one year. And I can keep doing that. If I do that, I'm going to get two expected dividends here, and then I'm going to get something like this, but shifted by one year and discounted by two terms in the denominator.
And then I'm going to get an expected Qt plus 2 around here, OK? Well, I can do a substitution of that as well, again, with everything shifted by two years, and so on. So I can keep going and going and going, forever. If you keep doing it, you're going to end up with an expression that gives you the price of the asset as the expected present discounted value of all the future dividends you expect. You see? I'm summing here Dt plus 2, 3, 4, 5. And it doesn't stop there. If I stop here, I'm going to have a Qe t plus n plus 1 left over. Well, I can replace that thing again. And I can keep going forever. So you're going to sum the discounted expected dividends out to infinity. Now, each future dividend is discounted more and more heavily, because the denominator is growing and growing the further out in the future it is. It's worth less and less, OK? But still, it can go on forever. And in fact, even if you substitute this a million times, there is still going to be a little price at the very end floating around, [LAUGHS] discounted. It will never go away. So it never ends. There is no maturity. They keep going, OK? Now, we did everything up to now in nominal terms -- and that's the reason I didn't want to spend much time on it. You can do everything in real terms as well. All that happens is you remove the dollars and are careful to replace the nominal interest rate by the real interest rate -- but nothing deep there. You can go to real pricing, nominal pricing, and so on. But the important concept is not that. It's this -- we call it, by the way, the fundamental value of equity, or of a stock. It's the expected present discounted value of all the dividends. And you have to discount it by the appropriate discount factor, which includes the interest rate and the risk premium. That's what we typically call fundamentals. And we differentiate that from what we sometimes call -- I'm going to show you an example later on -- bubbles, when the price seems to exceed any reasonable sense of fundamentals. OK, good. OK, let me start going back to things that we worry about in this course. And in fact, it's a big issue. I don't know what is happening in markets now. What the Fed did was widely anticipated. But markets often find a way to react to things, even if they were anticipated. So let me ask you the following question. What do you think is the effect of an expansionary monetary policy on the asset prices we have discussed, so bonds and equity? Let's start with bonds first. What do you think is the effect of an expansionary monetary policy -- that means a reduction in the interest rate -- on the price of your one-year bond, two-year bond, any year, you pick? We already talked about that earlier. [CHUCKLES] It goes up. The price of a bond is inversely related to the interest rate, because a bond is something whose payoff is in the future. That thing in the future is worth more if the interest rate goes down. There is less discounting of it. So the price of any bond here will go up -- the one-year, two-year, three-year, five-year. All of them will go up, assuming that nothing else changes as a result of the monetary policy. What happens is, sometimes markets think, oops, the Fed messed up.
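As a numerical aside before we move on, here is a minimal sketch of the two steps just described, with entirely hypothetical numbers: the one-period arbitrage price, and the fundamental value as the discounted sum of all expected dividends (with a constant assumed dividend growth rate g, so the sum can be checked against the closed form D/(i + xs - g)):

```python
# One-period arbitrage pricing: (1 + i1 + xs) * Q_t = E[D_{t+1}] + E[Q_{t+1}].
# All numbers are hypothetical.

i1, xs = 0.05, 0.04      # one-year safe rate and equity risk premium (assumed)
d_next = 3.0             # dividend expected at the end of the year
q_next = 105.0           # price you expect to sell at one year from now

q_today = (d_next + q_next) / (1 + i1 + xs)
print(q_today)           # ~ 99.08

# Fundamental value: substitute the expected future price forward and sum
# the discounted expected dividends. With dividends assumed to grow at g:
g = 0.02
fundamental = sum(d_next * (1 + g) ** (t - 1) / (1 + i1 + xs) ** t
                  for t in range(1, 1000))
print(fundamental, d_next / (i1 + xs - g))   # both ~ 42.9
```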
And that leads to lots of changes in the whole term structure and things like that, because they expect the market to react in strange ways to this mistake made by the Fed. But here I'm saying, suppose that the Fed just cuts the interest rate once, and everyone believes that the Fed will continue to do so and so on. Well, then you're going to get that the price of bonds will go up. What will happen to the price of stocks? Yeah, you want to answer. [LAUGHS] AUDIENCE: It would go up. RICARDO J. CABALLERO: Up. But it's important to say that it will go up probably for two reasons. The first one is that a lot of the price of an equity -- actually even more so than for a bond -- has to do with expected payoffs in the future. So if I lower interest rates, just the effect of discounting will tend to raise the price. So even if I don't change the expected dividends at all, the fact that the interest rate goes down will tend to raise the price of equity, for the same reason that the price of a bond went up, OK? It's the same logic. But there's an extra kick here for equity, which is what? Something that bonds did not have, but equity does -- at least an equity whose payoff is positively related to aggregate activity, which is what I'm assuming here. AUDIENCE: [INAUDIBLE] bond is [INAUDIBLE] fixed amount before stocks and can't guarantee that pays you the same dividend. RICARDO J. CABALLERO: Well, yeah, that's the logic. Here, the expansionary monetary policy is cutting interest rates. But as a result of that, output is going up. When output is going up, sales will go up, revenues will go up, dividends will probably go up as well. So monetary policy can have very large effects. I mean, people in financial markets are looking at the Fed all the time [CHUCKLES] because it can have a big impact on the price of those assets. And on equity, in particular, it can be very strong. And in fact, that's one of the ways monetary policy works. When the Fed cuts interest rates, it inflates the value of asset prices. And that creates more wealth. People feel richer, consume more, blah, blah, blah, blah, blah. Firms feel richer too. [CHUCKLES] They invest more and so on. That's deliberate, in a sense. That's one of the main mechanisms through which monetary policy affects aggregate demand. It just creates wealth. And when there's too much aggregate demand, like is going on now -- that's the reason we have inflation and so on. In 2022, the Fed went out and deliberately destroyed wealth [CHUCKLES] because that's what [INAUDIBLE] Raised interest rates a lot, the price of equity came down, even houses began to wobble. The price of Treasury bonds sort of collapsed and so on and so forth, OK? Good. Another experiment that we did early on, lecture 3, 4, around there, is what happens when there is an increase in consumer spending? Remember, we had a C0 floating around, an autonomous consumption component. So suppose that that goes up. What do you think happens to asset prices? And this is a big issue these days, actually. AUDIENCE: That depends on how the Federal Reserve reacts to it by raising or lowering interest rates. RICARDO J. CABALLERO: Exactly. That's right. That's very good. It depends a lot -- I mean, every day financial markets receive releases of news of all sorts. And in financial markets, people always think, OK, this is the news.
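As an aside, here is a minimal sketch of the two effects of a rate cut just discussed, with hypothetical numbers; the Gordon-style formula D/(i + xs - g) stands in for the discounted dividend sum:

```python
# Effect of a rate cut on the two asset prices. Numbers are hypothetical.

def bond_price(i, face=100.0, years=2):
    # zero-coupon bond: payoff discounted at the rate i
    return face / (1 + i) ** years

def stock_price(i, xs, d, g):
    # shortcut for the discounted dividend sum with constant growth g
    return d / (i + xs - g)

print(bond_price(0.05), bond_price(0.04))   # the bond rises on discounting alone

print(stock_price(0.05, 0.04, 3.0, 0.02),   # stock: discounting effect only
      stock_price(0.04, 0.04, 3.2, 0.02))   # plus higher expected dividends
                                            # as output and sales pick up
```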
The obvious reading is that this is good news, because it will tend to increase output, and output will increase dividends. That's a good thing for stocks. But the immediate reaction is, whoa, what will the Fed do about this? Does the Fed like that we have more aggregate demand or not? [CHUCKLES] OK? And so that's key here. So suppose that, in this case, the Fed does not like it -- like today. The Fed doesn't want more aggregate demand today. There's no central bank around the world -- maybe in China, but there's no other central bank around the world that wants more aggregate demand, OK? So if the release says consumers are very bullish now, that's not good news. The financial markets immediately say, uh-oh, we have a Fed that is watching for inflation. This means they're going to hike interest rates, OK? So what happens to the price of bonds, then, in this environment, when C0 goes up, and the Fed doesn't like it, and the markets know that the Fed doesn't like it? The Fed may take a month to react. But markets react immediately and say, whoa, this is what the Fed will do one month from now. So what do you think happens to the price of bonds? If we get news that consumers are very bullish, and it turns out that we also have inflation of 4% or so, we know that the Fed doesn't want more aggregate demand. So what do you think will happen to the price of bonds? Well, [CHUCKLES] again, the news happened, say, a week ago. And the Fed moves one week later. So markets are going to anticipate that, in this case, the Fed will hike interest rates. If the markets anticipate that the Fed will hike interest rates, interest rates will go up immediately -- not the rate that the Fed controls, but the one-year rate, the two-year rate, the three-month rate. All those rates are going to go up immediately as a result of that, OK? And that, we know, reduces the price of bonds. The price of a bond and the interest rate are inversely related. So the anticipation that the Fed will hike rates will lead to higher interest rates at all horizons. And that will reduce the price of bonds, OK? And for equity, well, look what happens for equity here. For equity, you say, OK, well, I get the same discounting effect as the bond, which is bad news -- it goes down. Ah, but the good news is the dividend, because now I have more consumers. Well, that depends on how much the Fed dislikes this stuff. Because if the Fed does this, it fully offsets the effect on aggregate demand. The rise in C0 shifts the IS to the right; that would have increased output to here. The Fed doesn't want more output, so it will hike interest rates up to the point at which output doesn't go up. That means dividends are not going to go up either. So we get just the negative effect of the discounting, and we don't get the benefit of the extra activity that would have come from having consumers that are more optimistic and so on, OK? So this has actually happened a lot over the last few months. This is an environment people call good-news-is-bad-news. Good news about aggregate demand -- consumers are happy, blah, blah, blah, blah -- is bad news. Or labor markets are very tight, wages are going up -- all things that sound wonderful in other environments are terrible news for the financial markets, OK? For most of them, I mean. There are differences across sectors and so on.
But for the aggregate, for the average, it's bad news. So this is an environment where good news is bad news -- you have to be specific about what. Good news about aggregate demand is bad news for asset markets. It's not always like that. If you're in a recession, the Fed doesn't want to fight that. It wants more aggregate demand. So if you get good news about aggregate demand, that's very good news for asset prices, because the Fed will not offset it, and you get the positive effect of the extra dividends and things of that kind, OK? So monetary policy moves asset prices a lot, OK? But monetary policy doesn't happen in some separate, isolated space. It reacts to news about the economy, about consumers, about firms, about regional banks, all sorts of things, OK? Another big driver of asset prices, of equity in particular, is this guy here, the risk premium. That risk premium can move a lot. And it's an important driver of asset prices. This index is called the VIX. I'm not going to explain exactly what it is -- it's based on option prices and so on -- but people call it, so you get the picture, an index of fear in equity markets. So this is when people realized that COVID [CHUCKLES] was coming. And what you see is that this thing exploded up -- a big risk-off, a massive spike in the little xs. Well, not surprisingly, look what happened to equity -- it collapsed by 35% or so. Part of that was expected dividends, blah, blah, blah, blah, blah. But a lot of it was the risk-off. And it's called risk-off when markets are very fearful. They don't want to take risks -- risk-off, OK? The recovery, actually, also had a lot to do with the recovery in the risk environment. People sort of got used to the thing. But that recovery was also a result of very aggressive monetary policy. The Fed tried to offset this by cutting interest rates very aggressively, and that also gave a boost to asset prices. In fact, they did so much that we ended up with lots of overvaluation in asset prices. And then, as a result, when they hiked rates, we had a big decline in asset prices. What is this? Look -- over the weekend, we talked about this in the previous lecture, essentially First Republic Bank went under, and JPMorgan absorbed it. And Monday was good, because people thought that this mini crisis was over. Well, yesterday, it turns out that the shares of two other regional banks began to collapse in the same way the First Republic Bank shares collapsed the week before. So panic immediately set in. So the VIX, the fear index -- [WHISTLES] this is intraday. So markets open here, and the shares of these two banks began to decline very rapidly. And so the VIX went up a lot. And this is the SPX, the S&P 500, the main equity price index in the US -- it immediately declined. So that's the xs moving. Here, xs moves up, the little x. And then it began to come down, and the markets began to recover. So this risk-on and risk-off is a very big driver of equity prices. This is one of the banks, actually, that was in trouble -- this is PacWest. PacWest had declined by 28% by the end of the day. But you see things look very weird here. They don't look like normal prices. Here they look like normal prices.
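As an aside, the risk-off mechanism in the smallest possible sketch: hold the safe rate and expected dividends fixed and spike only xs (all numbers hypothetical):

```python
# A "risk-off" episode: only the equity risk premium xs moves.

def stock_price(i, xs, d, g):
    return d / (i + xs - g)   # discounted dividend sum, constant growth g

calm = stock_price(0.05, 0.04, 3.0, 0.02)
panic = stock_price(0.05, 0.08, 3.0, 0.02)   # xs doubles when fear spikes
print(calm, panic, panic / calm - 1)         # roughly a one-third price drop
                                             # from the premium alone
```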
They're moving all the time. Here, they don't. What happens is that these prices declined so rapidly that they triggered what are called circuit breakers. So you cannot trade those shares when they decline too rapidly. And that's done deliberately so this little x doesn't get completely out of control, so people calm down. And it was triggered several times. The whole idea is that people calm down, [CHUCKLES] that they don't-- is there a question? AUDIENCE: Yeah, just on the last slide-- RICARDO J. CABALLERO: The previous or this one? AUDIENCE: Yeah, the previous one. Is-- [INAUDIBLE] are either of them dependent on the other? Or are they more just showing the same sort of trend? RICARDO J. CABALLERO: No, no. OK, that's a good question. This is the risk component only. So this is more independent -- what I'm saying is, when this guy goes up, if nothing else happens, this will decline, because you are discounting things more heavily. But it is true that there were some common elements. People got very worried about having another regional bank collapse and so on. And that also created fear about the economy, which is an independent reason for this to decline. And normally, in recessions as well, risk appetite is lower. So you're right that there is a common component. But the point I was highlighting is that this VIX is a big driver. It has a big impact on asset prices. But it's not the cause. It was an event that caused both. But the fact that this event came with this big spike in the VIX meant that the impact on the equity index was larger than if it had been only news about the economy, meaning that there was a recession ahead or something like that. And let me just finish with the opposite phenomenon. I was showing you episodes of fear. But sometimes markets get carried away in the opposite direction. And here I'm showing you examples. I put together this picture many years back, and now Deutsche Bank keeps updating it. It seems that the world needs a bubble somewhere. [CHUCKLES] It shows you several big asset valuations. Look, 500%. Here is the Nikkei. I mean, it was an enormous appreciation of the Nikkei. Here was bitcoin. Then it collapsed. They always end up badly. Whenever you see this big spiking up, it almost always ends up quite poorly. Now, it's much more likely that this happens in equity than in bonds. In bonds, it cannot happen, because there is a terminal date, a terminal value. What happens with these kinds of things is that people dream that the value will go to infinity. And it could, because the thing will last to infinity, and the price could go to infinity. For a bond, that cannot happen, because there is a terminal date, and at that date, they're going to pay you $100. So it can't happen. But for equity, people's imagination can run very wild. In fact, there is a famous bubble, the South Sea Bubble. It was a company in the UK. It's famous for many reasons, but one of them is that Isaac Newton got involved in this one. He got carried away. He sold the shares at 7,000 and made a profit of 3,500 pounds, which must have been an enormous amount of money at the time. Prices kept going up -- he couldn't resist, went back in, and ended up losing 20,000 pounds, which must have been a lot of money. So he famously said, "I can calculate the motions of the heavenly bodies, but not the madness of people." This is all about expectations, OK? [CHUCKLES] Let me stop here.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_18_Quiz_2_Review.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: So after doing IS-LM in the first part of the course, where we took prices as completely sticky and output was fully determined by aggregate demand, we said, well, that dominates in the very, very short run. But over time, at some point the supply side starts showing up. There are constraints. The labor market gets very tight, and so on. And so we added a block that started from wage determination. Then we looked at the impact of wages on prices. And then we related the inflation rate to economic activity -- output above or below potential output, or the natural level of output, and things of that kind. So remember, the starting point was a wage demand equation -- what workers demand for a wage. The wage demanded this period depends on the price level they expect for the period, because they set the wage today and they have to live through the year or two, whatever the contracting period is, with that nominal wage. So naturally, if they expect a higher price level in the future, they're going to demand a higher nominal wage today. And then we said that's a function that is also going to be decreasing in the level of unemployment, because obviously high unemployment weakens bargaining power for workers, or makes becoming unemployed or not having a job more costly, because it's very difficult to exit out of unemployment. And then we made this function also an increasing function of a variable z, which captures a bunch of labor market institutions, including labor bargaining power. So more bargaining power means that for any given level of unemployment, workers would tend to demand a higher wage. That's what the z variable was all about. Then we wanted to go from wages to prices, because the ultimate goal was to bring inflation into the picture. And for that, we introduced a production function, because in particular we made output a function of employment. And that, very naturally, will connect wage pressure to price pressure, because you need labor to produce output. So if the labor market is very tight, it's also going to be more expensive to produce output. And we simplified this production function a lot. We made output equal to employment. And that meant that in order to produce one extra unit of output, you need one extra unit of labor, which means you need to pay one unit of the wage. And so then we said, suppose that price setting on the side of the firms simply takes this cost, which is the wage, and adds a markup to it, to pay for a bunch of other things that we haven't introduced in this model. So the price charged by firms is equal to the wage times 1 plus some positive number, 0.2 or something like that -- so 1.2. And we can rewrite this price setting equation as the real wage firms are willing to offer. And it's just equal to that. So when the markup goes up, that means the real wage firms are willing to offer is lower than otherwise. So that took us to the concept of the natural rate of unemployment. And as I said, there's nothing natural about the natural rate of unemployment. It's simply a definition that says it's the unemployment that results when the expected price is equal to the actual price. That's all it is.
And when we have that condition, then we can think of the real wage demanded by workers, because I can replace the expected price with the actual price and divide both sides by the price. So the real wage demanded by workers is equal to a function of the natural rate of unemployment. And I stick the n there precisely because I replaced the expected price with the actual price -- for no other reason. But now we have two equations for the real wage: the real wage that firms are willing to pay and the real wage that workers demand. And we can set them equal. And that determines the natural rate of unemployment. OK, so remember, from the point of view of the firm, this is equal to 1 over 1 plus the markup. The only endogenous variable here is the unemployment rate -- the markup is a constant, and z is also an exogenous parameter. And so from here, setting the wage workers demand equal to 1 over 1 plus m, we can solve for the natural rate of unemployment. And if you do the algebra right, you're going to get to a point like that. That pins down the natural rate of unemployment. Again, there is nothing natural about it. It depends on a bunch of parameters -- for example, it clearly depends on the markup. It depends on things that we took as given here, all the things that were in z. Those are part of that. And so then we looked at things that change the natural rate of unemployment -- that's just done with the equations. That's one example. If bargaining power of workers goes up, they're going to demand a higher wage at the initial natural rate of unemployment. Well, that higher wage is inconsistent with what firms are willing to pay. The only way equilibrium can be restored in this model -- that's the medium run equilibrium -- is for the natural rate of unemployment to rise to un prime. There you have it. Nothing natural. The natural rate of unemployment is not constant. It depends on institutional parameters, such as bargaining power. Another example is markups -- the degree of competition, if you will, in the goods market. If we are in some equilibrium like this one and now suddenly firms, for whatever reason, choose or need to charge a higher markup, that means that at this level of unemployment, the real wage that workers demand is higher than the real wage that firms are willing to pay. And the only thing that can clear the market in the medium run is for the natural rate of unemployment to rise. So here we have two experiments where we move some parameter -- in one, the bargaining power of workers, and in the other, the markup of the firms. And both increase the natural rate of unemployment. Good. The next step was to look at what happens outside the natural rate of unemployment, in particular, what happens to prices there. So we went back to the model with the expected price here. That means that the unemployment that comes out of this equilibrium is not necessarily going to be the natural rate of unemployment. That will be the case only if the expected price happens to be equal to the actual price. Then we simplified this function F with something linear, like this. Very simple, but again, decreasing in unemployment, increasing in this institutional parameter z. We replaced this wage here with this expression here and rearranged. So we got this here. And the next step was just to go from here to the rate of inflation. And we did it through several steps and approximations.
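As an aside, here is a minimal sketch of that algebra with the linear wage-setting rule, using hypothetical parameter values:

```python
# Wage setting: W/P = 1 - alpha*u + z.  Price setting: W/P = 1/(1 + m).
# Equating the two pins down the natural rate u_n. Parameters are assumed.

alpha, z, m = 3.0, 0.05, 0.10

u_n = (1 + z - 1 / (1 + m)) / alpha
print(f"natural rate: {u_n:.1%}")                 # about 4.7%

# The two experiments from the lecture: higher z, then higher m.
print((1 + 0.08 - 1 / (1 + m)) / alpha)           # more bargaining power -> higher u_n
print((1 + z - 1 / (1 + 0.15)) / alpha)           # higher markup -> higher u_n
```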
And we ended up with what is known as the Phillips curve. So this says that inflation is increasing in expected inflation and in these institutional parameters. If markups go up, that will tend to increase inflation. If bargaining power of workers goes up, the same happens. But most importantly, inflation is negatively related to unemployment. And that's the reason that nowadays there's lots of discussion about the tightness in the labor market and whether a recession is really necessary. Do we need to cause a recession, a situation where unemployment goes up a lot, in order to finally bring down inflation? Yeah, there was a question. AUDIENCE: [INAUDIBLE] RICARDO CABALLERO: Yeah. Remember that I made up this function. We said this function is decreasing in unemployment. I just replaced that function with that. So alpha is the sensitivity of wage demands by workers to the unemployment rate. If alpha is very high, it means that wage demands are very sensitive, very responsive to unemployment. AUDIENCE: What would be the situation for an expected price? Like, could you connect that back to, I don't know, some sort of a commodity or something? RICARDO CABALLERO: So what is the intuition for this? AUDIENCE: Yeah, just like a price feels tactile. But an expected price, I don't-- RICARDO CABALLERO: Well, I mean, imagine that workers and firms bargain for a wage that will live through the year. You're bargaining for the nominal wage today. You don't set a real wage. You set the nominal wage -- say $100, whatever. Well, the wage demand will depend a lot on what they expect inflation to be during this period. If I expect inflation to be 10%, I'm very likely to demand a higher nominal wage, because I have to live, on average, with higher prices. So that's the role of the expected price. I mean, I would prefer -- and there are countries where that's done -- to set my wage in real terms, so I don't need to worry about that. But in practice, in economies with low inflation like the US, you don't do that. You get a nominal wage and you have to live with that level of wages for the year, or until the next negotiation of your wage contract. AUDIENCE: [INAUDIBLE] be dependent on time, [INAUDIBLE] with the interest rate or with the inflation rate, whereas I guess the regular price is defined by the wage is dependent on the market? RICARDO CABALLERO: No, they're both the same. The only thing is that this price here is not the current one. It's what you expect the price to be during the year. It's there just because, at the moment at which you set the wage, you don't know the price that you're going to actually face as a worker. So the best you can do is calculate: well, I think inflation is going to be 10%. So give me what I would have had in mind with inflation equal to 0, plus 5%, so on average, I'm about right. That's the logic. This expected price is meant to be the best proxy you have, at the moment you're bargaining, for what the actual price will be during the life of that particular wage. So we end up with that Phillips curve here. Importantly, this is a decreasing function of unemployment. And then we made different assumptions about expectations.
If expected inflation, for example, is a constant -- that's when we say expected inflation is very well anchored -- then you get a Phillips curve that looks like this, in which inflation has a constant here and is decreasing in the rate of unemployment. And during the '60s, that relationship held fairly well. It was a downward-sloping relationship. It got steeper and steeper as we moved into higher and higher inflation levels. And then I said, but in the '70s, the whole thing broke loose. There's nothing like a downward-sloping curve here. That happened for two reasons. There were some cost-push shocks -- you can think of those as shocks to m. But more interestingly, expected inflation became unanchored. And then we changed the model of expected inflation from a constant to some weighted average like this. And we said, look, during the '70s, essentially that theta was equal to 1. So expected inflation was really whatever inflation was last year -- people expected that level of inflation to stay in the next year, rather than going back to whatever was the constant, the inflation target or historical constant pi. And that meant that during that period, the Phillips curve looked more like a relationship between the change in the inflation rate and unemployment, a decreasing function of unemployment. So that means that when you increase unemployment here, you reduce the rate at which inflation is rising. That's the kind of situation you get when expected inflation is unanchored. And the last step we had there: we asked, well, what happens if we stick the natural rate of unemployment in here? That will happen only when the expected price is equal to the actual price -- that is, when inflation is equal to expected inflation. From there, we can solve for the natural rate of unemployment as a function of these structural parameters. And once we have that, we can go back to our Phillips curve and rewrite it in this way. So you can think of the Phillips curve in this way. And this is the way we typically write it down, which says inflation is decreasing in the unemployment gap. So if unemployment is above the natural rate of unemployment, that means inflation will tend to be below expected inflation. If expected inflation happens to be equal to lagged inflation, that means if unemployment is above the natural rate of unemployment, then inflation will be falling. Any questions? Good. You need to know this, how to derive these things. Yeah, you should know how to derive them. But above all, you need to understand this relationship between the unemployment gap and inflation relative to expected inflation. Yep. AUDIENCE: Could you talk about anchored versus unanchored inflation? RICARDO CABALLERO: Expected inflation. It's just a statement about what model we have for expected inflation. So suppose we have the following model for expected inflation: 1 minus theta -- theta is some number between 0 and 1 -- times a constant inflation, plus theta times whatever was previous inflation. Central banks try to set a target for the inflation rate. In the US it's around 2%. And ideally, people will tend to believe it. They may see an inflation that is above, say, 2%. But as long as people expect that to be undone in the next period, then we say expectations are very well anchored. So that's the case in which theta is equal to 0 here.
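As an aside, here is a small simulation of that Phillips curve, written in gap form pi = pi_e - alpha*(u - u_n), contrasting theta = 0 (anchored) with theta = 1 (unanchored); all parameter values are made up:

```python
# Phillips curve with expectations pi_e = (1-theta)*pi_bar + theta*pi_lag.
# Unemployment is held one point below the natural rate throughout.

alpha, u_n, pi_bar = 0.5, 0.05, 0.02
u = 0.04                               # a persistently tight labor market

for theta in (0.0, 1.0):
    pi_lag, path = pi_bar, []
    for _ in range(5):
        pi_e = (1 - theta) * pi_bar + theta * pi_lag
        pi = pi_e - alpha * (u - u_n)
        path.append(round(pi, 3))
        pi_lag = pi
    print(theta, path)
# theta=0: inflation sits at 2.5% forever.
# theta=1: 2.5%, 3.0%, 3.5%, ... -- the ratcheting the lecture describes.
```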
And you always stick in there, in the case of the US, 2%. And that's the way the economy is behaving right now. Inflation today is 5%. But if you ask people, what do you expect inflation to be two years from now? People tell you, around 2%, 2.5% or so. Unanchored expectations are when you don't have that anchor, that 2% the Fed told you -- whatever was the previous inflation is what people extrapolate inflation will be next period. And that's a lot harder when you get into an inflationary episode. In that context, it's very difficult, because you have 5%, and people are still expecting 5% for next year. So it's much harder to bring inflation down. You need to create much more unemployment to bring inflation back to the 2% target. So anchored means theta very close to 0, and unanchored means theta very close to 1. That's the formal definition. We then moved to what I think is probably the most important model you'll see in this course, which is the IS-LM-PC, which is just the IS-LM plus the Phillips curve. And that allows us to talk about the short run, which is what we did in the IS-LM, and then all the way to the medium run -- the medium run understood as when you go back to the natural rate of unemployment, the natural level of output, and so on. We got a banking crisis there. But that's-- This you may find useful. Here I was trying to explain the banking crisis. And I said, we have a model for that already. Remember, we had this x, the spreads, in the investment function. Well, you can think of a negative financial shock, something like a credit spread shock, as an increase in x, and that will shift the IS to the left. OK, just saying. Good. The IS-LM-PC model was just going back to the IS-LM model. We're going to simplify things by just assuming that the central bank sets the real interest rate, and the real interest rate is that. And to that, we added a Phillips curve. But we didn't like that Phillips curve, because everything is a function of output here and the interest rate, and now we have inflation and the unemployment rate and so on -- yet another variable to carry around. So we went from the unemployment gap to an output gap. And we did that just by noticing that output is equal to the labor force times 1 minus the unemployment rate. Similarly, you can define potential output, or the natural level of output, as the labor force times 1 minus the natural rate of unemployment. Subtract these two and you get that the output gap is equal to minus L times the unemployment gap. And so we replace this with that expression divided by L, and we end up with the Phillips curve written in the form of an increasing function of the output gap. So when the output gap is positive, inflation will exceed expected inflation. If expected inflation is unanchored -- that is, expected inflation is equal to lagged inflation -- then a positive output gap leads to an increase in the inflation rate. We look at an example here. Now we're going to have the real interest rate here. It just makes it simpler to think about monetary policy in terms of the real interest rate. Otherwise, too many things move at once. So this is what we had done for quiz 1. Here you have some particular equilibrium, IS-LM. With this real interest rate, we got some equilibrium output equal to y.
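As an aside, the bookkeeping step from unemployment gap to output gap fits in two lines, with hypothetical numbers:

```python
# Y = L*(1 - u) and Y_n = L*(1 - u_n)  =>  Y - Y_n = -L*(u - u_n).

L, u, u_n = 160.0, 0.04, 0.05    # labor force and the two unemployment rates (assumed)

Y, Y_n = L * (1 - u), L * (1 - u_n)
print(Y - Y_n, -L * (u - u_n))   # both ~ 1.6: the two gaps are the same object
```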
The contribution of this block of the course is that now we also need to check whether this y is consistent with potential output, with the natural level of output. And for that, we need to see whether this level of output is above or below the natural level of output. And for that, we need to look at the Phillips curve. In this particular case, it's not consistent, because output is above the natural level of output. Given that observation, you draw the Phillips curve here. You know that because output is above the natural level of output, inflation is above expected inflation. If expected inflation happens to be unanchored, equal to pi t minus 1, that means that at this output gap, inflation is rising. Now, inflation rising means the central bank will have to react. And so you have to do something up here. You need to bring output down. And how can you bring output down? This economy is engaging in an inflationary spiral, actually, given this model of expectations. How do you stop that? AUDIENCE: Raise interest rates. RICARDO CABALLERO: If you are the Fed, you raise interest rates, no? Because you need to bring output back. You increase the real rate up to the point at which the equilibrium level of output is equal to the natural level of output. And you may have to do more than that. If inflation expectations are unanchored and you find yourself with 5% inflation, then to bring inflation back down to 2%, you may have to temporarily overshoot -- raise interest rates a lot, generate a negative output gap for a while. And then, once you reach the level of inflation you like, the 2%, you can go back to the natural level of output. OK, so that's the reason central banks worry a lot about unanchored expectations -- because then they know that if they find themselves with inflation above their target, it's not going to be enough to bring the output gap to 0. They're going to have to overshoot on the way down in order to re-anchor expectations -- well, in order to bring inflation back down to the target of 2%. But in any event, even if inflation expectations are well anchored, you still have to bring output down, because at the very least you need to close this positive output gap. And if you are the Fed in the US, or any central bank, you do that by increasing the real interest rate. Now, in practice, central banks don't really control the real interest rate; they control the nominal interest rate. So there is a little fight there between inflation and what they do to the nominal interest rate. But let's ignore that complication for now. Now, suppose that the Fed is on vacation, and someone else in the government decides that, no, we cannot have this very high level of inflation. So what else could you do? You're not the Fed. The Fed is on vacation. Who else can make policy? The government, the central government, the Treasury, and so on. What is the instrument they have? What do they need to do? The problem they have is output is too high. And that's what is leading to lots of inflation. So what do you think they should do? AUDIENCE: Cut government spending. RICARDO CABALLERO: Cut government spending, raise taxes, something of that kind. You need a fiscal contraction, because that will bring the IS down. And so the equilibrium output will be lower. So that's an alternative you have. You should know this.
And here I just did what we discussed, in steps. These things happen slowly. The Fed doesn't hike interest rates in one shot and so on. It takes a while before you get to the final equilibrium. I showed you the deflationary spiral. I said, sometimes things can get very complicated, because you may hit the zero lower bound. The Fed can bring the nominal interest rate to 0. But if inflation is already low, that may not give you the real interest rate you need in order to get output equal to the natural level of output. Here was one example in which you need a negative real interest rate to get output to be equal to the natural level of output. But that may not happen, because you hit the zero lower bound. And at that point, the problem you have -- and that was the tragedy of Japan for so long -- is that not only can you not bring the nominal interest rate below 0, but you start getting into deflation: inflation below expectations, and expectations going to a number very close to 0 because of unanchored, deflationary expectations. Then you start getting negative expected inflation. And when you get negative expected inflation, even if you are at the zero lower bound on the nominal interest rate, that means a positive real interest rate. So effectively, you are increasing the real interest rate at the same time. And that can be a very complicated thing to get out of. Again, that's what happened to Japan for a long time. What would you do as a government if you fall into a situation like that? And Japan did a lot of this. Well, you can do lots of things. But in particular, the kinds of things you know -- what would you do if you are in a situation like this, in which the zero lower bound is binding and inflation is actually falling? Here I had a benign case in which inflation expectations were well anchored. That's not what happened to Japan. After they experienced a long period of deflationary forces, people began to expect more deflation, more deflation, and so on. So what else can you do? Let me give you a hint. Japan is one of the countries with the highest levels of public debt. How do you accumulate public debt? Yeah, you need to borrow a lot. You have big fiscal deficits. So that's the way you can fight this. You can shift the IS to the right by having an expansionary fiscal policy. That's the only tool you really have. You lose the power of monetary policy against the zero lower bound. But you still have fiscal policy. And they did a lot of fiscal policy. This is interesting. This is a different kind of shock. Suppose you are at your medium-run equilibrium, and then all of a sudden markups go up -- perhaps, for example, because the price of oil went up a lot, or something like that. That's a different kind of shock from the previous one, from any fiscal shock or anything like that. The previous one was an aggregate demand problem; this is an aggregate supply problem. Because the first thing I know about a change in m, at least a permanent one, is that the natural rate of unemployment has to rise. If the natural rate of unemployment has to rise, that means my Phillips curve will shift. In this particular case, I know the Phillips curve will shift to the left. How do I know that? Well, because I know that the natural rate of unemployment went up, which means that the natural level of output has to come down. And the natural level of output coming down means simply that the level of output at which expected inflation and inflation are equal happens at a lower level of output.
So the Phillips curve moves to the left. Suppose you were in this equilibrium. Here I'm doing it for the case of anchored expectations, but the same logic goes through for the case of unanchored expectations. So suppose you were at some equilibrium like this -- that's your medium-run equilibrium. But now the price of energy goes up a lot, and you expect that to last for a while. That means the Phillips curve moves up. So with output at this level, now you have a problem, because you start getting inflation out of this -- this level of output is too high relative to the new natural level of output. So you have a positive output gap. A positive output gap means inflation above expected inflation. If you have unanchored expectations, inflation starts rising. So that means the Fed now needs to react and needs to tighten interest rates in order to go to the new natural level of output. And that's the response to that. But if the Fed does not react -- and a little bit of this is what happened. We had some supply shocks that were considered to be temporary. Well, they weren't as temporary, so there was no reaction. It turns out that they lasted a lot longer than the Fed expected, and so then they had to catch up. That was part of the reason we got into a high inflation episode. That was the main reason in Europe. The US was a mixture of aggregate demand -- lots of fiscal policy and so on -- and supply. In Europe it was very much a story of this kind. Well, a financial panic -- you need to offset it with a decline in real interest rates. And a little bit of that has been happening. It's not the Fed that has cut rates, but the markets have anticipated that the Fed will not raise interest rates as much as they expected before we got into this banking mess. So we had already studied the short run and the medium run. And now we want to look at the long run. And that's what economic growth is about -- economic growth theory and facts and so on. Let me go to-- So one of the things I highlighted is that among countries that are fairly similar along education and variables like that -- economic systems and political systems and so on -- you tend to find relationships like this: that is, countries with a lower per capita income at the beginning of the sample tend to grow faster within the sample. And that captures very much the idea that there is a force towards convergence of income per capita, if you will. That's another illustration of that phenomenon. Lots of dispersion here; 70 years later, a lot less dispersion. But we also said that some countries do not match that. We focused most of what we did in growth on understanding this process, the process of convergence, and how it happens without technological progress and so on. And then we spent a little bit of a lecture -- say at most 10 points worth in a quiz, or seven points, five points or something like that -- talking about anomalies and things like that. So one of the key objects here -- there are a couple -- one of them was, well, now we need to be a little bit more serious about the production function, we said. Because for the short run, it's OK to take capital as given and assume most of the fluctuations in output come from fluctuations in employment. That's not so over long periods of time. Capital accumulation plays a huge role.
And so we need to be explicit about the fact that capital matters a lot for production. So we postulated a production function like this: output as an increasing function of capital and labor. And we said that for this part of the course we're not going to worry about unemployment. So employment, labor force, population -- they're all the same for us here, for this part of the course. And then we said this production function has some important properties. One, it has constant returns to scale. Things change quite a bit if you don't have constant returns to scale. So we have constant returns to scale, which means -- you should know this -- that if you scale all inputs, all the factors of production, by the same factor, output also rises by that same factor. The production function we used a lot was Cobb-Douglas: output equals square root of K times square root of N. The sum of those exponents is 1, so that's a production function with constant returns to scale. Anything where the exponents add up to 1 is a constant returns to scale technology. But importantly, it also has decreasing returns to each of its factors of production. That means, rather than moving both factors of production up, you move only one. Well, you're going to increase output, but as you keep increasing that factor alone, you're going to increase output by less and less, because essentially it has fewer and fewer units of the other factor of production to work with. So that's decreasing returns to capital or labor: if you fix the other factor of production and move one up, output is going to increase at a decreasing rate. So one normalization that we started with was, well, the scaling factor could be 1 over the population. If I multiply everything by 1 over N, we get that output per person is an increasing function -- but at a decreasing rate -- of capital per person. We plot that function here. And this is increasing, but it's concave. That shows the decreasing returns to capital. So when you move along this, you can increase output per person simply by increasing capital per person. And the more you increase capital per person, output will increase more and more, but at a decreasing rate. You can see that the distance between A and B is the same as the distance between C and D. However, the increase in output when you go from A to B is enormous when compared with the increase in output that you get from the increase in capital per person from C to D -- decreasing returns. There is another way of increasing output per person, which is technological progress -- the function F shifting up over time. And we split the two main lectures on growth into part one, which shut down the second channel, and then a second important lecture where we focused on that channel. So let's go to the part where I shut down this channel for now and focus on the case without technological progress first. So we put things together. This comes from the previous lecture: we can write output per person as an increasing function of capital per person. The second key equation is -- well, this is a property that has to hold if you are in a closed economy with no government expenditure or anything. We could add that, but it's not important for the message. Then investment has to be equal to savings.
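As an aside, the two properties are easy to verify numerically for the square-root Cobb-Douglas used in the lecture:

```python
# Y = sqrt(K) * sqrt(N): constant returns to scale, decreasing returns to each factor.
from math import sqrt

def F(K, N):
    return sqrt(K) * sqrt(N)

# Doubling both inputs doubles output (constant returns to scale):
print(F(200, 100), 2 * F(100, 50))                # both ~ 141.4

# Holding N fixed, equal increments of K add less and less output:
print(F(2, 50) - F(1, 50), F(3, 50) - F(2, 50))   # ~ 2.93, then ~ 2.25
```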
So investment is going to be very important here, because that's what will make the capital stock grow. But there has to be funding for that. And the funding comes from savings. And we simplified things by assuming that the saving function is just proportional to the level of output, which is reasonable when you think about the long run. All these things scale up. When you're thinking about the very short term, no -- we have some constants and so on floating around. But over the long run, things do scale up. So in equilibrium, investment has to be equal to savings. Saving is proportional to output. So we get that investment in this economy is increasing in output. This s is a constant somewhere between 0 and 1. And the last key equation here is the capital accumulation equation. The capital accumulation equation says that capital at t plus 1 is equal to capital today, minus the depreciation -- some fraction of the machines break down every period -- plus the new investment, It. And we rewrote things, replaced the saving function, and so on. And we end up with an expression like this that says capital per person grows with investment, which is funded by savings, which is an increasing function of output, which in turn is an increasing function of capital per person, minus whatever is the depreciation. And what we did in the Solow diagram is plot this function and that function. We know that the steady state is where these two things are equal. That pins down the k over n star of this economy. But we also know that, to the left of that point in capital space, this term is greater than that one, and therefore the capital stock is rising. To the right, the opposite holds, and therefore the capital stock is falling. And that's what we have in this diagram. So that's the steady state of the economy -- when investment equals depreciation per worker, which is the required investment, the minimum investment you need to keep the capital stock constant. Anything above that will grow the stock of capital. Anything less, and you're not maintaining enough of your capital stock -- it's declining. And that's exactly what happens here. That's the steady state. If your capital is below that, then the capital stock will be rising, because you have lots of savings, and therefore lots of investment, relative to what you need in order to maintain the small stock of capital you have -- until you reach the steady state. And this model alone can explain the pattern we saw, that the poorest economies tended to grow faster than the richer economies. If you think of poorer economies as economies that are otherwise similar but have a low stock of capital to start with, well, those economies are going to be to the left of the steady state, and therefore they're going to grow at whatever the steady-state rate of growth is, plus this catching-up growth. So this is a very powerful little model. It can explain a lot of the convergence that we saw in the data. Do you understand this? This is important. Then we did some experiments. What happens if you increase the saving rate? At the time when Solow was writing this model, many people said that what was behind growth was saving. Well, in this model we show that, indeed, if the saving rate rises, then at any given level of capital -- suppose that was the old steady state.
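As an aside, here is the whole mechanism in a few lines -- the per-person accumulation equation with f(k) = sqrt(k), simulated from a low starting capital; s and delta are hypothetical:

```python
# k(t+1) = k(t) + s*sqrt(k(t)) - delta*k(t).
# Steady state: s*sqrt(k*) = delta*k*  =>  k* = (s/delta)**2.

s, delta = 0.2, 0.1
k = 1.0                            # start well below the steady state

for _ in range(60):
    k = k + s * k ** 0.5 - delta * k

print(k, (s / delta) ** 2)         # the simulated k approaches k* = 4
```

Starting below k*, capital and output per person grow quickly at first and then more and more slowly -- the catching-up growth just described.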
Then we did some experiments. What happens if you increase the saving rate? At the time Solow was writing this model, many people said that what was behind growth was saving. Well, in this model we can see what happens if the saving rate rises. Suppose we were at the old steady state. If the saving rate now rises, that increases investment above what you need to maintain that level of the capital stock, so the capital stock starts growing. When that happens, output per capita grows faster than in the steady state, because you're going from here to there. But eventually you converge to the same old rate of growth. So the point is that a change in the saving rate, per se, does not change the rate of growth in the long run; it gives you transitional growth. And a lot of the Asian miracle--the very fast rates of growth in the '60s, '70s, and '80s--has to do with this kind of thing: a very sudden increase in saving rates, plus other institutional changes and so on. A big increase in the saving rate led to very fast growth. Again, this little model can explain a lot. When you see those growth miracles, often they're associated with the saving rate going up quite a bit--for reasons that vary a lot across different cases. But the point is, that gives you very fast growth in the short term, and eventually it peters out. The next thing: all of that was for a fixed population. Then we said, well, suppose now the population is growing. I said the diagram we had before would be very unpleasant, because all these curves would be shifting. So what we need to do is divide not by a constant but by whatever the population is at that point in time. That gives us the same diagram we had, with one little twist. I went through a little algebra to arrive at an equation for the change in capital per person, which is very similar to what we had. The only difference is that the investment required to maintain the stock of capital per person has an extra term, gN. So let's think about this term. Think of delta as the required investment to keep the stock of capital where it was: you have a stock of capital, you lose a fraction of it, so you need investment equal to the fraction you lost to keep the stock of capital the same. That's clear. But that's not enough to keep capital per person constant if the population is rising, because even if you keep the stock of capital constant, the denominator is rising at the rate of population growth. So to keep capital per person constant, you need to deal with the growth of the denominator as well: you also need investment to match the increase in population. So that's the modification. In terms of our diagram--we'll have technological progress as well, but set it to 0 for now--all that happened relative to the previous diagram is that this required-investment line rotated upward a little bit. But then you conduct the analysis exactly the same way. The only thing that is different now is that in the previous model, the steady-state rate of growth was equal to 0, and the steady-state rate of growth of output per person was also equal to 0. Population was not growing, output was not growing, and the ratio wasn't growing either.
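In symbols, the modified per-person accumulation equation behind this rotated line can be written as follows--a reconstruction of what is described above, assuming output per person y = f(k) and saving rate s:

\[
  k_{t+1} - k_t \;=\; s\, f(k_t) \;-\; (\delta + g_N)\, k_t,
  \qquad
  s\, f(k^\ast) = (\delta + g_N)\, k^\ast
  \;\Longrightarrow\;
  k^\ast = \left(\frac{s}{\delta + g_N}\right)^{2}
  \ \text{when } f(k) = \sqrt{k}.
\]

The required-investment line now has slope delta plus gN: you must replace worn-out machines and also equip the new workers.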
Here it's still the case that in the steady state, output per person is not growing. But that also means, since population is growing at the rate gN, that output must be growing at the rate gN--that's what keeps output per person from growing. So for output itself, a very important factor in growth is population growth. And if you look at rates of growth in the world, certainly in the developed world, they are falling for a variety of reasons, one of them being that population growth is falling. For output per person that doesn't make a difference, but for the growth rate of total output it does. Then we added technological progress, which we modeled as labor-augmenting: having a better technology is as if you had more workers. For any given number of workers, a better technology means more effective workers, and you can model it exactly that way. You can use exactly the same diagram we had before, but now the scaling factor, rather than being 1 over the population, is 1 over the effective workforce, 1 over AN. You conduct exactly the same analysis, with the same approximations as before. The difference now is that rather than delta plus gN, you have delta plus gA plus gN. Why is that? Because if I want to keep capital per effective worker constant, I first need to make up for the depreciation of the stock of capital--I have to stabilize the numerator. But then I have to take into account that the denominator is growing for two reasons: because population is growing and because technology is growing. To keep the ratio constant, I have to add investment to cover that as well. And that's the reason this line rotates even further, and we get delta plus gA plus gN. You should play with these things--what happens in this diagram if I increase gA, and so on. And notice that you still have a steady state here, but it's a steady state in the space of output per effective worker and capital per effective worker. That means those quantities are not growing in the steady state. But output will be growing at which rate in the steady state? If output over AN is constant in the steady state, how can that happen? Output has to be growing at which rate? At the same rate as the denominator. So it's gA plus gN. Now a trickier question: what happens to output per person in this steady state? What rate is it growing at? Sorry--somebody said the right thing. gA, exactly. I want to keep output over AN constant, so I'm asking at what rate output per person needs to rise to keep that ratio constant--well, at the same rate as A is growing. Good. Here we also asked: could it be that if we change the saving rate, we get some extra kick in the long run? And the answer is no, by the same logic as before. You get transitional growth, but eventually you converge to a steady state, and the rate of growth in the long run is not a function of the saving rate. It's going to be equal to gA plus gN.
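As a quick numerical check of those steady-state growth rates, here is a small simulation assuming Y = sqrt(K) * sqrt(A*N); the parameter values are illustrative assumptions, and the steady-state formula is exact only up to a small cross term in discrete time.

s, delta, gA, gN = 0.2, 0.1, 0.02, 0.01
ktilde = (s / (delta + gA + gN)) ** 2  # approximate capital per effective worker

A, N = 1.0, 1.0
K = ktilde * A * N  # start (approximately) at the steady state
for t in range(200):
    Y = K ** 0.5 * (A * N) ** 0.5
    K = (1 - delta) * K + s * Y
    A *= 1 + gA
    N *= 1 + gN

Y_new = K ** 0.5 * (A * N) ** 0.5
gY = Y_new / Y - 1                        # one-period growth of total output
print(round(K / (A * N), 3))              # roughly constant: capital per effective worker
print(round(gY, 4))                       # about 0.030: output grows at roughly gA + gN
print(round((1 + gY) / (1 + gN) - 1, 4))  # about 0.020: output per person grows at roughly gA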
We also covered measuring technological progress, blah, blah, and I told you the story of China. Good. We ran out of time, so let me just say the last thing I want to say. The last thing we discussed was, well, what happens if you add education to this and try to-- [INAUDIBLE] I expanded the model a little bit, added education, and asked: does it change the conclusions a lot? I said no, not really--it doesn't change the conclusion with respect to the long run. More education affects the level of output per capita, but it won't affect the rate of growth in the long run. And the last point I made is that, look, if you expand this model and assume that the level of technology and the rate of technological progress are the same across the world, and you stick the other parameters into the model--population growth, depreciation, education levels, and all that--then you don't explain the amount of inequality we see in the world. The world would look a lot flatter if all we had were differences in population growth, depreciation, education levels, and things like that, but with the same technology. So the model essentially doesn't produce enough inequality. You need to add something else to explain why we have some countries, in Africa for example, that are not growing, or growing at a very low rate. And we said that something else is technology: for whatever reason, there is a pocket of countries that seem to have a permanently lower level of technology, in both level and growth rate. That's what explains the subset of countries that are sort of stuck, that are not consistent with this convergence story. And that's the reason it's called conditional convergence: those countries themselves are converging to something, but they're converging to something much lower, with a much lower rate of growth, than most of the rest of the world. But the final lesson is that for the average country, on average, it's clear that poorer countries grow faster than richer countries. That's the dominant force. But you need a little more if you want to explain certain pockets of the world. OK.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_2_Basic_Macroeconomic_Concepts.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO J. CABALLERO: I expect that there will be many fun lectures, in the sense that we're going to be discussing issues more or less exactly at the time they are important issues, at least as described in the newspapers. And we are going through a very interesting time for macroeconomics. Inflation is unusually high; something needs to be done about that. We still have problems on the supply side of the economy as a result of COVID and the slow reopening of China. And we have a war going on, which is affecting the price of energy and is particularly impacting Europe. The situation is very fluid; all of these things can change at any moment, and policymakers are therefore paying very close attention to them. It's not a normal time. If you're a policymaker, a macroeconomist policymaker, you are not sleeping a lot these days. So I expect we will have plenty of time to discuss interesting things and analyze them at a slightly higher level than you can at this moment. Now, I also told you in the introduction that this particular lecture is not going to be of that kind. It's going to be very boring, in the sense that we need to start with definitions. And I don't know--who likes definitions? I don't. It's very boring. Now, there is a curious side to the definitions we're going to discuss, which is that if you were taking 1401, microeconomics, many of the concepts we're going to describe would require no definition. They're obvious. If I ask for the output of a factory that produces cars, it's pretty obvious that it's a number of cars. If I ask you for the prices of those cars, it's pretty obvious what the price of a car is. Not so for macro. Because if I ask you what the output of the US economy is--well, there are millions of goods and services produced at the same time. So what do we mean by output, a single measure of output? Or if I ask you about the price level, or inflation, the rate at which that price level is changing--what are we talking about? It's very easy to see whether the price of a car is going up. But if we're mixing millions of different goods and services, it's a little harder. And that's the reason we need this lecture: it's a little harder than 1401, and we need to define basic things. There is a trick to them, because you're summing apples and oranges--and not only apples and oranges: apples, oranges, health services, financial services, all of them in one measure. So it's a little trickier, and that's the reason we need this boring lecture. We need to go through these slightly trickier definitions of output, prices, and so on. So let me start with the most basic thing: aggregate output. At the end of the day, whether the economy is in a recession or not, whether we like it or not, depends on what is happening to output. Is output growing at the pace it used to grow, or is it growing less, or declining? That's very important for macro and for understanding the macroeconomic health of an economy. But we need to start by defining what we mean by output. And because it's a tricky thing to do when you're adding so many apples, oranges, financial services, entertainment, and lots of things that are very different, for a long time we didn't have a good way of doing it.
In fact, the national accounts as we know them in the US are something we have had only since the post-war period; it was in the late '40s that we developed the techniques, the approach, to come up with a measure of aggregate output. Before that, we had proxies: industrial production is very high, meaning we're producing lots of cars, stuff like that. But something systematic like we have today is a pretty recent thing. We call that NIPA--the National Income and Product Accounts. Income and product--that's going to be very important for macro, as you'll see in a minute. So the main measure of aggregate output is what we call gross domestic product, or simply GDP. When you hear GDP, that means the output of an economy. Why gross and not net? You're not going to worry about that in this course. You hear GDP; most macroeconomists wouldn't say output, they would say GDP. It's short, it's efficient, and so on. That's what it means: the output of an economy. But how do we define it? As I said before, it's much harder than when you have an individual good. By the way, most of the time I will say goods, but really it's goods and services. It's very long to say goods and services, so whenever I say goods, I'm not trying to play any trick on you; I really mean goods and services. I'm just being lazy, and most people are lazy that way. Now, what is the difference between goods and services? You're not going to worry at all about that in this course. But just to get a sense: goods are things that are tangible; services are not that tangible--they're benefits you receive from tasks that someone else performs for you. You go to the medical center; you don't come out with an object. Well, they may lend you something, but what you come out with is a service provided by a doctor to you. The same happens if you go to a bank. You don't walk off with an ATM; what you come out with is the service of having done a transaction, or a deposit, or gotten a mortgage, or something like that. It's a service. If you go to a restaurant, again, what you have is an experience; people provided an experience to you. Things get a little tricky, because if you do take-out, well, it's not an experience--it's really the good. So if you get into those details, which we're not going to, it gets tricky. But just to get a sense: on average, for a consumer in the US, 2/3 of consumption is in services, not in goods. It's not the bananas and so on that you buy; it's a lot of financial services, health services, entertainment, traveling, things like that. That's where you spend most of your money. Having clarified that, you can forget it. From now on, I'm going to say goods. Occasionally I may say goods and services, but I always mean the same thing. OK, good. So how do we measure these things? There are different ways of doing this. Something happened to my slide there--OK, there we are. So suppose you have an economy that is very simple. It has just two firms. One firm produces steel and the other one produces cars. The company that produces cars buys all the steel from the steel company. You, as a consumer, don't buy steel directly; the car company buys the steel and uses it to produce cars.
And you buy the cars. So that's our simple economy, and those are the accounts of the simple economy. There you have one company with revenue from sales of 100--the price of steel times the quantity of steel is 100. The second company buys the steel, uses workers, and sells 200. So the first question I ask you here is, well, what is the GDP of this economy? Here you have an economy that has two goods. Needless to say, a real economy is a lot more complicated than this. But you have two companies, and I ask you: what is GDP? The obvious thing you can come up with is, well, I sum all the revenues. The total GDP of this economy is 300. That's a sensible answer--at least at this moment, I would accept it as a sensible answer. In the quiz, I wouldn't. You ask me for the total output of that economy; I sum up all the revenues on sales, and that's 300. So is it 300, or is it 200? That's the other candidate: well, look, perhaps only the final goods should count, because that's the only thing that you, as a consumer, will ever see--this part, not that. Those are two sensible answers. And what I'll show you, in three different ways, is that the right answer is 200 for that economy, not 300. So, method one--and all these methods are used, and they are used to check each other when computing GDP. Method one is what I said here: final goods. GDP is the value of the final goods and services produced in the economy during a given period of time. Notice that GDP is a flow concept. It's something you produce in a year. That's the reason you say GDP of the US in 2022 was $23 trillion: it's in a year, a period of time. So that's one definition. And one way of making sense of this definition is to imagine that I give you the same economy with the same two factories, and now all of a sudden I tell you, you know what, I'm going to merge the two companies. The car company will buy the steel mill, or whatever. Well, if I now put together those two accounts, I never see the steel, because that's all happening inside the merged firm. And it's still the case that the economy is producing the same cars, worth 200. All you would see is 200, because I put these things together. There was steel that one company had purchased from the other, but now it's all inside. So if I merge them, the steel doesn't appear, because it's all produced in-house, and GDP would be 200. Well, it makes no sense that just because I change the ownership structure of the companies, GDP collapses from 300 to 200. If I'm only measuring final goods, though, I don't have that problem. It's still 200. It doesn't matter whether I have the merged firm or the two separate firms. So that tells you we're going the right way here, because it's a very robust answer. That is, you don't count the intermediate output. You only count the final goods, which are the things that consumers will buy, that firms will buy for investment, and things of that kind--that foreigners will buy. The alternative method is: GDP is the sum of value added in the economy during a given period of time. What is value added? The difference between the value of the goods produced by a company and the intermediate inputs it purchased to produce those goods. OK, so what is the value added of the steel company here? The answer is there. But what is it?
It's 100. How do I know it's 100? Because it's not buying any intermediate inputs and its revenue is 100. So that's the reason I get 100 there: no intermediate inputs. What is the value added of the car company? Well, the revenue on sales is 200, but it purchased 100 of intermediate inputs. So the value added is 200 minus 100: the value added of this company is 100. 100 plus 100, and I get my 200 again. Yeah? AUDIENCE: Would we consider the wages to be an intermediate good? RICARDO J. CABALLERO: No. Those are not goods; those are factors of production. The machines in that factory that are helping you produce things--that's the service of the machine, it's capital. That's not an intermediate input. An intermediate input is another good or service that you buy for the purpose of producing that good. So workers, no. The workers are working inside your company. If the work were outsourced, and you had another company producing something for you with those workers, that would be an intermediate input--but then you would have to count the value added of the companies you outsourced to. So that's method two, and you see we get exactly 200. Those two methods are called production methods; they are different ways of measuring the production of the economy. The third method, and the last one, is an income method, which means: look, all that is produced has to be earned by someone--the workers, the owners of capital. Somebody has to own that. If the firms collectively sell $200, those $200 have to be allocated to someone. Someone means workers, the owners of the firms' capital, or, in realistic economies, the government--you pay taxes and things of that kind. We're not going to worry about the government for a while. So that's the alternative, method three: you just sum the incomes. So who are the factors of production here, related to your question? There is no government here, no taxes. So we have only workers and the owners of the companies' capital. Wages are 80 plus 70, which is 150. Profits are 20 plus 30, which is 50. 150 in wages plus 50 in profits gives you back your 200. OK, so those are the three ways we have of measuring things, and you see they give you exactly the same result. Now, there is something interesting in what I just said about the construction of the national accounts, which is: look, production is the same as income. That's going to be very important for macro--very important--and it's totally unimportant for micro. When you're looking at a company in micro--say a car company by itself--it is true that the output of that company becomes income, part for the owners of the company and part for the workers. But that income need not be spent on cars. It can be spent on food and entertainment and whatever. Not so in macro. Because what else are you going to spend it on than the aggregate good, the same good you're producing? It's very interesting. That's a very distinctive feature of macro that is not present in micro: the income has to be spent on the same goods. If the economy is closed--later on we're going to open the economy to the rest of the world, and then you buy some Chinese goods and blah, blah, blah--but if you keep it closed, you're going to have to spend it on the single good of the economy, which is the sum of all the goods that we consume on average. That's going to be very important.
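To make the three methods concrete, here is the two-firm economy from the slide in a few lines of Python, using the numbers from the lecture.

steel = {"sales": 100, "intermediates": 0,   "wages": 80, "profit": 20}
cars  = {"sales": 200, "intermediates": 100, "wages": 70, "profit": 30}
firms = [steel, cars]

# Method 1: value of final goods (only the cars reach a final buyer).
gdp_final = cars["sales"]

# Method 2: sum of value added (sales minus purchased intermediate inputs).
gdp_value_added = sum(f["sales"] - f["intermediates"] for f in firms)

# Method 3: sum of incomes (wages plus profits).
gdp_income = sum(f["wages"] + f["profit"] for f in firms)

print(gdp_final, gdp_value_added, gdp_income)  # 200 200 200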
Anyway, this time the slides are moving in the right direction. OK, so that's that. Now you know what GDP is and the different ways of measuring it. You're going to have to remember that for p-set one and for quiz 1, and you might as well forget it afterwards. It's good that you understand the concept, but the different ways of constructing it are not important later on. The second thing we need to worry about is that whenever you're thinking about the output of an economy, you're really trying to think about real output--the number of cars, the number of machines, and so on. But if you have inflation, for example, then the prices of these things are growing, and so total revenue on sales is growing--but that doesn't mean the same thing. We want to separate these two things, and for that reason we have a concept called nominal GDP and another called real GDP. Nominal GDP is the simplest thing on Earth. In our example we had only one final-goods company, which was cars; but imagine you have cars, refrigerators, many, many things. Nominal GDP is simply the sum over all the final goods of the quantity multiplied by the current price. That gives you the dollar GDP that you have. I don't know exactly what it is in the US today--you could check it, but it's $24 trillion or something like that. So that's it: P times Q, prices times quantities, summed across all the final goods. That's nominal GDP. But what we really care about--what we're going to care a lot about later on--is how the economy is doing over time. Is it growing? Is it not growing? Nominal GDP can grow for two different reasons: because the economy is really becoming more productive, producing more goods, or because prices are going up. At this moment, nominal GDP is growing very fast in the US, despite the fact that we may have a recession this year--we don't know. But nobody has any doubt that nominal GDP will grow, because we have lots of inflation. So you want to separate these two things, and the thing that removes the inflation component is what we call real GDP. If you hear just the word GDP from somebody who understands what they're talking about, GDP really means real GDP. When people say GDP, they're trying to say the output of the economy--well, that's real GDP. Real GDP is computed with many tricks, but essentially what you do is also sum across all the goods, except you use constant prices, not necessarily the prices of that point in time. I'm going to give you a very concrete example. But before doing that: for this course, nominal GDP and all nominal variables are going to have a dollar sign in front--that's what the textbook does. So $Y is nominal GDP, and Y without the dollar sign is real GDP. For the first part of the course, up to quiz one, we're going to worry very little about nominal things, because we're going to have prices completely fixed. But you still need to know the concept. So now let me give you the example. Suppose you have the simple economy we had before. We're just going to look at final goods, because that's what we need to construct GDP. This economy produces cars: 10 cars in 2011, 12 cars in 2012, and 13 cars in 2013.
And suppose the price of a car is what you see there: 20,000, 24,000, and 26,000. Nominal GDP is simply the product of one times the other. That gives you $200,000 for 2011. 12 cars times $24,000 gives you 288,000, and so on. That's nominal GDP. For real GDP, you have to pick which prices you want to use--but use only one set, and don't vary it over time. In this particular case we picked 2012. So when you say real GDP at base 2012, with 2012 prices, it means you're using the prices of 2012 and you don't vary them. You let quantities change over time, but the prices remain fixed. So in this case, real GDP for 2011 at 2012 prices is 10 cars times 24,000, which gives you 240,000. For 2012, 12 cars times 24,000: 288,000. This is interesting: for this year, nominal GDP is the same as real GDP. Why is that? An accident? AUDIENCE: Because you used the base year as the [INAUDIBLE] RICARDO J. CABALLERO: Exactly. We're using that as the base year. Nominal GDP will always be equal to real GDP in the base year, because those are the prices we're using. What about 2013? Well, it's not 26,000 times 13; it's 24,000 times 13. So we get 312,000. And it's obvious here that real GDP is growing less than nominal GDP. Why is that? Because this economy has inflation. Prices are rising over time, and we want to remove that when we look at the real concept. The real concept removes the price effect. There are times when you don't want to remove all of that price effect. It happens a lot, for example, with computers, because sometimes the increase in the price of a computer is simply because the computer is better, and you want to correct for quality and so on. But again, that's not something you need to worry about in this course. This is from the book, and you see what happened in the US with nominal and real GDP, with base year 2012. As I said before, the red line is nominal GDP and the blue line is real GDP. We're using base year 2012, so at that point they have to be the same. And what you see very clearly there is that the blue line, real GDP, is flatter than the red line. Why is that? Yeah. AUDIENCE: Real GDP before was lower because [INAUDIBLE] GDP-- I mean this particular market probably [INAUDIBLE] inflation. RICARDO J. CABALLERO: Inflation, exactly. By the way, I do have a reference for you, so ask me after the-- OK, good. Anyway, in the US between 1960 and 2018, nominal GDP increased by a factor of 38, while real GDP increased by a factor of 5.7. Big difference. So you'd better be careful, when you look at GDP, that you are removing inflation--especially over time. If you were to look at Argentina, these guys have had a chronic recession for a long time, a big recession, but nominal GDP is exploding because they have enormous inflation. So it makes a big difference, especially over long periods. This is just so you get the complete picture for the US: GDP growth in the US since we have had national accounts. Some noticeable things: recessions. This was a big one--remember, we call this the Great Recession. And this is COVID, 2020, and then the bounce back in 2021 when we reopened the economy. Big growth, but very anomalous--it was a very weird shock. All the shaded areas are recessions. Recessions are actually defined in a slightly more complicated way, but one popular description of a recession is an episode with two consecutive quarters of negative growth. That's not the formal definition, but it's pretty close. And that's what you have there.
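Before moving on, here is the cars example recomputed end to end, with 2012 as the base year--just the book's numbers again, as a recap.

qty   = {2011: 10, 2012: 12, 2013: 13}
price = {2011: 20_000, 2012: 24_000, 2013: 26_000}
base = 2012

for year in (2011, 2012, 2013):
    nominal = qty[year] * price[year]  # current prices
    real    = qty[year] * price[base]  # constant base-year prices
    print(year, nominal, real)
# 2011 200000 240000
# 2012 288000 288000  <- nominal equals real in the base year
# 2013 338000 312000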
Another concept is the unemployment rate. So that's GDP, and in the first part of the course we're going to worry a lot about it. We're going to build a model of how to find equilibrium GDP, and we're going to see what happens with fiscal policy, with monetary policy--how equilibrium GDP, macroeconomic output, changes with different forms of policy, or when consumers get scared, or stuff like that. What about the unemployment rate? The unemployment rate is not something we'll worry a lot about until the second part of the course, after quiz 1. But still, I want to get these definitions over with. So what is employment? It's the number of people who have a job. That's easy. Unemployment is slightly less easy. Obviously, to be unemployed you cannot have a job, but it's not enough that you don't have a job: an unemployed person is somebody who doesn't have a job and is looking for one. Not all non-employed people are looking for jobs. So to be unemployed, you need to not have a job and be looking for one. The labor force is the sum of those two groups: the employed, plus the unemployed who would like to get a job. The unemployment rate, which is something I showed you in the previous lecture, is just the ratio of these two concepts: the unemployed over the labor force. Notice, over the labor force, not the population--the labor force, which is the sum of the employed and the unemployed. And how is unemployment measured in the US? It's mostly a survey, and I have the info there. It's called the CPS, the Current Population Survey, which consults lots of households and asks them about their employment status--whether they have been looking for a job over the last few weeks and so on. That's the way we come up with the number. As I said before, those who do not have a job but have not been looking for one are called not in the labor force. Now, the distinction between unemployed and not in the labor force is not that clear-cut. We look at the unemployment rate, but we also tend to look at those people as well, because many people are simply discouraged. They would like to get a job, but they have been looking for a while and haven't found one. And there are a lot more discouraged workers during recessions: when you're in a big recession, it's very difficult to find a job, so it's very easy to get discouraged. That's the reason we look at broader measures of non-employment than the typical unemployment rate--because a lot of those not in the labor force, people who do not have a job and are not looking for one, are really discouraged. They just gave up after a while. The participation rate--and that's a very important concept--is something you could ignore most of the time, but it's very critical at this moment. The participation rate is the ratio of the labor force to the total population of working age--you exclude people in prison and stuff like that. So it's the labor force, the sum of the employed and unemployed, divided by those who could work in principle. That's what we call the participation rate.
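A tiny numerical example of these definitions--the numbers below are illustrative magnitudes, not actual data:

employed = 158.0          # millions of people with a job
unemployed = 6.0          # millions with no job who are looking for one
working_age_pop = 264.0   # millions who could work in principle

labor_force = employed + unemployed
u_rate = unemployed / labor_force              # unemployment rate
participation = labor_force / working_age_pop  # participation rate

print(round(100 * u_rate, 1))         # 3.7 (percent)
print(round(100 * participation, 1))  # 62.1 (percent)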
How do these numbers look? I showed you this picture in the previous lecture: that's the unemployment rate. It skyrocketed during COVID, but it has declined enormously. And as I said in the previous lecture, a big issue is that the unemployment rate today is extremely low. We haven't seen levels like this since the early '60s. The unemployment rate today is at record low levels. And that's a problem. It sounds wonderful, but it's also a problem, because we have an inflation problem, and those two things are connected, as you will learn later in the course. Now, there are really two reasons the unemployment rate is so low. One is that there was lots of stimulus--fiscal policy, monetary policy--so aggregate demand was strong; and consumers, fed up with being locked out of restaurants and trips and so on for two years, decided to travel and so on. And they had lots of savings: the US consumer accumulated excess savings of $2.7 trillion, and now they're spending the stuff. China--a big reason people expect a big bounce back there is that they also accumulated a lot of savings, because they were locked up for quite some time. As a result of all that, there is lots of demand for goods, and as we'll learn in the next lecture, that means lots of output as well. The second reason is the participation rate. People haven't come back to work in the magnitude that we expected. So that's the participation rate in the US. Remember, the participation rate is the labor force over all those who could work, in principle. What do you think this is? Look at where the participation rate used to be in the '60s--below 60%--and then there was a big rise in the participation rate in the US. What do you think this is due to? AUDIENCE: Women joining the workforce. RICARDO J. CABALLERO: Women joining the workforce. That's what it is. Since that transition was completed, the participation rate has been declining, and that's an issue. But look at what happened here: lots of people exited the labor force during COVID. They had to take care of the kids and/or the elderly, and so people withdrew from the labor force. They didn't want a job. It was also discouraging--it was very difficult to get a job. You work in a restaurant? It was impossible to get a job in a restaurant. But everyone expected this to recover to the previous level, and it hasn't. You see that the participation rate has not come back to pre-COVID levels; it's substantially below. And that's one of the reasons restaurants complain that they don't have workers and so on: many people haven't come back to the labor force. We thought this was going to be temporary; now there is a concern that a lot of it is really permanent--people decided that life at home wasn't that bad after all. Less income, but they spend more time with the kids or whatever. And that's an issue, and a big reason behind the low unemployment rate. And the fact that we have all this inflation has to do with the fact that everyone--in particular, the Fed--miscalculated the bounce back of the participation rate. So, as I said before, we're not going to look at labor market issues until the second part of the course, after quiz one. And the same goes for inflation.
We're not going to look at inflation issues until the second part of the course, because they are connected with the labor market, and we're not going to look at labor markets until eight lectures from now or so. But this is an important variable, and certainly something you're facing every single day in the newspapers: the inflation rate. When you hear inflation, that typically means a sustained rise in the general level of prices. It's not that the price of cars went up relative to the price of hotels, or down relative to the price of hotels; it's that, on average, prices are rising. That's what we call inflation. We're going to call the price level Pt, and there are many different price levels, as you'll see. The inflation rate is the rate of change of that price level. An episode of deflation--the opposite of what we're experiencing now--is when that inflation rate is negative. Japan, most prominently, has experienced something like that, on and off, for the last three decades or so. So what is the price level? There are many ways of defining it, and there are many different price indexes. A very popular one is what is called the GDP deflator, and it's the one you see there. It's never mentioned in the newspapers, but we economists tend to look at the deflator. The deflator is nothing other than the ratio of nominal GDP to real GDP. Another one, far more popular and more relevant for you as consumers, is what we call the consumer price index, the CPI. You calculate the rate of inflation from the CPI the same way, but you use the CPI there instead of the GDP deflator. Now, it turns out that these two measures are pretty well aligned. There are differences that may be interesting at specific points in time, but they tell you more or less the same picture. In particular, there is absolutely no doubt that we have an inflation problem these days. You can be as selective as you want with the price index you use--and people are getting very selective. Now we have CPI excluding this and that. Well, one thing that makes a lot of sense is to exclude the most volatile goods. So typically the CPI we use is called core CPI, which removes energy and food, which have very volatile prices--you don't want the thing moving all over the place. But now we're also beginning to remove shelter, because shelter inflation is very high and sticky and so on. People can get very selective. But no matter how you look at the thing, we have a problem. There is no way around that. We're going to talk a lot about this problem, of course, but we need to build tools, and we're going to get there about nine lectures from now. Nine lectures from now, we'll be able to talk about what is going on using models--you can talk whenever you want, but with models it's different.
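To fix the mechanics, here is the deflator computed for the cars example from earlier: nominal over real, with inflation as the deflator's rate of change.

nominal = {2011: 200_000, 2012: 288_000, 2013: 338_000}
real    = {2011: 240_000, 2012: 288_000, 2013: 312_000}

deflator = {year: nominal[year] / real[year] for year in nominal}
for year in (2012, 2013):
    inflation = deflator[year] / deflator[year - 1] - 1
    print(year, round(deflator[year], 3), round(100 * inflation, 1))
# 2012: deflator 1.0 (the base year), inflation 20.0%
# 2013: deflator ~1.083, inflation ~8.3%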
OK, so those are the concepts I wanted to discuss today. Those are the definitions--some relief that we got through this stuff. Let me just show you, because we have five minutes or so, equivalent numbers for other places around the world. That's China--that's GDP growth for China. And there are several things you can see in this GDP series. The first is that growth was very high; these numbers are, on average, a lot higher than in the US. When I showed you the US, the rate of growth was moving around 2%, 1.5%, blah, blah, blah, with occasional recessions and so on. This is China: you had numbers like 10% or so. That's interesting. We want to know why you can have so much difference across countries, and that's what we're going to do in the third part of the course. When we look at growth, we're going to look at these kinds of factors: what can give you a sustained rate of growth--sustained meaning, for a long period of time, higher than in another country. The main factor, just to preview what will happen, is simply that China was a lot poorer than the US at the beginning. And when you're poorer and you get your act together, you can grow a lot faster than the rest. Now China is slowing down. Aside from COVID, it's been very clear for quite some time that they are worried, because GDP growth is clearly declining. And they're terrified about that. Many of the things that are happening with China have to do with the fear associated with a slowdown in the rate of growth while they are still quite poor in per capita terms. A lot of what happens in China has to do with that. If you look at Japan: Japan also grew very fast in the '60s. You see this very fast rate of growth. Then it began to slow down, and boom, here it collapsed. They had a massive crash in financial markets--equities and land. The price of land was enormous in Japan at that time; they had a big financial bubble. For those of you who know Japan--and if you don't, it doesn't matter--the Imperial Palace park in Tokyo, which is much smaller than Central Park, was at some point worth as much as the entire state of California. That's the order of magnitude. It was not for sale, but, you know, in terms of land area times price. But that bubble crashed, and since then Japan has never been able to recover its mojo. It has been growing at a very low rate for a very long period of time. And one of the things that scares China is that this may happen to them, because this happened to Japan when they were already quite rich. Japan was pretty poor after the war, naturally, and grew very fast in the '60s. But then they had this financial bubble, they crashed, and they have never been able to recover. China is worried that this slowdown happens to them before they have reached the level of income per capita that Japan had reached when it happened. There are common factors behind the two of them as well--demographic factors. Demographics are very negative for both, which naturally will slow down the rate of growth. We're going to look at that later on. This is inflation in Japan. You see, most countries had high inflation around there because of the price of oil--we had massive oil shocks and so on. So inflation was pretty high. But the problem of Japan has been the opposite: since the bubble crashed in the late '80s, early '90s, they have had very low inflation, even deflation. And that's been a big problem. Part of the reason they have had such low growth is that they have been in this deflationary trap. And as you will see later in the course, when you have deflation, it's very difficult to use monetary policy to get out of a recession. That's the reason they keep getting stuck there. So that's all I wanted to say for today. And I'm relieved, again, that this lecture is behind us.
In the next lecture, we're going to introduce the first model. We're going to look at how to determine equilibrium GDP, and how it depends on a variety of things, including fiscal policy--not monetary policy, that will come later--and on how scared consumers are, their preferences and fears, and so on. So that's the plan. So unless there are any questions about this-- No. I'll see you next Monday.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_24_ISLM_and_Expectations.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: Expectations play a huge role in economics--not only in asset pricing, which is obviously all about the future, but also in the kind of issues we have discussed throughout the course. So what I want to do today, essentially, is give you a shortcut for thinking about the role of expectations in the kind of models we have already discussed. I'm going to do all that in the most basic model we have discussed, the IS-LM model, and I hope you'll get the gist of what expectations can do in economics. So this is going to be a very compressed, adapted version of chapters 15 and 16; in terms of how the material maps into the book, those are the relevant chapters. The main idea here is that the IS-LM model as we have described it up to now really overweights the present, while in practice expectations about future conditions play a big role in the decisions of all economic actors. We'll look at investors and asset pricing, but it's also true of consumers, and it's also true of firms. If you think about firms, in the investment decision we made investment a function of the interest rate and of current output. But it's quite clear that the reason firms invest is not current conditions; it's because they anticipate making profits in the future. So it's really all about expectations. And even governments and foreigners, when they invest--when they do foreign direct investment, when they go and invest in a country--it's a lot about expectations of what the country will do in the future. Political elections, for example, have a huge impact on asset prices precisely because they change what people think, for good or for bad, about future conditions. So expectations are just huge in economics. We're going to do things in two steps. First, I'm going to revisit the consumption function and the investment function, now taking expectations into account, and motivate how you should really think about consumption and investment in a more realistic model than we have been discussing. Then I want to embed, not the fully fleshed-out consumption and investment decisions, but the flavor of the role of the future, into the IS-LM model. By then, you will have seen all that I wanted to communicate, at least in this set of lectures. So let's think first about consumption. Up to now, we assumed that consumption depended only on current disposable income. But that's not really the way it works. One of the first to formulate, more or less formally, how consumption decisions are really made was Milton Friedman. He called it the permanent income theory of consumption, meaning that what really matters for your consumption decision is not so much your current income, but what you expect to get on average during your lifetime. You don't want to be moving consumption up and down like crazy. Once you realize, more or less, what you'll get on average, consumption should be related to that concept. In a sense, by thinking in these terms, you're also drawing a big distinction between things that are temporary, which shouldn't matter a lot for your consumption decisions, and things that are permanent, which clearly have the potential for a much larger impact. Of course, you can have temporary things that are very large.
If you win the lottery, that's a huge temporary shock. But probably you're not going to spend the whole lottery right away; you're going to smooth it over your lifetime, in any event. And that actually relates to Franco Modigliani, who--more or less at the same time Milton Friedman was at Chicago--was here at MIT and developed the life-cycle theory of consumption. It says, look, even at the level of an individual, day-to-day income is not really what pins down the level of consumption, because people know early in life that they have a lower income than they will have later on. So they tend to spend and borrow more when they're young. Then, in the middle of their life cycle, before retirement, they tend to save more. They don't consume all they have, because they know there are many years ahead of them where income will be lower than their consumption needs. So there is a sense of intertemporal smoothing of consumption: you don't follow income second by second; you try to stabilize consumption over time, more or less. And that means you have to think more about your permanent income--what you'll get on average--rather than what you get in the short run. When you start thinking about consumption in those terms, what really matters, then, is total wealth more than income. How wealthy you are pins down your consumption more than your current income does. And there are two components of wealth. One is financial wealth: all the assets you have or may expect to inherit, minus the debts you have. Very much as we discussed in the previous lecture in the context of asset pricing, the expected present discounted value of the cash flows of all the assets you have--that's your financial wealth. And that's important: if you have more financial wealth, even if you have no income today, you will probably borrow against that wealth to the extent that you can, and the banks will probably be more willing to lend to you if they know you have a lot of wealth. So you're going to fund consumption above your current income just because you have more financial wealth. In fact, the very rich seldom sell assets; they borrow against those assets to fund consumption. That's the way it works--there are tax advantages to doing that and so on. And the very rich often have no income, [CHUCKLES] at least no labor income. All the income comes from returns on assets, and again, they mostly borrow against that. But in any event, the point is that what really pins down your consumption is your wealth, not the current flow of income. The other very important concept--and a bigger one for most individuals--is human wealth. This is huge for all of you here. It's obvious that your current income is a lot lower than your income will be in the future. You have a lot of human capital. So that's also an expected present discounted value: you expect to earn a lot of income in the future, and therefore it makes sense that, at this stage of your life, you borrow. Now, banks are a little more reluctant to lend against your human capital than against your financial assets. It's easier to borrow against a house than against your future income.
But even so, you're probably not going to be saving a lot at this time of your life, because your income is a lot higher in the future. That's what we call human wealth. And total wealth is just the sum of financial wealth plus human wealth. So at its most basic level--and, just to relate to what we did in the previous two lectures, those are two expected present discounted values--you don't know exactly how much income you're going to get; you get a sense of, more or less, what somebody like you earns in the future, on average. So you have an expected labor income flow in the future. You don't know exactly what the interest rates will be, so you guess, more or less, what future interest rates are. And that gives you a sense of human wealth. I know many of you are not calculating every night what your human wealth is and then consuming 5% or 3.5% of it. But this is very behavioral--it's really ingrained in you. You're probably more likely to spend more if you think you're going to be doing well in the future. Maybe you're too busy now to spend a lot, but at some point, [CHUCKLES] when you're given the opportunity, that will make a difference. Very successful traders often get a fairly low salary; if they lived only off the salary they receive, they couldn't afford what they normally afford. But they spend a lot more than that income, because they expect to get a big bonus and things like that--that's income that comes in the future. So in principle, your consumption should be proportional not to your disposable income, but really to your wealth. There are estimates of what that proportionality factor is, and, as I said, it depends on the type of assets we're talking about, but it's about 0.03, that kind of thing, OK? Now, in reality, it's true this is a better economic concept than just putting income in there, but in reality both things really matter. So a more realistic consumption function depends on both--for a variety of reasons. Many people have no savings; we call them hand-to-mouth. They live off the income they receive in every single period. Those people are not thinking about smoothing consumption over time; they're consuming whatever income they receive. And, as I said before, most banks are not likely to lend you a lot against the expected present discounted value of your labor income, so you may be constrained in the short run. You think about how wealthy you'll be, but you also think about your flows, the cash flow you're receiving--that's also part of your considerations. So in reality, it's a mixture of those two things. When you look at the micro level, at different individuals, the composition changes: the richer you are, the more the wealth term matters and the less the income term matters; the poorer you are, the more the income term overwhelms the wealth term. That's more or less how it works. But on average, it looks like that. So we weren't wrong when we did IS-LM with a consumption function increasing in disposable income. But I always told you there is a lot of interesting stuff hidden in that little C0, in that autonomous component of consumption. Well, a lot of those interesting things have to do with wealth, OK?
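Here is a minimal sketch of such a hybrid consumption function in Python. All the numbers--the income path, the interest rate, the 0.3 propensity out of current income--are illustrative assumptions; only the roughly 0.03 propensity out of wealth echoes the figure mentioned above.

def pdv(flows, r):
    # expected present discounted value of a stream of future cash flows
    return sum(f / (1 + r) ** (t + 1) for t, f in enumerate(flows))

r = 0.03
expected_labor_income = [80_000] * 40   # a guess at 40 years of future earnings
human_wealth = pdv(expected_labor_income, r)
financial_wealth = 50_000               # assets minus debts

a, c1 = 0.03, 0.3
disposable_income = 60_000

C = a * (human_wealth + financial_wealth) + c1 * disposable_income
print(round(human_wealth), round(C))  # wealth dominates, but current income still matters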
And again, the wealth term here captures a lot of things that are permanent, while the income term captures a lot of the cyclical components and things of that kind. Interpreted this way: during booms, even though human wealth may not change much, financial wealth typically rises. But it's also the case that, in a boom, wages are high, and a lot of people tend to spend more. So the income term captures a lot of the temporary component: when you're in a boom, it's likely that you're going to consume more for any given level of wealth, OK? It's temporary, but that's what it is. What about investment? That's a decision by the firm: how much physical capital? I'm talking about physical investment, real investment, not financial investment. The decision also depends on current, but particularly on expected, profits. And when you think about expected profits, you need to think about interest rates as well. We put the interest rate in before, saying, OK, it's more expensive to borrow if the interest rate is high. True, but it actually matters a lot more than just that, because it also matters through the expected present discounted value of your future cash flows. If interest rates are very high and are expected to remain very high, then a project that gives you lots of cash flow in the future may not be worth a lot, simply because the discounting of those future cash flows is very heavy. In that environment, investments that give you a quick return are worth more than things that pay off in the very long run. So the decision, for example, of buying a machine needs to compare the price of the machine right now with the expected present discounted value of its cash flows, OK? So let's think a bit more carefully about that decision. Suppose you buy a machine for a price; let's normalize that price to 1. The first thing you need to know is how long this machine will last, because I need to know for how many years I'm going to get a cash flow out of it. A reasonable assumption for most machines is some sort of geometric depreciation--meaning it's not deterministic; machines break down occasionally, with a certain probability. The notation we typically use in economics for that is delta: the depreciation probability. So if you think in terms of expected value, if you buy a machine today and ask how much of a machine you'll have next year--well, it's going to be a weighted average of 0 and 1, and on average it's going to be 1 minus delta. If the probability of the machine breaking down over a year is 5%, then 1 minus delta is 0.95, say. What is the probability that the machine is still producing two years from now? Well, (1 minus delta) squared, and so on and so forth. So that's the first thing: I have this machine, and it's likely to give me cash flows over this many years. Then I have to know how much profit I expect to get in each of those years. And then I also need to know what interest rates are likely to prevail during the lifetime of the machine.
So at the end of the day, when I do my little project and need to decide whether 1, the price of the machine, is too expensive or too cheap, I need to compare it with the expected present discounted value I have for that machine. So here is an example. This is a machine whose first expected cash flow comes next year: I set it up today, and I generate profits by the end of the year or the beginning of the next one. That's the expected profit for the first year of the machine, which comes at the end of the first year, discounted by an interest rate that I know today -- I know the interest rate for one year. What about the cash flow I expect two years from now? That's the expected cash flow if the machine is working properly, times the probability that the machine lasts to the second year. Or you can also assume that the machine breaks down in little pieces every year: you get 0.95 of the machine in the second year, 0.95 squared two years from now, and so on and so forth. But when I think about the cash flow in the second year, I don't know the interest rate for the second year, so I need an expected interest rate there -- and so on and so forth, OK? If the machine lasts for many, many years, that's what I get. A question, by the way. I'm saying I need to have expectations here. But the truth is that the person investing in the machine doesn't need to have that expectation, because I could replace this with something that is known today. What would that be? When I'm discounting the two-years-out cash flow, I have an interest rate that I know, the one from time 0 to the end of the first year, but I don't know the interest rate that prevails from the end of year one to the end of year two. That's what I wrote here. But there is something in the market that I could look at and that I really know. What is that? AUDIENCE: Is it the [INAUDIBLE] RICARDO CABALLERO: Exactly. I could use 1 plus r2t squared -- the two-year rate, in place of the product of the two one-year rates. So when you have the term structure, when you see all the interest rates, a firm deciding whether to invest or not has the interest rates it needs. It doesn't need to form expectations about the interest rate; the market is doing it for them. Now, the firm may choose to act like a trader and decide it doesn't like the interest rate that the market is pricing in, but that's a different trade. It's not the investment decision of the firm. The firm will have to make a forecast about expected cash flows from the machine, but that's it, OK? So obviously, the larger this is, the more you're going to invest, the more machines you're going to buy. So in principle, a better investment function -- remember, we wrote investment as a function of current output, which we said is a proxy for sales, and then the interest rate. Well, a better concept is that one, which does depend on aggregate activity -- not only today's, but also the activity you expect in the future, OK? And it depends on the interest rate -- not only today's interest rate, but also the interest rates of the future. If I look at this expression, even if the interest rate today doesn't change, but I expect interest rates to go up in the future, that will lower the value of my project.
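Here is a minimal sketch of that machine valuation, under stated assumptions rather than the lecture's exact slide: the expected profits and the term structure below are made up, and cash flows at the end of year t are discounted with today's t-year spot rate, so, as just discussed, no interest rate expectations are needed.

def machine_value(expected_profits, spot_rates, delta):
    # expected_profits[t-1]: expected profit at the end of year t, per surviving machine
    # spot_rates[t-1]: today's t-year spot rate from the term structure
    # delta: annual breakdown probability
    v = 0.0
    for t, (profit, r) in enumerate(zip(expected_profits, spot_rates), start=1):
        survival = (1 - delta) ** (t - 1)    # prob. the machine survives into year t
        v += survival * profit / (1 + r) ** t
    return v

profits = [0.2] * 10      # assumed expected profit per year (machine price = 1)
spots = [0.04] * 10       # assumed flat term structure at 4%
v = machine_value(profits, spots, delta=0.05)

price = 1.0               # the normalized price of the machine
if v > price:
    print(f"V = {v:.2f} > price {price:.2f}: buying the machine pays")
else:
    print(f"V = {v:.2f} <= price {price:.2f}: don't buy")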
We had no room for that when we wrote down the initial investment function. But here we have it. And this is an increasing function of that: the higher is V, the expected present discounted value of buying a machine given its price, the larger is investment. Now, this is in principle. In practice, current cash flows also matter a lot, OK? In the same sense as with the consumption function -- in principle, only wealth matters, but in practice lots of consumers are financially constrained, hand-to-mouth and so on, so current income also matters -- the same is true for firms. And the main reason, really, is financial frictions. A firm may arrive at a bank with a great project, but the bank may decide it doesn't trust it as much, or is not as optimistic as the firm is. So the firm may not be able to borrow as much as it would want, given how optimistic it is about its own project. The bank may say, you know, I'm going to be more conservative here, since I'm lending you the money. And one way firms actually get around financial constraints is simply by retaining their earnings: they generate a cash flow, and they save. Firms save a lot, by the way. Companies like Apple save an enormous amount in huge deposits, US Treasuries, and so on. In the case of Apple, it's not mainly to relax financial constraints, though it has something to do with being opportunistic -- having the resources to buy things that are in distress. But many firms, especially smaller firms, hold deposits and cash mostly because, if they get a good opportunity, they may face financial constraints. So if current activity is high, sales are high, firms are less likely to be financially constrained. And that's the reason current profits also enter. Now, current profit is going to be an increasing function of output over capital: for any given level of capital, if output goes up, that generates more profit. So we can write our investment function a little bit like we had in the earlier lectures, but now we put Vt here and Yt there, and future output and future interest rates all enter through this term here. And again, investment here is increasing with respect to Vt, and increasing with respect to Yt. So that's a far more realistic model. You go back to IS-LM and put in this type of consumption function and investment function, and they make a lot of sense. Again, the concept: persistent things should matter a lot more than temporary things, OK? So naturally, if you expect profits to remain high for a very long period of time, that machine is going to be worth a lot more than if you only expect it to be very profitable for one year. So anything that is likely to be persistent is also likely to have a much larger impact. There are important exceptions, but I'm not going to get into that now. And the same is true for interest rates. If interest rates are high today but we expect them to go down in the near future, that's not going to affect the discounting of future profits very much. But if interest rates go up today and I expect them to remain high for a long time, that's going to affect the present value of profits a lot more.
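A toy version of that investment function -- increasing in V and in current output over capital (the stand-in for current profits that relax financial constraints). The functional form and coefficients are pure assumptions for illustration, not anything from the lecture slides.

def investment(v, output, capital, a=0.5, b=0.2):
    # a * (v - 1): respond to the value of a machine net of its (normalized) price
    # b * (output / capital): respond to current profitability / cash flow
    return a * (v - 1.0) + b * (output / capital)

print(investment(v=1.3, output=120.0, capital=100.0))  # higher V or Y/K -> more investment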
And therefore, it's going to depress investment a lot more. In fact, central banks, much more than playing with the current interest rate, play with your minds. That's what they do. They are always telling you stories about why interest rates will remain high or low. [CHUCKLES] They only control an overnight interest rate, really, and nobody cares about the overnight rate except for some traders out there. But since they want to influence aggregate demand -- that is, consumption and investment -- they need to convince you that this stuff will last for some time. Otherwise it would be irrelevant: if you want to reduce aggregate demand, you have to convince firms and households that the interest rate will remain high for a while, or you're going to get very little effect out of it. One of the problems they're having now, actually, as the Fed is trying to cool the economy, is that they keep hiking rates, but long rates have begun to decline already. And that's a problem for them. [CHUCKLES] They would like the markets not to believe that rates will come back down soon. And that's a big issue. So let's think about this IS-LM with expectations. Remember, the IS-LM model is a model in which aggregate demand determines output, and that's what happens in the short run. The biggest components of aggregate demand, aside from the government -- which moves with different behavioral functions; we're not talking a lot about that here -- are consumption and investment. Those are at least the private sector drivers of aggregate demand. And what we have said now is that human wealth is affected not only by current income, but by future after-tax labor income and future real interest rates. That affects human wealth, and that affects consumption. Future real dividends and future real interest rates affect the value of stocks; that's very important in financial wealth. Future nominal interest rates affect the price of nominal bonds. For firms, future after-tax profits affect the expected present value, and future real interest rates affect that expected present value as well. So there is a lot that says "future" in this column here that enters into the consumption and investment decisions we care about. That's what I showed you in the previous slides. So remember the basic IS-LM model. We wrote it this way: output determined by aggregate demand -- closed economy, fully sticky prices, forget all the rest. And we wrote consumption with these functions. So aggregate demand was increasing in output and government expenditure, decreasing in taxes, and decreasing in the interest rate. So what I want to do now is give you a shortcut to integrate this concept of expectations into that very basic IS-LM model, OK? Think of aggregate demand now as a function not just of current variables, but also of the same variables in the future, OK? So aggregate demand is a function, as before, of current output, current taxes, the current interest rate, and current expenditure, but also -- with the same signs -- of the future variables. So it's increasing in expected future output. It's decreasing in expected future taxes.
It's decreasing in the expected future interest rate. It's increasing in expected future government expenditure, although I'm not going to play with this here, because of something very specific that I'll discuss later on. So that's the shortcut, OK? The LM is going to be the same as before. So what I want you to think about now is a model like the one you had before, with the same LM, but where the IS is a little bit richer. It has more parameters, because I'm going to determine today's output as a function of more parameters. And all these new parameters are essentially the same variables we worry about today, but the values we expect for those variables in the future -- and, again, with the same signs. So if taxes go up today, aggregate demand will decline, and output will decline. But if I expect future taxes to go up as well, that's going to depress aggregate demand even more. That's the type of logic I want you to develop. So that's the way our model will look. This is the IS in the same space I had before -- interest rate and current output. I'm trying to determine current output, but now I have lots of parameters I didn't have before, things that shift the curve. If taxes go up today, this IS will shift to the left. Do you think it will shift to the left more or less than it did in lecture 3 or 4? Suppose we increase taxes by 10%. Will that reduce output more or less than when we had the static IS-LM model? Yeah? AUDIENCE: Depends on the expectation. RICARDO CABALLERO: OK, but I haven't moved those. Expectations are parameters of my curve, so I don't get the right to move them. AUDIENCE: [INAUDIBLE] less. RICARDO CABALLERO: Less, no? Less, because now we said it's not only the present that matters; it's a combination of the present and the future. That means anything that is just the present will matter less than in the past. You see that? Suppose we had a two-period model, and I give equal weight to the present and the future. Then I'm going to cut the effect of the present in half. I'm exaggerating there, but that's, more or less, the logic, as in the sketch below. So you correctly said, well, it depends on whether I expect future taxes to change or not. Fine. That tells you there is a difference between changing taxes temporarily and increasing taxes permanently. Permanently here means for the two periods. So we decided that increasing taxes today shifts this IS to the left by a smaller amount than in the past. What happens if you expect taxes to increase in the future? Which wealth goes down? Human wealth, in particular. Your human wealth will go down, because you expect your disposable income to be taxed more in the future. So that will also shift the IS to the left. And that's the reason that if you have an expected permanent increase in taxes -- today and next year -- then we get back to the type of shift in the IS that we had in the static model. It's the sum of the two. So permanent changes will behave very similarly to the way the static model worked, OK? In a sense, that model was a very good summary of permanent changes -- permanent changes in taxes, permanent changes in interest rates, and so on. Changing government expenditure, same idea -- it will also move aggregate demand to the right. But will it do it by more or less?
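Here is that two-period logic as a sketch, with the equal present/future weights from the example above (the weights are an assumption, purely for illustration):

def is_shift(d_taxes_now, d_taxes_future, w_now=0.5, w_future=0.5):
    # IS shifts left (negative) with taxes today and with expected future taxes
    return -(w_now * d_taxes_now + w_future * d_taxes_future)

print(is_shift(10, 0))    # temporary tax hike: -5, half the static model's -10
print(is_shift(10, 10))   # permanent hike: -10, same as the static model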
Well, think how government expenditure worked in the basic, static model. It increased aggregate demand, that led to a multiplier, and we got a lot more income and so on. Now, if we expect this government expenditure to be temporary, that multiplier will be a lot smaller, because, yes, it will increase income, but people are not going to spend all of that income today. It depends on whether they expect future income to go up as well. Again, if you expect government expenditure to go up permanently, and nothing else changes, [CHUCKLES] then you can expect income to go up in the future as well, and you get more or less the same effect as before. Now, that's a tricky experiment -- and very relevant for today -- because if government expenditure goes up permanently, it's unlikely that the central bank will remain unmoved. So you also have to start thinking about what the central bank will do. And that takes me to this variable here. Well, before I discuss that variable, let me point out that it's not accidental that I made this curve a lot steeper than it used to look. This looks like a pretty steep IS curve, which is a way of saying that a given change in the interest rate now has a very small effect on current output, OK? Much smaller than in the static model. And the reason, again, is permanent versus transitory. If you expect the interest rate to decline only for today, and that's it, that's not going to have a very large effect on consumption or on investment. For an interest rate decline to have a very large impact on consumption and investment, it has to affect the expected present discounted values in a meaningful way. And for that, you want those changes to be more or less permanent -- persistent -- so that private agents think the change in the interest rate will be significant. So it's good to separate the two things. If the Fed cuts the interest rate but doesn't persuade anyone that the rate will remain low in the future, then it is going to get a very small effect on output. However, if it convinces people that rates will remain lower for a long time, then this IS will shift to the right, OK? That's what we have here. So you have to distinguish: when the Fed cuts the interest rate, you get a small movement along the curve. But if the Fed persuades you that this is a long-lasting cut in the interest rate, then the IS shifts to the right, and you recover the power of monetary policy. Monetary policy depends a lot on its ability to convince people that things will remain in the direction the central bank wants. If they fail -- there was a famous episode in US monetary policy during the times of Alan Greenspan. Alan Greenspan is known as one of the biggest central bankers the US has had, at least in recent memory. And he went through a period known as the Greenspan conundrum. That is, the economy was overheating, he kept hiking interest rates, but long rates kept coming down, so he couldn't cool off the economy. [CHUCKLES] At first glance, it looks like he simply couldn't persuade the markets that rates would stay high. But the reason was actually a different one.
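To see how much persistence matters for present values, here is a small sketch comparing a one-year rate cut with a decade-long one. The cash flows and rates are assumed numbers, chosen only to illustrate the point.

def pdv(cash_flows, rates):
    # PDV with year-by-year one-year rates; cash_flows[t] arrives at end of year t+1
    v, disc = 0.0, 1.0
    for z, r in zip(cash_flows, rates):
        disc *= 1 + r
        v += z / disc
    return v

flows = [10.0] * 20                  # illustrative constant cash flow for 20 years
base  = [0.05] * 20                  # rates stay at 5%
brief = [0.03] + [0.05] * 19         # cut to 3% for one year only
long_ = [0.03] * 10 + [0.05] * 10    # cut believed to last a decade

print(pdv(flows, base), pdv(flows, brief), pdv(flows, long_))
# roughly 124.6 vs 127.0 vs 142.8: the one-year cut barely moves the PDV,
# while the persistent cut moves it a lot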
It happens that, at the same time, you had China sending massive capital flows to the US. But the point is that the Fed couldn't move long-term interest rates, and so it was very ineffective in terms of its monetary policy. So again, expectations mattered quite a bit. This is what I was just discussing: monetary policy is not going to do a lot unless you persuade people that the interest rate will remain low for quite some time. And notice that everything comes into line here, because if the Fed convinces people that the interest rate will be lower in the future as well, you get the IS to shift to the right. But if interest rates will be low in the future, that means output will be high in the future as well, which further shifts the IS to the right. If you convince the markets -- and consumers, households, and so on -- that you're cutting interest rates and that, with that, you'll be successful in getting out of a recession, for example, that also increases human wealth and the expected present discounted value of cash flows, of profits, and so on, because you're promising better economic conditions in the future. So again, for central banks, it's mostly about expectations management. That's the business of a central bank, really. I don't know how many of you are soccer fans, but there is a famous story about Mervyn King. Mervyn King was one of the biggest central bankers the UK has had, fairly recently -- he's British; it's "Lord" nowadays. And he described good monetary policy very much like the goal Maradona scored against England in some World Cup -- I don't remember which one. Essentially, Maradona picked up the ball on his side of the field and ran in a straight line to the goal and scored. But he persuaded everyone around him to move away from his path, and that was the successful strategy. And central banks do a lot of that -- lots of talking. [MUTTERS] At the end of the day, the true actions of moving the interest rate are the least important part of a monetary policy strategy, really. Fiscal policy can be quite tricky here, actually. We know that a fiscal contraction, a reduction in government expenditure -- if you just think about the basic IS-LM model -- will certainly reduce output: you reduce government expenditure, you shift the IS to the left, and that reduces output. When you have expectations, things are a little trickier, because it depends a lot on what you expect the central bank to do in the future, and on how the private sector responds to that. So, for example, if you have a fiscal contraction that leads to an anticipation of a big cut in interest rates in the future, that may be expansionary; it can offset quite a bit of the fiscal contraction. And in fact, most of the time when you have episodes of fiscal consolidation in environments that are not of very high distress -- financial crisis and so on -- how successful it is depends a lot on whether people expect a sort of implicit deal between the central bank and the Treasury.
If people expect that fiscal contraction to come with much looser monetary policy conditions, then the fiscal contraction is not as contractionary as it could be otherwise. And if, for some reason, the perception of fiscal deficits was really dragging the economy down -- because people didn't know whether there could be a financial crisis in the near future and so on -- then you can get a situation in which the fiscal contraction today improves the perception of stability of the country in the future, which in turn may increase expected future income and be expansionary. So most fiscal contractions are contractionary. But there are some famous episodes of what are called expansionary fiscal contractions. One of the classic, best-known cases is Ireland in the late '80s. Ireland had massive fiscal deficits relative to GDP, and all anyone talked about was those deficits. The economy was really stagnating, going through cycles and so on, and it all revolved around this fiscal deficit. So towards the late '80s, they began a deliberate plan of fiscal consolidation -- fiscal consolidation means, essentially, reducing the deficit. And they were very successful, as you can see. But contrary to expectations, output growth did not decline; they actually went on to have a very good period. So it was all about expectations. Notice that unemployment, though, did go up. So despite the fact that you got more unemployment, output began to grow, because firms began to invest more and consumers became more optimistic. In fact, you see the household savings rate declined dramatically. People consumed more and invested more because everything looked a lot better: they had been struggling with this for a very long time, and they had finally gotten it behind them. Now, this example is abused by almost anyone that wants to cut fiscal expenditure. There is a whole spectrum of experiences. But in situations as extreme as this one, it clearly proved to be very effective. So that's that. So let me take stock. The role of this lecture was to say something that I should have said earlier on, but it would have been a bit confusing, so I decided not to talk too much about it. But it's very important. Expectations play a central role in economics. In particular, expectations influence aggregate demand. And for us, this course was a lot about aggregate demand -- except for the part on growth, it was a lot about aggregate demand. Now, we did talk about expectations, but mostly in the context of aggregate supply. Remember, when we talked about the Phillips curve, we did have expectations, because wage setting was a function of expected prices and so on. So we did talk about the role of expectations in aggregate supply, very quickly. But I think a much bigger role is played by expectations on the aggregate demand side, and certainly in asset prices. And aggregate demand and asset prices are connected, because asset prices are about wealth and the value of future cash flows, which are, more or less, the same drivers as for investment and consumption.
And finally, I want to say that many times, when you find episodes of fiscal -- sometimes even monetary -- policy that are counterintuitive, it's entirely due to the expectations part. So in this case of fiscal consolidation, it's not that the cut in fiscal expenditure itself was expansionary. It was not; that was contractionary. But it was more than offset by the improvement in the outlook. And that also happens with monetary policy. Countries that have high inflation problems sometimes have to go through dramatic tightenings. Yes, most of them get a recession. But sometimes those recessions are very short-lived, because eventually the reduction of the instability caused by high and unstable inflation ends up dominating any direct contractionary effect of the monetary policy.
MIT_1402_Principles_of_Macroeconomics_Spring_2023
Lecture_25_Quiz_3_Review.txt
[SQUEAKING] [RUSTLING] [CLICKING] RICARDO CABALLERO: The main topic in this part is really the open economy. And so we extended the IS-LM model. We again shut down price changes, so prices are completely fixed -- no Phillips curve here. So we expanded the IS-LM model to add this open economy dimension. We start from the same aggregate demand function we had in the closed economy -- consumption plus investment plus government expenditure -- but now we have to draw a distinction between demand by domestic households, companies, and the government, and the demand for domestically-produced goods. So Z is the demand for domestically-produced goods, which is equal to domestic demand, plus the demand that foreigners have for the goods produced at home, minus imports -- the part of domestic expenditure that goes to goods produced by other countries. And so the new behavioral functions here were the export function and the import function. Exports are increasing in foreign output: more income abroad will lead to more imports by them, which means more exports for home. And exports are decreasing with respect to the exchange rate. The real exchange rate and the nominal exchange rate are the same here, since we have fully sticky prices. If the real exchange rate appreciates, domestic goods are more expensive, so foreigners are going to buy less of our goods, and exports fall. Conversely for imports -- they're like the exports of the other country. If domestic output goes up, there will be more purchases of foreign goods. And if the exchange rate appreciates, foreign goods are cheaper for us, and therefore we import more. OK, so positive in both. So those were the two new behavioral functions in the goods market, expanded to include the open economy. And that had implications for the diagram we had in lecture three or so to determine equilibrium output. We started from the same demand we had in the closed economy. Then we had to subtract imports, and that shifts things down, because now part of the domestic demand is going to foreign goods, not domestic goods. But it also rotates the curve, because the higher is domestic income, the more we import from the rest of the world. Now to that we have to add the exports, which are not a function of domestic output -- that's a parallel shift with respect to this curve; we go up. And that gives us the ZZ curve, which is what we call the demand for domestically-produced goods. Now notice that the distance between the demand for domestically-produced goods and the domestic demand for goods is net exports -- the distance between ZZ and DD is net exports. So at this point here, for example, ZZ is higher than DD, which means our exports are greater than our imports, and that's the reason you have a trade surplus. At this point they're the same, and that's the reason the trade account is balanced. But over here, imports exceed exports, and that's the reason we have a trade deficit. I'm going to go very quickly, so you're in charge of stopping me. I'm not going to ask you questions; just stop me if there's something that needs clarification. OK, so that's the demand for domestically-produced goods. Now we determine equilibrium output in this open economy context. That means aggregate demand for domestically-produced goods has to be equal to output, and that's what we do with the 45-degree line here.
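A minimal linear sketch of that open-economy goods market. All coefficients are illustrative assumptions, not numbers from the lecture; the signs follow the behavioral functions just described (exports up in Y star, down in E; imports up in Y and in E).

def domestic_demand(y):                  # DD: C + I + G, increasing in income
    return 20 + 0.6 * y

def exports(y_star, e):                  # X: up in foreign income, down in E (appreciation)
    return 0.3 * y_star - 10 * e

def imports(y, e):                       # IM: up in domestic income, up in E
    return 0.2 * y + 5 * e

def zz(y, y_star, e):                    # demand for domestically-produced goods
    return domestic_demand(y) + exports(y_star, e) - imports(y, e)

def net_exports(y, y_star, e):
    return exports(y_star, e) - imports(y, e)

print(zz(100, 100, 1.0), net_exports(100, 100, 1.0))   # e.g. ZZ = 75, NX = -5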
And so where the 45-degree line intersects this ZZ curve, that's our equilibrium output. Now it happens that in this example, that leads to a trade deficit. But nothing forces trade to be balanced in equilibrium: we determine the equilibrium output up here, and then we read off this bottom curve what the implication is for the trade deficit or surplus. The important equilibrium condition is that domestically-produced output has to be equal to the demand for domestically-produced goods -- not total demand. Because this is a Keynesian model in which output is aggregate-demand determined, but it has to be aggregate demand for the things you're producing, not aggregate demand for all goods around the world. OK, good. So then we did some experiments. Suppose, in this open economy context, we increase government expenditure. The curve will shift up in exactly the same way as in the closed economy. The difference will be in the multiplier, though: as output goes up as a result of the expansionary aggregate demand, domestic income goes up, which means imports go up, so part of that demand goes to foreign goods. And that's the reason this ZZ curve has a lower multiplier; it's flatter than the DD curve. Still, if we start, for example, with balanced trade, since imports are going to increase as a result of this expansionary fiscal policy, we're going to end up with a trade deficit. And that's why the response of output is smaller than in the closed economy: part of it goes to foreign goods. Conversely, if some other country does an expansionary fiscal policy, or anything that leads to higher output abroad, Y star, that's also expansionary for home, because the export function goes up, and that leads to an increase in output -- still with a lower multiplier, because part of the increase in domestic output goes to imports. But in this case, unlike the other one, the trade balance actually improves, because it's being pulled by exports. At impact, we get a big increase in exports, which is the driver of the increase in demand for domestically-produced goods. Then, as income goes up, we import some more, but you end up with a better trade balance than in the case where you induce the expansion in aggregate demand at home. Then the last step there was to look at the role of the exchange rate. And what we said is, we're going to make some assumptions -- and I've now read the quiz, so I can guarantee this promise -- that nothing weird will happen, meaning if our goods get more expensive, net exports will worsen. For at least one reason, but it could be two. If the exchange rate goes up, there are going to be fewer exports at any given level of foreign income; that worsens net exports. And then we're going to tend to import more. Now, that will be partially offset by the fact that you can buy more with the same amount of dollars, but we said we're going to impose conditions such that the negative effect of an appreciation on net exports always dominates. Again, in your quiz you're going to have a situation like that, and that will be the case -- don't think that we're trying to trick you. The point of this being that depreciating your currency, making your goods less expensive, will produce a response equivalent to what you get here out of an increase in Y star.
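The smaller multiplier can be seen directly. With an assumed marginal propensity to spend c1 = 0.6 and marginal propensity to import m1 = 0.2 (both made-up numbers), part of each round of extra income leaks into imports:

c1, m1 = 0.6, 0.2
closed_multiplier = 1 / (1 - c1)        # = 2.5
open_multiplier = 1 / (1 - c1 + m1)     # ~ 1.67: the flatter ZZ curve
print(closed_multiplier, open_multiplier)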
That is, exports will go up, and you get that whole shift: the net export function goes up, that increases aggregate demand, and so on. So that's the kind of thing countries typically want to do when they're in a recession. Then that was the introduction to the most important lecture in this part of the course, which is the Mundell-Fleming model. And I promised that you would get at least 70% in the quiz from this model -- I already read the quiz, so I can tell you that at least 70% of your points have to do with it -- so you'd better understand it very well. Do every single comparative statics exercise you can imagine with this model, and you'll get at least 70%. I think you get 73%, actually. So what is this? The Mundell-Fleming model is simply what I just showed you -- the goods market equilibrium -- but with an endogenous exchange rate. So we rewrote it and said, since we're assuming completely sticky prices, we can replace the real exchange rate with the nominal exchange rate, but now we're going to endogenize the exchange rate. And for that we're going to use the uncovered interest parity condition. This is a condition you should understand very well as well. It tells you, essentially, that the expected returns of the two bonds -- the bond issued in foreign currency and the one in domestic currency -- have to be the same. And this condition ensures that, because if, for example, the domestic interest rate is higher than the international interest rate, you need to expect a depreciation of the domestic currency; otherwise the expected returns would not be the same. And that's the reason that, when we add the assumption that the expected exchange rate is fixed, at least temporarily, an increase in the interest rate leads to an appreciation of the exchange rate. Why? Because if the exchange rate appreciates today but the expected future exchange rate stays put, the appreciation will have to be undone, and that means an expected depreciation. So that's very important. You need to understand this: for a given expectation of the exchange rate, an increase in the domestic interest rate appreciates the domestic currency, and an increase in the foreign interest rate, without us matching it, will lead to a depreciation of the domestic currency. So that's what you have there. That's important. Now notice that if the expected exchange rate goes up and the interest rates do not change, then the current exchange rate has to go up. Because if it didn't, you would have an expected capital gain on the currency -- an expected appreciation -- and that would add to the expected return of owning domestic bonds. OK, good. So we characterized that interest parity condition as follows. Here we are plotting the domestic interest rate, and here the current exchange rate. And this is a curve that traces out the uncovered interest parity condition. Naturally, when the domestic interest rate is equal to the international interest rate, it has to be the case that the exchange rate is at the same level as the expected exchange rate. So if we're here, we know that the point on that curve is the one at which the exchange rate is equal to the expected exchange rate. That's what we have. Good. So you should understand this curve and know what moves it. Here it's very clear what moves it, no?
There are two things that can move this curve. One is a change in i star; the other is a change in the expected exchange rate. What happens if i star goes up? You know that the UIP curve will shift, and you know that the point equivalent to that one -- the one at which the exchange rate is equal to the expected exchange rate -- will have to have a higher domestic interest rate. Because if I'm bringing this up and I still want to look at the point at which E is equal to the expected exchange rate, then I have to move i up by the same amount. And so I know that when i star goes up, this curve moves up, or to the left -- you pick which way you analyze it. Now what about the expected exchange rate? If the expected exchange rate goes up and the international interest rate hasn't gone up, and the domestic interest rate doesn't go up either, then that means the current exchange rate will have to go up too. So if this goes up, then at an interest rate equal to the international interest rate -- let's look in this direction -- we have a point around here. If that weren't the case, you would be expecting an appreciation, and again, that would be inconsistent with the UIP. Then we put things together. We used the UIP to replace the exchange rate there, and now we get this expression in the net export function. Now the LM is exactly the same as before: the central bank sets the interest rate. Here I'm writing it in terms of the nominal interest rate. I think in the quiz we wrote it in terms of the real interest rate, but it's the same, because prices are fixed, so real and nominal interest rates are exactly the same. Yeah. AUDIENCE: Is the x axis the expected exchange rate? RICARDO CABALLERO: No, it's the actual exchange rate. The expected exchange rate is in this curve here; that is a parameter. This happens to be a value of the current exchange rate equal to the expected exchange rate, which is convenient to plot, because that's also where the domestic interest rate, which is what I'm putting here, is equal to the international interest rate. That's all I'm saying. And then if you shift this to the right -- the expected exchange rate up -- then I know that the new point on this curve has to have a higher current exchange rate. So that I know: the equivalent of this point A is going to be to the right. If you lower the foreign interest rate, then I know that the point at which the exchange rate is equal to the expected exchange rate has to have a lower domestic interest rate, so point A will be around here, which is like a shift to the right. Anyway, as I was saying, nominal and real interest rates are the same here. Same model. So now you see that interest rates have two effects. One is the traditional effect on investment, but the interest rate also affects the exchange rate. So an increase in the domestic interest rate now will be doubly contractionary: it lowers domestic investment, which reduces aggregate demand, but at the same time it appreciates the exchange rate, and therefore reduces net exports. We're going to import more and export less, and that also reduces aggregate demand. So those are the two effects. That's the contribution of this whole exchange rate block to our IS-LM framework.
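The uncovered interest parity condition, solved for today's exchange rate, makes these comparative statics mechanical. This sketch uses the lecture's convention that a higher E is an appreciation of the domestic currency; the rates and expected exchange rate are illustrative assumptions.

def uip_exchange_rate(i_dom, i_star, e_expected):
    # E = E^e * (1 + i) / (1 + i*): expected returns on domestic and
    # foreign one-year bonds are equalized
    return e_expected * (1 + i_dom) / (1 + i_star)

e_e = 1.00
print(uip_exchange_rate(0.05, 0.03, e_e))   # i above i*: currency appreciates today
print(uip_exchange_rate(0.03, 0.05, e_e))   # i* rises, unmatched: depreciation today
print(uip_exchange_rate(0.03, 0.03, 1.10))  # higher expected E: appreciation now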
Mundell-Fleming is simply IS-LM plus a UIP condition and a net export function. That's it. So we put the two things together -- a standard IS-LM, now with a different slope and so on, because we have this net export function and more parameters: Y star, i star, and things like that -- and then we have the UIP there. And then we did a few experiments. Suppose you have a contractionary monetary policy: the central bank raises the interest rate. As before, with slightly different slopes because of the net export function, that will lower equilibrium output. And it will lower it for two reasons. As I said before, investment will decline, but also, a higher interest rate means an appreciation of the exchange rate today, because you then have to expect a depreciation in the next period. And that means fewer net exports. So the interest rate is contractionary for two different reasons here. Is that clear? Yeah. Raised interest rates will lower aggregate demand for the standard reason, but on top of that, we get an appreciation of the exchange rate, which also reduces net exports. What about an increase in government expenditure? It's the same as before, and nothing changes relative to before except for the fact that we have a lower multiplier. It's still expansionary, but it doesn't affect the interest rate, and it doesn't affect the exchange rate or anything like that. Again, it's less expansionary than in the closed economy, because part of that extra demand goes to imports. Then I went to this diagram and played with it. Suppose the expected exchange rate goes up -- which curves change? The first one that changes is this one: it moves to the right, so you get an appreciation today. And that also means that the IS will shift to the left. If the expected exchange rate goes up and you don't change monetary policy, the current exchange rate will appreciate. That will reduce net exports, and that's a shift of the IS to the left in this space. These two things are parameters now in the IS-LM diagram. What about foreign output going down? That doesn't affect the UIP condition, but it does affect net exports, so that moves the IS to the left. And the last thing we did was an increase in i star. With an increase in i star, at the same domestic interest rate, you need a depreciation of the currency today, because that will lead to an expected appreciation. So the UIP curve moves to the left, and the IS curve moves to the right, because the exchange rate will depreciate -- that's taking foreign output as given. If foreign output also changes, then you have to look at the combination of the two things. Then we said sometimes countries choose to fix exchange rates. And when you fix an exchange rate, and it's a credible peg, then the expected exchange rate is equal to the actual exchange rate and equal to some constant, and that implies immediately that the domestic interest rate has to be equal to the international interest rate. So if you fix your exchange rate to someone else's currency, you give up your monetary policy.
The monetary policy is run by a different country. OK, good. So that's a very important lecture. Play with it, please. Then we looked more carefully at different exchange rate regimes and the effectiveness of policy within each of them. In the flexible exchange rate system, the one we were discussing before, if a country is in a recession, you can use fiscal policy -- I showed you that before; it works well. And you can also use expansionary monetary policy, which will be very successful for two reasons: one, the traditional one, and second, it will depreciate your currency. OK, good. Now suppose a country is also in a recession but has a fixed exchange rate. You can still use fiscal policy -- there is nothing against that -- but you cannot use expansionary monetary policy. So that's a limitation of fixed exchange rates: you lose an important tool. Another problem that can arise with fixed exchange rates is speculative attacks on the currency. Sometimes the peg is not credible. Suppose people expect your currency to depreciate -- the expected exchange rate goes down -- and suppose you do want to keep your peg today. That's what typically happens: somebody speculates against your peg, and the central bank resists for a while. Short of closing the capital account and doing all sorts of things there -- which you haven't learned about, so don't worry -- the only tool you have to defend against a speculative attack on your currency, that is, to keep the exchange rate from depreciating today, is raising interest rates. So the defense of an exchange rate causes a recession at home. That's another problem that fixed exchange rates have. And then the deal seems pretty obvious: you don't want to have a fixed exchange rate. But I said, be careful, because flexible exchange rates are also not a panacea. You may get lots of volatility in the exchange rate, because the role of expectations is very important. This looks complicated, but it's essentially what we did later on when we priced equity and things like that -- the same sort of iterated substitutions. This was just meant to say that in a flexible exchange rate regime, once you endogenize the expected exchange rate, and you don't take it as a constant, it gets to be very complicated, because effectively the exchange rate is pinned down by expectations over an infinite horizon of interest rates at home and abroad. So there's lots of space for creativity and moving things around, and that's the reason exchange rates can be very volatile. OK, good. So that was it for Mundell-Fleming plus. Any questions about that? Because now I'm going to move to the next part. OK, so the next step was to look at asset prices, really -- valuations of assets in general that have cash flows in the future. An exchange rate is a little bit like that, by the way. The key thing was this: many financial or real assets -- or even your human wealth, which we'll discuss later on -- pay you some income today, but you're also expecting to receive income in the future. And this part was about how we value those cash flows that come in the future. And so we developed this concept of an expected present discounted value.
And we said a very natural way of bringing dollars received in the future to the present is to discount them by the interest rate between now and then. The logic is that if you give me $1 today, I can do a lot more with it than with $1 five years from now, because I can invest the dollar today and earn the interest rate return up to five years from now. So $1 today is worth a lot more than $1 five years from now; therefore, $1 five years from now is worth less than $1 today. How much less? You multiply by 1 over 1 plus the interest rate over that period. So that's what we did. Then I showed you a general cash flow. This is an asset that gives a cash flow zt at the beginning of this period, zt plus 1 at the beginning of the next one -- or at the end of this one, something like that. The first one you don't need to discount. The next one you do, because you're not receiving it now; you're receiving it a year from now. The one after that is two years from now, and you need to discount it more, because for two years you could have been earning the interest rate, and so on and so forth. This formula you need to understand. And I said, that's if you know the future. If you don't know the future, then you just replace the things you don't know with their expected values. And that's the approximation -- in reality, if you were to do this formally, it's a little more complicated, but for this course, that's all you do. Then I looked at some particular cases. This is the same case, but one in which the interest rate is constant. If you expect the interest rate to be constant, you get a simpler expression, because rather than a product of 1 plus the interest rate at different times, you get powers of 1 plus i. Another simpler one, obviously, is when all the expected payments are constant. And if the interest rate is constant and the payment is constant, you get even simpler formulas -- including the case in which the asset lives forever. And if you don't receive the first cash flow now but at the end of this year or the beginning of the next, it gets even simpler, like that. And you are going to get a question of this kind, in which you're asked to compare two different assets that have different profiles of cash flows. Then we talked about bonds and bond yields. Essentially we used the expected present discounted value formula, just for bonds. Bonds have a very particular profile of payments: typically some coupons and a final payment, which we call the face value of the bond. And we said a very important concept for bonds is maturity. Maturity is the number of years until the last payment on that bond. It doesn't matter whether you receive lots of little coupons along the way and one final payment, or no payment whatsoever until the last day: the maturity of a bond is the number of years until your last payment. So we gave some examples. There's a bond that pays nothing now but pays $100 one year from now. Its price is the discounted value of $100 for one year: 100 divided by 1 plus the one-year interest rate at time t.
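Here is a sketch of the kind of comparison described above: two assets with different cash-flow profiles, valued with a constant interest rate. The cash flows and rates are made-up numbers, purely for illustration.

def pdv_constant_rate(cash_flows, i):
    # PDV when the interest rate is expected to stay constant at i;
    # cash_flows[0] is received today and is not discounted
    return sum(z / (1 + i) ** t for t, z in enumerate(cash_flows))

asset_a = [100, 100, 100]        # pays early: today, year 1, year 2
asset_b = [0, 0, 0, 160, 160]    # pays later, but more: years 3 and 4

print(pdv_constant_rate(asset_a, 0.05), pdv_constant_rate(asset_b, 0.05))
print(pdv_constant_rate(asset_a, 0.01), pdv_constant_rate(asset_b, 0.01))
# at 5% the early payer is worth more; at 1% the late payer wins --
# heavier discounting hurts distant cash flows most

# and the "asset that lives forever" case: paying z from next year on,
# at a constant rate i, is worth z / i
print(10 / 0.05)   # = 200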
A bond that pays nothing for two years and then pays $100 -- that's the price of that bond: $100 discounted by that. Then we looked at arbitrage. Suppose you're considering investing your money for one year, and you have two options. One is to buy a one-year bond. The alternative is to buy a two-year bond now and sell it at the end of the year. Those two strategies should give you more or less the same return. If you buy a one-year bond, you're going to get 1 plus i1t at the end of the year. If you go through the two-year-bond strategy, you pay this today, and you expect to receive the price of a one-year bond one year from now. And we said these two things have to be equal -- more or less equal. Again, we're not adding risk to these things. If there are no risk considerations -- agents are risk neutral -- then these two things have to be equal. That allows you to solve for the price of a two-year bond as the expected price of a one-year bond one year from now, divided by 1 plus the interest rate. But the expected price of a one-year bond one year from now is like a one-year bond, just one year from now: 100 divided by the expected value of 1 plus i1t plus 1. I can stick that in there, and I get exactly the same expression. So these are two different ways of pricing a bond, or any other asset, actually. Then we defined the yield to maturity. That's an important concept. The yield to maturity is the constant rate that gives you the current price of the bond. We already determined that the price of a two-year bond is that. Now I look for a rate that is the same in both periods and gives me the same price -- that's the reason I have a subscript 2 here at time t. What is the constant rate such that 1 plus i2t times 1 plus i2t gives me exactly the same price as the one we already determined? That's what we call the yield to maturity, or the yield -- in this case, it would be the two-year rate. If you hear "the two-year rate," it's that. So the whole trick here is to find the i2t that makes this equal to that, since 100 is equal to 100. And you can show that, approximately, the two-year rate is like an average of the two one-year rates. This concept you should know. I said there are two forms of risk in a bond. One type of risk is default risk: what if the issuer of the bond doesn't pay you? Right now there's a huge issue with the US debt ceiling, because if somehow they don't fix that, there will be a default on some Treasury bonds. Let's hope that it doesn't happen. But that's default risk: whoever issued the debt, at the time it should be paying you a coupon or the principal, the face value, doesn't pay you. And typically, US Treasury bonds don't have that risk, so nobody worries about it. At this moment, though, the default risk priced into US one-month bonds is higher than that of bonds in Mexico or Brazil. That tells you the kind of situation we're in. But in any event, this is temporary default risk. Nobody expects in the US that the debt will not eventually be repaid.
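Those formulas are short enough to check directly. A sketch pricing the two-year bond and backing out its yield to maturity -- the current and expected one-year rates are assumptions, and no term premium is included yet:

i_1t = 0.04        # one-year rate today
i_1t1_exp = 0.06   # expected one-year rate one year from now

price_1y = 100 / (1 + i_1t)                       # one-year bond
price_2y = 100 / ((1 + i_1t) * (1 + i_1t1_exp))   # two-year bond via arbitrage

# yield to maturity: the constant i_2t with 100 / (1 + i_2t)**2 = price_2y
i_2t = (100 / price_2y) ** 0.5 - 1
print(f"P2 = {price_2y:.2f}, two-year yield = {i_2t:.4f}")
# the yield comes out near 0.05, roughly the average of 4% and 6%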
But you can cause a big mess just by delaying a coupon payment when these coupons are huge, and that's what's leading to all this concern. In any event, that's one type of risk. Corporate bonds have a lot of it, but we didn't look at that kind of risk much. We looked at another kind of risk, which is price risk. If you invest in a one-year bond, there is no price risk: you're going to get your face value of $100 at the end of the year, and that's it. If you go through the two-year strategy, there is a risk, because you don't know exactly what the price of the one-year bond will be one year from now. We are not looking at what risk-averse investors do and so on, but in reality there is such a risk. And the way we modeled it is: if I'm going to go through the two-year-bond route for a one-year investment, then I don't set this equal to the return I get on the short bond, the one-year bond; I have to add an extra risk premium. And then we wrote this formula using the same steps: the two-year rate is really the average of the expected one-year rates plus a premium. And we called that the term premium. You're more likely to face a question about the top of the slide than the bottom of the slide, but I don't remember fully. Stock prices and present value -- it's the same sort of idea. The only difference is that equities do not have maturity. Stocks do not have maturity; in principle, a company lasts forever. And the commitment is a lot shakier: yes, the company is likely to pay dividends, and it may announce a dividend policy, but it's not a commitment. Regional banks now are not paying any dividends because they want to preserve capital -- they could, but they're not, because they want to build capital to be more resilient to any further bad news. But anyway, with equity you always have this future price floating around, and you can keep substituting it multiple times. Essentially, you get to an expression that says, look, the price of equity is really the expected present discounted value of the dividends -- and that includes lots of uncertainty, because you don't know exactly what the interest rate will be in each period and so on. And there is always a remaining term out there, which also causes a lot of trouble. In practice, equities move a lot more than you can justify with the present value of dividends. There's a lot of volatility. There are bubbles. There are all sorts of things -- I told you the story of Newton and so on. So for bonds, those formulas are great. For equity, you're going to be pretty far off on actual prices if you use this type of formula. Still, people call this the fundamental value of equity, and the rest is the more speculative part. But the point is that the speculative component moves a lot; it's responsible for a very large share of the volatility in equity prices. In any event, I'm not going to ask you about this kind of stuff. Yeah. AUDIENCE: So that final equation on the slide, there is no expression for qt. RICARDO CABALLERO: It's right here. AUDIENCE: Oh, OK. RICARDO CABALLERO: It keeps going forever. It doesn't stop. It just gets discounted more and more, so you would expect it to be less and less important.
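A sketch of that "fundamental value" of equity: the expected present discounted value of dividends. The constant dividend growth rate, the constant interest rate, and the truncated 100-year horizon are all simplifying assumptions, not anything from the lecture slides.

def fundamental_value(d0, growth, i, horizon=100):
    # expected dividends d0 * (1 + growth)**t, starting next year,
    # discounted at a constant rate i; horizon truncated for simplicity
    return sum(d0 * (1 + growth) ** t / (1 + i) ** t
               for t in range(1, horizon + 1))

print(fundamental_value(d0=5.0, growth=0.02, i=0.05))   # ~ 161
print(fundamental_value(d0=5.0, growth=0.02, i=0.03))   # ~ 318: lower rates,
# much higher value -- which is the monetary policy point that comes next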
But if that remaining term is blowing up, then it may dominate the heavier and heavier discounting, even though it's further out in the future. And that's the way you create theories of bubbles. You can even come up with rational bubbles that way, but again, that's for a different course. What else? Then we looked at the effect of an expansionary monetary policy on asset prices. And we said, obviously, if you lower interest rates, that's going to increase the value of any asset that pays returns in the future. So it is typically the case that expansionary monetary policy leads to an appreciation of most assets. Bonds will certainly go up directly, because that's where the interest rate has its clearest effect, but it also tends to be bullish for equity. It's not just the interest rate, though: a lot of the response of equity to news has to do with the expected behavior of the Fed in the future. Do you think this will lead them to raise interest rates or to lower them, and things of that kind? But again, I think that's a little too complicated for you for now. Then the question was: what is the effect of an increase in consumer spending on asset prices? Well, that depends. It's clear that if consumers become more bullish, that will tend to generate more cash flows for firms, so equity at least will go up. Bonds, no, because the coupon is set, fixed; it doesn't depend on whether the economy is doing better or worse. I'm assuming there is no default risk. But it also depends a lot on what you expect the Fed to do. If you think this is going to trigger a Fed hike, then it's bad news for bonds, because bonds do not benefit from the extra economic activity, and they get hurt by higher interest rates. So it depends a lot on what you anticipate the Fed to do. But again, I think this is a bit more complicated than what you need to know. OK, the last step was to bring expectations into the IS-LM model. The IS-LM model we discussed through the course, except for the part where we put in the exchange rate, where we had to think about future exchange rates and things like that, really overweighted the present. In reality, expectations matter a lot for consumers' decisions, for firms' decisions, and so on. The future probably matters even more than the present. So what we did is we expanded the IS-LM to include expectations. This part will show up on your test, so you should understand what this IS-LM model is and be able to do the comparative statics that correspond to it. What we did here is say that consumers not only worry about current disposable income; they also worry about the income they will receive in the future, through financial assets, financial wealth, or through their future labor income. That's what we call human wealth. The point is that expectations about the future matter for consumption. In the first part of the course we summarized all of that in that little parameter c0: we said consumers can be bullish or not. A lot of what happens here is what shifts c0 in the first part of the course. And this also highlights an important concept: typically, if you expect something to have only a temporary, transitory consequence, it will move consumption little relative to when you expect the change to be permanent.
So if you expect current income to be up but future income to go back to a lower level, that's not going to change current consumption a lot. However, if you think there is a change that will increase consumers' income permanently, that will increase not only current disposable income but also human wealth, and that will lead to a much larger response of consumption. We did more or less the same for investment. Obviously, what matters for investment is future cash flows. There we also talked about the concept of depreciation, but what really mattered was the expected present discounted value of the cash flow generated by an extra unit of capital. So, an expected present discounted value formula again. In the first part of the course, we just looked at an investment function that had output in it and an interest rate in it. Now we have something more complicated: it has future output, which serves as a proxy for future cash flows, but also current and future interest rates, because those affect the value of the future cash flows in terms of today's dollars. We put all of this together, and we ended up with an expanded aggregate demand in which we have the same parameters as in the static model without expectations, but now the same things are repeated one year ahead. What matters for aggregate demand is not only the income consumers receive today or the sales firms make today, but also what they expect next year; not only the taxes they pay today, but also what they expect to pay in the future; not only the interest rate today, but also what they expect the interest rate to be in the future, and so on. So the bottom line is that if we now look at the IS-LM model, we have lots more parameters. All these things that happen in the future are new parameters. And notice that this IS curve is now a lot steeper. Why is that? Because if you change the interest rate today without changing the interest rate in the future, that has a small effect. So the IS becomes very steep. But the equivalent of what we did in the static model is a situation where you cut the interest rate today, say the central bank cuts the interest rate today, and it also convinces the public that it will keep the interest rate low in the next period. Then not only do you move along the curve [INAUDIBLE], you also persuade the public that the interest rate will be lower in the future, and that shifts the IS to the right. And therefore you get a much larger kick out of monetary policy. Monetary policy is a lot about forward guidance: you cut interest rates today, and there is always a speech after they take the policy action in which they talk about how they see interest rates going in the future. That's because you want to have maximum power. If you just tell the markets, I'm going to change the interest rate for now and then nothing else, that's going to have a very limited impact. To have a large impact from monetary policy, you have to convince them that you will also affect the interest rate path in the future. Same sort of situation here. The other parameters: what happens if, for example, you expect future output to go up? That's going to shift IS to the right.
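Before going on, here is a minimal sketch of the temporary-versus-permanent point, under the simplifying assumption (mine, not the slide's) that consumption responds in proportion to the change in the present value of income:

```python
# Temporary vs permanent income changes and human wealth (assumed numbers).
r = 0.05                                   # assumed constant interest rate

def human_wealth(income_path):
    # present value of an income path; income_path[0] is received today
    return sum(y / (1 + r) ** t for t, y in enumerate(income_path))

base      = [50] * 40            # flat income for 40 years
temporary = [60] + [50] * 39     # +10 this year only
permanent = [60] * 40            # +10 every year

mpc = 0.05   # assumed propensity to consume out of wealth
for name, path in [("temporary", temporary), ("permanent", permanent)]:
    dW = human_wealth(path) - human_wealth(base)
    print(f"{name}: wealth +{dW:.1f}, consumption +{mpc * dW:.2f}")
```

The permanent change raises wealth roughly eighteen times more than the temporary one here, which is the sense in which permanent changes move consumption much more.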
That's yet another reason why convincing people that you're going to cut interest rates in the future as well, that you're going to keep them low, shifts IS even more: if you're going to keep the interest rate low in the future, that probably means future output will be higher. And since future output is higher, that increases human wealth, and that means consumption will tend to go up. Do play with this. Again, it's important to keep the distinction between the impact of temporary things, which is much smaller, and the impact of permanent things, which is bigger because it affects wealth. Oh, that's an example. So for monetary policy, again: if you don't persuade the public that you're going to change the interest rate in the future, then it's just a movement along the curve; but if you also convince them that you will maintain loose monetary conditions next year, then that effectively shifts the IS to the right, for at least two reasons, and that's much more expansionary. The last thing we covered was fiscal policy. I said a fiscal contraction today is contractionary; there is no doubt about that. But there are episodes, and I showed you the Irish episode, in which it may end up going the other way around: you cut expenditure today, which is contractionary, but you end up with an expansion. The only way that can happen is if somehow you affect expectations in a very significant way. So, as I said, if you ever see a strange, seemingly perverse response to a policy announcement, it's probably because there has been a big effect on expectations. I showed you the case of Ireland because it was a famous case. Everybody talked about the fiscal deficit being a big drag on the economy, about a big day of reckoning coming, and so on and so forth. Once they dealt with it, expectations turned: people realized that interest rates could come down, that the malaise in the economy was going to go away, so they became optimistic about the future, and Ireland ended up with an expansion. That shows you how important expectations are. So for economic policy in general, the direct, immediate effect is what we have been discussing throughout the course, but a lot of its power, and even the perverse or good synergies you get out of it, has to do with what it does to expectations. OK, good.
MIT 14.02 Principles of Macroeconomics, Spring 2023
Lecture 22: Financial Markets and Expectations
RICARDO CABALLERO: Today we're going to talk about a very important topic in economics, which is expectations. We have barely mentioned expectations so far: we talked about them with the Phillips curve, we talked about them when we discussed the UIP, and so on, but expectations are a much bigger issue in economics. In fact, most decisions by firms, by consumers, by governments involve considerations of the future. And they play an even bigger role in finance, in which essentially everything is about the future. The price of an asset today is meaningless in itself; you have to compare it with what you expect to get out of that asset in the future. So it's all about expectations. That's what we're going to do today: talk about expectations, how to value things that you expect to receive in the future, and how to compare those things with things that you have in the present. But before doing that, let's talk a little bit about the news. Who knows what First Republic Bank is? Remember that a few weeks ago we discussed Silicon Valley Bank: it was the second-largest bank, in terms of assets, to collapse in US history. The first one was many years ago, and then we had this bank with more than $200 billion in assets that essentially collapsed in a few days. It was a run on deposits. They had problems before, but what really did it, as is always the case with banks, is a run on deposits, on funding. Well, it's no longer the second-largest collapse in US bank history. Over the weekend we got a new second-largest bank to collapse, First Republic Bank, which was essentially liquidated and sold to JPMorgan very early this morning. So if you have an account at First Republic Bank, you are soon likely to have an account at JPMorgan. Again, what made it collapse was something very similar to what brought down Silicon Valley Bank: they had invested in a series of things that were very vulnerable to the fast pace of interest rate hikes in the US. And when they had those losses, depositors became worried, and eventually they decided not to wait, just to run and see what happened. First Republic Bank lost about $100 billion in deposits just last week, the last few days of last week. So it was obvious that it was not going to survive, and that's the reason something was arranged over the weekend, to avoid the panic associated with the collapse of a bank. But by the way, this is all about expectations: if people had expected the deposits to remain in the bank, then probably this bank would not have collapsed. It's all about people anticipating what other people will do, and so on and so forth. OK, but now let me get into the specifics of this lecture. There you have the most important equity index in the US, the S&P 500. It's a very inclusive index that captures most of the large companies in the US, all of the largest, I think. It's a weighted average, weighted by market capitalization, of the main equity shares in the US. And one thing you see is that it moves around a lot.
Here, for example, when we became aware that COVID was going to be a serious issue, the US equity market collapsed by 35% or so. That's a very large collapse in a very short period of time. And then, as a result of lots of policy support, we had a massive rally: up to the end of 2021, the equity market had rallied by 114%. Then we got inflation, and the Fed began to worry about it, so they began to hike interest rates, and that eventually led to a very large decline in asset prices, about 25% from the peak to the bottom. And since the bottom, which was more or less October of last year, we have seen a recovery of about 16% in the equity market. If you look at the NASDAQ, which is another index, heavily loaded toward technology companies, you can see swings that are even larger. Now why do these prices move so much? Well, a lot of it has to do with expectations. Are things going to get worse in the future? Will the Fed cause a recession? How much higher will interest rates be? Things like that matter a great deal. Another thing that matters a great deal is how much people want to take risk at any moment in time. If you're very scared about the environment, you're unlikely to want to invest in something that can move so much. This is what's called risk-off: when people don't want to take risk, the prices of risky assets tend to collapse, and equity is a very risky asset. But that's not the only thing that moves these assets around. It's not just the risk that the underlying companies may go bankrupt or anything like that. Here you have, for example, the movement of an ETF, but that detail doesn't matter: it's a portfolio of US Treasury bonds of very long duration, maturities beyond 20 years. These are incredibly safe bonds, because they are US Treasuries, so there's no risk of default or anything like that. Still, the price swings can be pretty large. Over this period, you see an increase in value of 45%, then a decline of about 20%, another increase of 15%. And here there was a huge decline of 40%. What do you think happened here? Why this big decline in bonds? You're going to be able to answer that very precisely later on, but I can tell you in advance: it was essentially the result of monetary policy tightening. Increasing interest rates caused the bonds to decline. So even though these instruments are very safe, in the sense that if you hold them to maturity you'll get your money back and all the promised coupons along the path, still the price can move a lot. And it's obvious that that movement in price is something you need to explain in terms of expectations, of what people expect to happen. In this case, it's not about whether people expect to get paid or not, because you will get paid; it's about expectations of future interest rates. If you think interest rates will be very high, then the price of bonds will tend to be very low, and so on. It's all about the future. So a key concept that we're going to discuss today, and that you're then going to use to price specific assets, is the concept of expected present discounted value. And this is a loaded concept: there are lots of terms in there, and we need to understand what each of them means.
So the key issue we're going to discuss is how you decide, for example, when you see the price of an asset out there at 100, whether that price is fair, whether it looks cheap or not. That question means you have to decide whether the price you're paying today is consistent with the future cash flows you're going to get from the asset. That's the reason you buy an asset: because you'll get something in return in the future. But how do we compare the price today with those things that will happen in the future? Answering that question, which is what we're going to do in this lecture, involves the following concepts. First, expectations. Big thing. This is expected present discounted value; the E part is for expectations. The expectations are crucial because these are things that happen in the future. Even if it's a bond that promises to pay you $0.50 per dollar every six months, you still need an expectation: if it's a bond issued by First Republic Bank, it may not pay. So a crucial term is expectations. Then you need some method to compare payments received in the future with payments made today. If you buy an asset, you pay today, but you receive the returns on that asset in the future. How do you compare those? Suppose I pay one today and receive one one year from now. Does that seem like a good asset? Probably not. And that's what the word "discounted" really means. When you say expected present discounted value, it says that things you receive in the future are valued less than things you have today. If you tell me you're going to pay me $1 in the future and I have to pay you $1 today, most likely I won't take that deal. In other words, I'm discounting the future. How do we discount the future? That's something we now have to figure out. So let me first shut down the expectations part; we'll reintroduce it later. Assume for now that you know the future. I'm going to derive all the equations assuming you know the future, so there's no issue of trying to figure out what the future is. But you still have to decide what the right value for an asset is. OK, so let's start with the case where you know the future, and let's try to understand how we value flows at different points in time. The easiest thing is to think first about an asset that gives you $1 in the future: how much is it worth today? The easiest way to get to that value is to think about the alternatives. Suppose I have a dollar today; what can I do with it, in terms of investment? Well, suppose that 1-year Treasury bonds are available and that the interest rate on them is i. If you have a dollar, you have the option to invest it in that bond, which will give you 1 plus i dollars next year. Well, that means I can get $1 next year by investing 1 over 1 plus i dollars today, because if, rather than $1, I invest 1 over 1 plus i today, I multiply it by 1 plus i and I get my dollar in the future. So say the interest rate is 10%: then with $1 today, I can get $1.10 in the future.
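A one-line check of that logic in Python, using the assumed 10% rate:

```python
i = 0.10
pv_of_1_next_year = 1 / (1 + i)               # about $0.91 today
print(round(pv_of_1_next_year, 3))
print(round(pv_of_1_next_year * (1 + i), 2))  # invested at 10%, it grows back to $1
```

So $1 a year from now is worth roughly ninety cents today, which is exactly the comparison the next step runs with.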
That means that by investing roughly $0.90 today, I can get a buck in the future. So $1 in the future is equivalent to about $0.90 today under that assumption. That's the reason that the deal I described, an asset that costs you $1 but gives you $1 in the future, is not a good deal if the interest rate is positive. If the interest rate is 10%, then the fair comparison is $0.90 with $1, not $1 with $1. So that's the discounting of the future. The most obvious way of discounting the future is to discount by the interest rate. Which interest rate to pick? That's more subtle. It depends on risk; it depends on many other things, which we'll discuss to some extent here. But for now, let's make it very simple: in a world in which you really know the future, the right interest rate to use is the safe interest rate, the rate on Treasury bonds and things like that. So that's that. What about $1 that you receive two years from now? What is its value? Well, I can use the same logic. If I have $1 today, I can convert it into 1 plus it times 1 plus it+1 dollars two years from now. Say the rates are 10% and 10%: I get 1.1 next year, and then 1.1 times 1.1, which is 1.21. That's my final result. Well then, how much is an asset worth that gives you $1 two years from now? It's going to be $1 divided by the product of those gross interest rates. Why is that? Because with that amount of dollars today, about $0.80, I can generate $1 two years from now. That means $1 two years from now is worth about $0.80 today. We're going to use this type of logic a lot, and I know it may not be that intuitive the first time you see it, so ask questions. You want me to repeat it? OK. The final goal is the following. What comes next, as happens with many decisions in life but particularly with financial assets, is that we're going to try to value something whose payoffs happen at different times in the future. The question is: how do I value an asset that pays me $5 one year from now, $25 three years from now, minus $10 ten years from now, plus $50 one hundred years from now? What is the value of holding an asset like that? I need some method to bring it all to today's value, because today I have a meaning for what $1 is, and therefore I can compare it with whatever price people are asking for that asset. So what this slide is doing is exactly that: it's telling you how to convert $1 at different points in the future into dollars today. And by that logic, the recipe is: use the interest rate, because you can always go the other way around. You can ask: with $1 today, how many dollars can I get two years from now? Say x. Well, then $1 there is worth 1 over x dollars today, because 1 over x times x is 1. That's the logic. So let's do the first problem here. With $1 today, I can generate, say, $1.1 at t equal to 2. Then the question I want to answer is: how much is $1 received at time t equal to 2 worth today? Because an asset is something that will pay you in the future, I want to know how much $1 received in the future is worth today.
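The scattered-payoff example above is easy to do mechanically; here is a sketch with an assumed constant 5% rate (the lecture doesn't fix a rate for this example):

```python
# Value an asset paying $5 in year 1, $25 in year 3, -$10 in year 10,
# and $50 in year 100, discounting at an assumed constant 5%.
i = 0.05
cash_flows = {1: 5, 3: 25, 10: -10, 100: 50}
pdv = sum(z / (1 + i) ** t for t, z in cash_flows.items())
print(round(pdv, 2))  # about 20.6; the $50 a century out contributes only ~0.38
```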
And the answer, from this logic, is that with 1 over 1.1 dollars today, I can convert it into 1. How do I know that? Because 1 over 1.1 times 1.1 is equal to 1. If I invest that many dollars today, I get this return on it, and the product gives me my dollar. So if I ask, do you prefer to have $1 two years from now or today, you say: obviously I prefer it today, because I can turn it into $1.1 two years from now. But then the more relevant question is: do you prefer to have $0.90 today versus $1 in the future? Then I need to do my multiplication: I have to multiply the $0.90 by the 1.1 and see whether I get something comparable to $1 or not. That's the logic behind it. So the interest rate is what we discount the future by. And it's natural: if the interest rate is 0, say, then $1 received two years from now and $1 received today are the same, because if I invest $1 today and the interest rate is 0, I get my $1 two years from now. If the interest rate is 50%, it makes a big difference whether you receive the dollar today or two years from now. If you're in Argentina and the interest rate is, I don't know what it is, 700%, it makes a huge difference whether you receive it one year from now or today. So that's the role of the interest rate. The higher the interest rate, the less $1 received in the future is worth relative to $1 received today, because you can get a much higher return from the dollar you have today. If the interest rate is low, there isn't much difference. OK, good. So this is a big principle, and everything I say next builds on this logic. So let me give you a general formula. Let's ask: what is the value of an asset that pays zt dollars this year, zt+1 one year from now, zt+2 two years from now, and so on for n more periods? Well, I just need to do several of these operations. I know that $1 received this year is worth $1; that's the zt term. $1 received one year from now is not the same as $1 received today; it's the same as 1 over (1 plus it) dollars received today. So that cash flow from this asset is worth that amount. For something received two years from now, it's worth even less: the factor is 1 over (1 plus it)(1 plus it+1), and I have to multiply that by the number of dollars I will receive two years from now. And I keep going. So that's the present discounted value: "present" because I'm bringing all these future cash flows to the present, which is what each of those factors is doing; "discounted" because the interest rate is discounting things, making them smaller; and "value" because I'm reducing everything to a current value. That's a general formula you need to understand. That was an asset that gives you zt dollars today, zt+1 one year from now, zt+2 two years from now, and you use this formula and keep going. What if we don't know the future? Remember, I had shut down the expectations part. Well, if we don't know the future, then the best we can do, or at least all that we'll do in this course, though in practice people do fancier things, is the following.
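Here is a minimal sketch of that general formula for the known-future case, with year-by-year one-year rates (assumed numbers):

```python
# V_t = z_t + z_{t+1}/(1+i_t) + z_{t+2}/((1+i_t)(1+i_{t+1})) + ...
def pdv(payments, one_year_rates):
    # payments[0] is received today; one_year_rates[k] is the one-year
    # rate from year k to year k+1, assumed known for now.
    value, discount = payments[0], 1.0
    for z, i in zip(payments[1:], one_year_rates):
        discount /= (1 + i)
        value += z * discount
    return value

print(round(pdv([10, 10, 10], [0.03, 0.07]), 2))  # assumed example: 28.78
```

Once the future is unknown, the z's and i's beyond today get replaced by their expectations as of time t, which is exactly the next step.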
All you can do is replace the known quantities with expectations. That's the closest you can get. So: I know zt, the cash flow I get now, but I don't know zt+1, so I replace it with its expectation. I do know the interest rate on a 1-year bond from today to one year from now, so I don't need an expectation there. But I don't know what the 1-year rate will be one year from now, so I need an expectation there. And so on. I don't know what the cash flow will be two years from now; I have a hunch, an expectation, but I don't know it. So all I've done here is acknowledge that the earlier formula knew a little too much: it knew exactly what the cash flows were going to be in the future, and it knew what the 1-year rates were going to be. This version knows less. It knows the cash flow today and the interest rate today, but for the cash flows one, two, three years out, and for the future 1-year rates, it only has expectations. And here the concept of time is important: these are expectations as of time t, meaning given the information you have available at time t. You make forecasts about the future using whatever you want, machine learning, whatever, but with the information you have at time t. At t plus 1, you have more information, so you make another forecast, and so on. Since we're valuing an asset at time t, all these expectations are taken as of time t. That's the reason the current terms don't have expectations in front of them: you know them at time t. Had we taken the value at t minus 1, we would not have known them, and they would have carried expectations as of t minus 1. OK, so that's your big formula there. There are some well-known special cases with nicer expressions, so let me show you. One is the valuation of the same asset when the interest rate is constant. Then obviously I don't need all those products in the denominator; with a constant interest rate I just get powers of it. Another is constant payments: the interest rate may differ over time, but the payments are the same. So those are two easy formulas. And if both are constant, the interest rate and the payment, you get a nice expression. You recognize that with a constant interest rate, the value is a declining geometric series: the value of the payment two years out is weighted by the square of 1 over 1 plus i, a number less than 1, then the cube, and so on. So it's a geometric series declining at the rate 1 over 1 plus i. Constant rate and payment forever: suppose you have an asset that lives forever. There are some bonds like that; they're called perpetuities. The UK has issued them, and so on.
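A quick numeric check of the constant-rate, constant-payment case against its geometric-series closed form, with assumed numbers:

```python
# n payments of z, the first one today, at a constant rate i (all assumed).
i, z, n = 0.05, 100, 10
q = 1 / (1 + i)                                 # the geometric ratio
direct = sum(z * q ** t for t in range(n))
closed_form = z * (1 - q ** n) / (1 - q)
print(round(direct, 2), round(closed_form, 2))  # both 810.78
```

Let the asset live forever and the same ratio gives the perpetuity case discussed next.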
So that's an asset, for example, that pays you a fixed amount forever. If the interest rate is constant, that's a trickier sum, but you can see that the remaining term goes to 0, so the value of the asset is the expression on the slide. And a formula you may see very often used as a first approximation is this one: the same asset, but ex-dividend or ex-coupon, after the coupon of this year has been paid. So it's an asset that starts paying at t plus 1: zt+1, zt+2, and so on. Well, that is the same as the previous one minus the first coupon, so it's equal to that. That's an interesting thing. Look what happens to this asset as the interest rate goes to 0. This is an asset that lasts for a very long time, and we got a valuation formula. What happens as the interest rate goes to 0? AUDIENCE: The value becomes very large. RICARDO CABALLERO: Very large. It goes to infinity. And a lot of what has happened in global financial markets in the last few years has to do with that. Interest rates were very, very low, and so most assets with long duration had very high values. Monetary policy has a lot to do with that; whether it was the right monetary policy or not is something to be discussed. I think, on average, it was the right monetary policy, but one of the things it did was increase the value of many assets. In fact, that's one of the mechanisms through which monetary policy works in practice. It's not something we have discussed, but you can begin to see it here: if the value of all assets goes up a lot, people feel wealthier, and they will tend to consume more, and so on. This is one of the channels of monetary policy. By the way, this effect also happens with assets that have a finite end date; it's just maximized when the asset lasts forever. This asset literally goes to infinity as the interest rate goes to 0. If an asset lasts for 10 periods, it doesn't go to infinity; it goes to n times z. You see that? If an asset lasts for n periods and gives me a payment of z in every single period, then when the interest rate is 0, that asset is worth n times z, because I will receive n coupons of z and I don't discount the future, since the interest rate is 0. When the asset lasts forever, n times z is a very large number, and that's what this expression captures. OK. So let's talk about bonds now; we're going to start pricing bonds. Bonds differ along many dimensions, but one that's very important for bonds is maturity, the n I had in the previous expression. Maturity means, essentially, how long the bond lasts: when does it pay you back the principal? Bonds typically pay coupons, and then there is a final payment, which we call the face value of the bond. When that final payment takes place, that's the maturity of the bond. So a bond that promises to make a $1,000 final payment in six months has a maturity of six months. A bond that promises to pay $100 a year for 20 years and then a $1,000 final payment in 20 years has a maturity of 20 years. Maturity is different from duration; I don't think I'm going to talk about duration here. Maturity is just: when is the final payment of the bond? Bonds of different maturities each have a price and an associated interest rate. We're going to look at those things.
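Before moving on to yields, here is a numeric check of the limit discussed above: a perpetuity paying z from next year on is worth z over i, which blows up as i goes to 0, while an n-period asset only converges to n times z (assumed numbers):

```python
z, n = 100, 10
for i in [0.10, 0.01, 0.001]:
    perpetuity = z / i                                   # pays z forever, starting next year
    finite = sum(z / (1 + i) ** t for t in range(1, n + 1))
    print(f"i={i}: perpetuity={perpetuity:,.0f}, {n}-year asset={finite:,.1f}")
# the finite asset approaches n*z = 1,000 while the perpetuity explodes
```

With that, back to bonds of different maturities and their prices and rates.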
And the associated interest rate is called the yield to maturity, or simply the yield, of the bond. This is terminology, but we are going to calculate these things later on. The relationship between maturity and yield is called the yield curve. Very important concept; there's a big fuss about the yield curve these days, and I'll talk a little more about that. Sometimes it's called the term structure of interest rates. "Term," in the language of bonds, really means maturity. So the term structure of interest rates tells you the yield on a 1-year bond, a 2-year bond, a 3-year bond, 4, 5, 6, 7. You plot them, and that gives you a curve. For example, look at these two different yield curves, one from November 2000 and one from June 2001. Each tells you the yield on a bond that matures in three months, in six months, and so on, up to 30-year bonds. What is the big difference between them? What do you think happened in between? Notice that the two curves have more or less the same long-term interest rate, but very different shapes: one is a very steep curve and the other is very flat, or even inverted. What do you think happened between November 2000 and June 2001? AUDIENCE: People changed their expectations about interest rates. RICARDO CABALLERO: Yeah, that's for sure true. But look also at the three-month rate: there is very little uncertainty over three months, and it is a lot lower in the second curve. So yes, people changed their expectations, but why do you think they changed them? AUDIENCE: [INAUDIBLE] rising [INAUDIBLE] RICARDO CABALLERO: Rising inflation from here to here? These are nominal interest rates, to be sure. But what happened here is there was a mini-recession, so the Fed cut interest rates. When you are in a recession, the curve tends to look like this, because the central bank is cutting interest rates in the short run to deal with the current recession. What happens 30 years from now has nothing to do with the business cycle today, so that interest rate doesn't need to move a lot, but the Fed brings interest rates down a lot in the front end. So that's the typical shape of the curve in a recession. The other is the typical shape in the opposite situation, where inflation is too high and so on. Because what happens? The Fed, the central bank in general, really controls the very front end of the curve, because it sets the very short-term interest rate. So that is a situation where monetary policy is very tight, because the economy is overheating. And in fact, they got too carried away; that's the reason we ended up in a recession here. OK. How do you think the curve looks today? More like this or more like that? Is inflation low or high today? High. That's the problem; the Fed is trying to hike interest rates. Recently, because of the mess in the banking sector, expectations of interest rates have begun to decline a little, but the curve has been very inverted. Here you are. The green line is today. So it's very inverted. A year ago, it looked like that. You see, the long end hasn't changed much, but a year ago there was no sense that inflation was getting so much out of line.
That came a little later. A year ago there was some concern that interest rates would rise, but now it's very clear that the economy is overheating. If I had plotted the curve from a month ago, it would have been even more inverted. Anyway, that's because the Fed is trying to slow down the economy; it's hiking interest rates. That's the reason the curve is very, very inverted today. So let me calculate these rates. How do we go about it? The first thing we'll do is use the expected present discounted value formula to calculate the price of a bond. Then we'll do it for bonds of different maturities, and we'll construct the yield curve. So suppose you have a bond that pays $100 one year from now and nothing in between: a bond with a 1-year maturity. I'm going to call the price of a bond with a 1-year maturity at time t P1t. That's easy to calculate; it's an expected present discounted value. Given the 1-year interest rate, the price of the bond is 100 divided by 1 plus the 1-year interest rate today. That's the expected present discounted value. So what I'm showing you is the relationship between interest rates and the price of a bond. An important observation: the price of a 1-year bond varies inversely with the current 1-year nominal interest rate. This is all nominal. Why the inverse relationship? In other words, what happens to the price as the nominal interest rate rises, and why? The first part is very easy: it's obvious from the formula that the price comes down if i goes up. But why? Use the concepts we have developed here; remember, we spent like 20 minutes on one slide. Use that slide for the answer. Hint: this $100 you are not receiving today, you are receiving it a year from now. What is the value of a dollar received one year from now when the interest rate is high? It's low, because you'd much rather have the dollar today, invest it, and get that big return. That means, naturally, that a bond paying you $100 in a year is worth less today when the interest rate is very high. You'd rather have the money today and earn the interest rate; put differently, I need to invest 100 times 1 over (1 plus i1t) dollars today to end up with $100. What about the bond that pays $100 in two years? Well, I need to discount by the product of the two interest rates, and since I don't know what the 1-year rate will be one year from now, I have to use an expectation here rather than the actual rate. But look at the notation: I'm calling P2t the price of a 2-year bond, a bond with a maturity of two years, as of time t. This is a bond with no coupons; it just pays you $100 at the end of the two years. Now note that this price is inversely related to both the 1-year rate today and the expectation of the 1-year rate one year from now.
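A tiny sketch of both observations, with assumed rates:

```python
# Price of a one-year $100 zero falls as the one-year rate rises:
for i in [0.01, 0.05, 0.10]:
    print(f"i = {i:.0%}: P1 = {100 / (1 + i):.2f}")

# A two-year $100 zero depends on today's rate and next year's expected rate:
i1_now, i1_next_exp = 0.04, 0.06   # assumed numbers
print(f"P2 = {100 / ((1 + i1_now) * (1 + i1_next_exp)):.2f}")  # 90.71
```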
If either one of those rates goes up, the bond is worth less today: you discount more heavily the $100 received two years from now. Either of them going up is bad news for the price of the bond. Is that clear? So there is an alternative route. What I just did prices bonds using the expected present discounted value approach. It turns out that in practice a lot of asset pricing is done by arbitrage, meaning you compare different assets: assets with similar risks should give you more or less the same return. So let me do this arbitrage thing. Suppose you're considering investing $1 for one year. That's your decision: I have a dollar, and I want to invest it for one year. But I have two options. I can invest the dollar in a 1-year bond; I know exactly what I'm going to get from that bond. Or I can invest in a 2-year bond and sell it at the end of the first year. Those are two ways of investing for one year. Arbitrage comparisons have to be over the same period of time: it's not the return on a bond you hold for 10 years versus one you hold for one year; it has to be a similar investment. Suppose I need to invest for one year. Then, if I have these two bonds, the option is not to buy one or the other and hold each to maturity, because that would compare an investment of one year with an investment of two years. I need to compare strategies that return my money in one year. With the 1-year bond, that's trivial: I get my return at the maturity of the bond. With the 2-year bond, it means I need to sell it after one year. Those are the two strategies I want to compare. And since I'm not considering risk here as a central element, the two strategies have to give me the same expected return. That's what we call arbitrage. So what do we get from these strategies? If I go through the 1-year bond, I know I'm going to get my dollar times 1 plus i1t; that's what I get out of investing $1 in a 1-year bond for one year. If I go through the 2-year bond strategy, buying it and selling it at the end of the year, then I pay P2t today, the price of a 2-year bond, and I expect to get the price of a 1-year bond one year from now. The 2-year bond will be a 1-year bond after a year has passed: it's a 2-year bond today, but after one year it has only one year left to maturity. That's the reason the price I need to forecast is the price of a 1-year bond one year from now. And that's my return on this strategy: I pay this today, and I expect to get that one year from now. OK, so arbitrage means I need to set these two equal: I have to get the same return from the two strategies. Since I'm investing the same amount, I only need to compare the returns; this needs to equal that. That's what I have here, and it tells you that the price of a 2-year bond at time t is equal to the expected price of a 1-year bond at t plus 1, discounted by 1 plus the 1-year interest rate. My cash flow now is just the price I'm going to get for that asset; that's like the z's I had in my formula.
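Here is a sketch verifying that, at the arbitrage price, the two one-year strategies earn the same gross return (rates assumed, matching the earlier example):

```python
i1_now, i1_next_exp = 0.04, 0.06            # assumed rates
p1_next_exp = 100 / (1 + i1_next_exp)       # expected price of a 1-year bond in a year
p2 = p1_next_exp / (1 + i1_now)             # arbitrage price of the 2-year bond today

hold_one_year_bond = 1 + i1_now             # gross return on strategy 1
buy_two_year_and_sell = p1_next_exp / p2    # gross expected return on strategy 2
print(round(hold_one_year_bond, 4), round(buy_two_year_and_sell, 4))  # both 1.04
```

If P2t were anything else, one strategy would dominate the other, which is exactly what arbitrage rules out.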
And for the 1-year strategy, I only needed to worry about the z at t plus 1; there was no dividend at date 0. That's exactly that formula. But notice that at t plus 1, the same pricing holds. At t plus 1, I won't need expectations: I know that P1,t+1 will be equal to 100 divided by 1 plus the 1-year rate at t plus 1. Therefore, the expected price is, approximately, 100 divided by 1 plus the expected 1-year rate at t plus 1. I don't know exactly what the interest rate will be next year, so the best I can do is take an expectation. But now I can stick this expression in here, and I get this expression. Do you recognize it? You saw it before: it's the same expression we got when we used the expected present discounted value formula. We said, for $100 received two years from now, the discount factor is 1 over (1 plus i1t) times (1 plus the expected 1-year rate at t plus 1). Well, that's what I got from the arbitrage logic. This is used a lot in finance. I'm going to say something complicated, but just ignore it if you like; it's not relevant for the quiz. There is a big concern in the US today about Treasury debt, because there is a debt ceiling, meaning there is a maximum amount of debt the government can issue. That ceiling has been moved over time, but every time we get close to a deadline, when it needs to be agreed again, there is concern and there are negotiations and so on. Everyone at this moment thinks that, as in every instance in the past, they're going to reach some sort of agreement the day before the deadline. But if they don't, and there is a mess, this is huge for finance, because US Treasury bonds, especially short-term bonds, are used for pricing everything through arbitrage. A mess there is a mess in every single financial market; you wouldn't know how to price many financial assets, actually. It would be a disaster. The reason I mention it here is that lots of prices in finance, especially derivatives, options, and things like that, are set relative to something else using this type of logic. If the thing you use as the base, the reference, becomes highly unstable, uncertain, and risky, then obviously everything becomes very complicated and very risky, and financial markets do not like risk, that's for sure. Anyway, ignore that; it's irrelevant for your quiz, but that's the reason the whole discussion over the summer can get to be very, very tricky for finance. So, the yield to maturity; remember, I mentioned this concept before. Whenever you hear "the 3-year rate," it's that: the yield to maturity of a 3-year bond. Let me show you a formula that makes it easy to explain. It's defined, and this is important, as the constant annual interest rate that makes the bond price today equal to the present discounted value, or expected present discounted value, of the future payments on the bond.
So notice the highlight there: it's defined as the constant annual interest rate that makes the bond price today equal to the present discounted value of the future payments of the bond. For example, take our 2-year bond. We already know its price from the previous slides; it was based on the short-term 1-year interest rate and our forecast of the 1-year interest rate one year from now. Take that price as a number. Then the yield to maturity is calculated as the constant interest rate that reproduces that price. How do you see the "constant"? Because I'm using the same interest rate for the first period and the second period, and now I'm calling it i2t. It's a 2-year interest rate, and it's constant. Constant doesn't mean it doesn't move over time; it means I'm discounting all the cash flows at a single rate, which means I'm using this equation. OK. So the yield to maturity is found by looking for the interest rate that lets me use this constant-rate formula and get back the same price as I got from the expected present discounted value or the arbitrage approach. That's the definition: you know the price; now you look for the interest rate that matches it. And that's called the yield. Remember the curves I plotted? The interest rates on those curves were computed this way. Now, we know what this price is: by the expected present discounted value or the arbitrage approach, it equals 100 divided by the product of the two gross one-year rates. So this is equal to that, which means the denominators are equal. And that implies, for small interest rates, that the two-year interest rate is approximately equal to the average of the current and expected one-year rates. This is actually called the expectations hypothesis, by the way: the 2-year rate is approximately equal to the average of the 1-year rate this year and the expected 1-year rate one year from now. That's an important concept. I'm going to start from here again in the next lecture.
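To close, a minimal sketch of computing a yield to maturity numerically, by searching for the constant rate that reproduces a given price; the bisection method and the numbers are my illustration, not the lecture's:

```python
def ytm(price, cash_flows, lo=-0.99, hi=10.0, tol=1e-10):
    """Constant annual rate that makes the PDV of cash_flows equal price.
    cash_flows is a list of (years_from_now, payment) pairs."""
    def pv(rate):
        return sum(z / (1 + rate) ** t for t, z in cash_flows)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pv(mid) > price:   # pv falls as the rate rises, so raise the rate
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Two-year zero priced off i_1t = 4% and an expected 6% next year (assumed):
p2 = 100 / (1.04 * 1.06)
print(round(ytm(p2, [(2, 100)]), 4))   # about 0.0500, the average of the two rates
```

The result sits essentially on top of the 5% average, which is the expectations hypothesis approximation in the last paragraph.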