diff --git "a/economics/query.jsonl" "b/economics/query.jsonl"
new file mode 100644
--- /dev/null
+++ "b/economics/query.jsonl"
@@ -0,0 +1,103 @@
+{"query":"In what way do developed countries steal trillions of dollars from developing countries?\n\nAccording to the Guardian, rich Western countries 'steal' large amounts of money from poor countries, much more than they give in development aid.\n\nIf we add theft through trade in services to the mix, it brings total net resource outflows to about $3tn per year. That\u2019s 24 times more than the aid budget. In other words, for every $1 of aid that developing countries receive, they lose $24 in net outflows. These outflows strip developing countries of an important source of revenue and finance for development. The GFI report finds that increasingly large net outflows have caused economic growth rates in developing countries to decline, and are directly responsible for falling living standards.\n\nI would like to know in what way the money is \u2018stolen\u2019, if this is illegal and if this is indeed \u2018directly responsible for falling living standards\u2019.","reasoning":"The way the money is stolen refers to same-invoice faking by multinational companies, which is actually tax avoidance. Then the question becomes how the tax avoidance is related to falling living standards in developing countries.","id":"0","excluded_ids":["N\/A"],"gold_ids_long":["stole\/ajol.txt"],"gold_ids":["stole\/ajol_6.txt","stole\/ajol_5.txt","stole\/ajol_4.txt","stole\/ajol_3.txt","stole\/ajol_2.txt","stole\/ajol_1.txt","stole\/ajol_0.txt"],"gold_answer":"$\\begingroup$\n\n**On Theft**\n\nThe same article you link to says that what it calls 'theft' is the practice of multinationals dodging taxes. 
The article explains it in the two paragraphs before:\n\n> Multinational companies also steal money from developing countries through \u201csame-invoice faking\u201d, shifting profits illegally between their own subsidiaries by mutually faking trade invoice prices on both sides. For example, a subsidiary in Nigeria might dodge local taxes by shifting money to a related subsidiary in the British Virgin Islands, where the tax rate is effectively zero and where stolen funds can\u2019t be traced.\n\nTax dodging is not theft according to the dictionary definition, but I believe the author is using the word theft metaphorically here (e.g. like when Nozick said 'taxation is theft' or when Proudhon said 'property is theft'; of course it's not meant literally).\n\n**On the Impact of Tax Dodging on Economic Development**\n\nThis is a topic that is not as well understood as some other areas of economics, due to lack of data. However, what research there is suggests that tax avoidance negatively impacts economic development (e.g. see reviews of the literature such as: [ Fuest & Riedel, 2009 ](https:\/\/rybn.org\/thegreatoffshore\/THE%20GREAT%20OFFSHORE\/7.RESOURCES\/ACADEMIC%20PAPERS\/TAX%20EVASION\/Tax%20evasion,%20tax%20avoidance%20and%20tax%20expenditures%20in%20developing%20countries%20-%20A%20review%20of%20the%20literature.pdf) ; [ Cobham, 2005 ](https:\/\/rybn.org\/thegreatoffshore\/THE%20GREAT%20OFFSHORE\/7.RESOURCES\/ACADEMIC%20PAPERS\/TAX%20EVASION\/Tax%20evasion,%20tax%20avoidance%20and%20tax%20expenditures%20in%20developing%20countries%20-%20A%20review%20of%20the%20literature.pdf) ).\n\nThe reason why tax avoidance harms economic development is that it reduces the government revenue that could in principle be used for the provision of public goods: for example, an effective police force that protects people and their property, or effective courts, etc. 
In turn, these public goods and the institutions they support are crucial to economic development (e.g. see [ Acemoglu and Robinson 2012 ](https:\/\/scholar.harvard.edu\/jrobinson\/publications\/why-nations-fail-origins-power-prosperity-and-poverty) ). However, what is less clear is whether, in a counterfactual world where they received this extra tax revenue, some developing countries would actually spend it on such public goods, or would instead use it to prop up a corrupt regime (e.g. the way oil revenue is often used by dictatorships to stay in power).\n\nThat caveat aside, it is not unreasonable to state that such tax dodging probably harms economic development. Even if the evidence is not completely conclusive, it is thought more likely than not to have a negative effect."} +{"query":"Which of these two lotteries will a consumer with von Neumann-Morgenstern preferences choose under an exponential distribution?\n\nConsider two lotteries, each having an exponential distribution. The cumulative distribution function of an exponential distribution is:\n\n$F(x;\\lambda)=1-e^{-\\lambda x} \\quad \\forall x\\in\\mathbb{R}_+.$\n\nThe expected gain given by this distribution is $E[X]=\\frac{1}{\\lambda}$.\n\nSuppose that the first lottery, $F_1$, has parameter $\\lambda_1$ and the second, $F_2$, has parameter $\\lambda_2$. Suppose that $\\lambda_1<\\lambda_2$. Which of these two lotteries will a consumer with von Neumann-Morgenstern preferences choose?\n\nI am confused by this question.","reasoning":"The expected value is different from the expected utility; the decision-maker need not be risk-neutral. Probably, the assumption is that the decision-maker prefers more money to less. Then, you can solve this using first-order stochastic dominance. Just compare the CDFs pointwise. 
We need to look up stochastic dominance.","id":"1","excluded_ids":["N\/A"],"gold_ids_long":["stochastic_dominance\/Stochasticdominance.txt"],"gold_ids":["stochastic_dominance\/Stochasticdominance_8.txt","stochastic_dominance\/Stochasticdominance_6.txt","stochastic_dominance\/Stochasticdominance_4.txt","stochastic_dominance\/Stochasticdominance_5.txt","stochastic_dominance\/Stochasticdominance_7.txt","stochastic_dominance\/Stochasticdominance_3.txt"],"gold_answer":"$\\begingroup$\n\nAs already mentioned in the comments, the question appears to be framed in a way that you should look up [ **stochastic dominance** ](https:\/\/en.wikipedia.org\/wiki\/Stochastic_dominance) and contemplate its relation to expected utility theory; that can lead you to an answer.\n\nTwo aspects of the question worth pointing out:\n\n 1. The payoffs per lottery span the whole of $\\mathbb R_+$ ; this is the meaning of "the lottery has a [XXXX continuous] distribution". \n\n 2. The question does not specify a utility function, only that the consumer has von Neumann-Morgenstern preferences. So apparently, the correct answer does not depend on, say, specific preferences about risk, but should hold more generally."} +{"query":"Can GDP ever be negative?\n\nImagine that in an economy we only produced a toy and that it cost us 10 dollars. We don\u2019t sell this toy in year 1. Then this goes under Investment as +10 and GDP is +10. But say the only thing we do in year 2 is sell this toy at a loss for 8 dollars. 
Then investment is now -10 but consumption is +8, so GDP in year 2 is -2.\n\nIs this a valid example of negative GDP?","reasoning":"This is essentially a pathological example of the inventory change","id":"2","excluded_ids":["N\/A"],"gold_ids_long":["negative_gdp\/chapter07pdf.txt"],"gold_ids":["negative_gdp\/chapter07pdf_0.txt","negative_gdp\/chapter07pdf_2.txt","negative_gdp\/chapter07pdf_1.txt"],"gold_answer":"$\\begingroup$\n\nYes, theoretically, GDP could be negative, and you have described a pathological case where that happens. Gross value added in your example over the two periods is still positive, but the particular distribution of net value added over the two periods makes GDP in the second period negative.\n\nYou could think about this as a two-stage production process. In the first step, taking place in period 1, there is net value added of $10 in production of toys not yet delivered to retailers. Thus, GDP in period 1 is 10 dollars. In period 2, the retailers can only sell these goods for 8 dollars, so their net value added is -2 dollars. If this is the only economic activity, GDP in period 2 would be -2 dollars.\n\nSidenote: You have essentially described a pathological example of the inventory change example outlined in the BEA's NIPA handbook, Chapter 7 ( [ https:\/\/www.bea.gov\/resources\/methodologies\/nipa-handbook\/pdf\/chapter-07.pdf ](https:\/\/www.bea.gov\/resources\/methodologies\/nipa-handbook\/pdf\/chapter-07.pdf) )"} +{"query":"Have governments ever defaulted on their public domestic debt or treasury bonds?\n\nRecently, some countries have experienced hyperinflation, such as Lebanon, Zimbabwe, and Venezuela.\n\nHas there ever been a time when a government defaulted on its treasury bond payments due to a country's economic state, such as hyperinflation, or for some national or international political issue? 
By default, I mean failing to pay local and international treasury bondholders.\n\nWhen Lebanon prevented Lebanese citizens from withdrawing their money from banks, did it also stop paying its government bonds?","reasoning":"To understand whether governments have ever defaulted, we can check examples of countries that defaulted.","id":"3","excluded_ids":["N\/A"],"gold_ids_long":["public_debt_default\/ssrn.txt"],"gold_ids":["public_debt_default\/ssrn_7.txt"],"gold_answer":"$\\begingroup$\n\n> Has there ever been a time when a government defaulted on its treasury bond payments due to a country's economic state, such as hyperinflation, or for some national or international political issue?\n\nYes. Since default leads to an increased cost of borrowing, often also to capital flight, and a host of other economic issues, countries do not default on their debt just for fun but only when they are in economic difficulties. Hyperinflation is often a way countries try to avoid default, but when a country decides that the cost of hyperinflation is worse than the cost of default, it might choose to default.\n\nHere are some examples of countries defaulting from [ Reinhart and Rogoff 2011 ](https:\/\/www.researchgate.net\/publication\/5188928_The_Forgotten_History_of_Domestic_Debt) :\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/jQS5y.jpg) ](https:\/\/i.sstatic.net\/jQS5y.jpg)\n\n> When Lebanon prevented Lebanese citizens from withdrawing their money from banks, did it also stop paying its government bonds?\n\n[ Lebanon did default ](https:\/\/www.fitchratings.com\/research\/sovereigns\/lebanon-22-09-2023#:%7E:text=Eurobond%20Default%20Continues%3A%20Lebanon%20remains,of%20Eurobonds%20pending%20a%20restructuring.) on its Eurobonds on 9 March 2020. 
Lebanon started with [ capital controls in 2019 ](https:\/\/www.state.gov\/reports\/2020-investment-climate-statements\/lebanon\/#:%7E:text=Since%20October%202019%2C%20Lebanon%27s%20financial,banks%20are%20denominated%20in%20dollars.) , so the default and the capital controls did not happen at the same time; however, capital controls are a typical instrument that governments close to default implement to prevent capital flight."} +{"query":"How to understand wages in poor countries?\n\nOften when a news story is reporting on economic conditions in poor countries, they'll talk about people earning meager wages like $10\/day or even only $10\/month (I think I heard that in a story about the immigration crisis a few days ago, referring to conditions in the South American country people were coming from). To an American like me, this sounds ridiculous; that's less than the price I pay for a sandwich at lunch. Homeless beggars almost certainly make more than this. It's two orders of magnitude less than the US poverty line (a bit more than $1k\/month).\n\nBut surely living expenses must be significantly lower in these countries, or these people couldn't even survive long enough to be interviewed. Does it really make sense to compare earnings between such different countries? ","reasoning":"In different countries prices are different, so 1 dollar does not have the same purchasing power. However, there are ways to adjust for this. The most common way is to use Purchasing Power Parity (PPP) conversion rates. 
We can check statistics of wages after conversion.","id":"4","excluded_ids":["N\/A"],"gold_ids_long":["purchasing_power\/purchasingpowerparitiesppphtm.txt"],"gold_ids":["purchasing_power\/purchasingpowerparitiesppphtm_5.txt","purchasing_power\/purchasingpowerparitiesppphtm_1.txt"],"gold_answer":"$\\begingroup$\n\n> Does it really make sense to compare earnings between such different countries?\n\nNo, it doesn't. As you mentioned, prices are different in different countries, so 1 dollar does not have the same purchasing power in the US as in, let's say, the Maldives.\n\nHowever, there are ways to adjust for this. The most common way is to use Purchasing Power Parity (PPP) conversion rates (e.g. see [ OECD data ](https:\/\/data.oecd.org\/conversion\/purchasing-power-parities-ppp.htm) ). If the article you read used this adjustment then you can compare the salaries to the US. There are still some caveats: these conversion rates are based on averages; clearly a cup of coffee will be more expensive in NY or San Francisco than in some midwestern flyover town, and the same holds within developing countries. Nonetheless, as long as you keep this in mind, PPP conversion allows you to look at differences while controlling for this issue.\n\n> Are they reporting earnings like this just to garner sympathy, by making the wages seem like practically nothing?\n\nThere is no need to assume bad faith. Most people are mathematically, statistically and economically illiterate. The typical journalist does not have a degree in statistics or economics. Moreover, most journalism degrees do not require any statistics education, and it is widely acknowledged that most journalists fail miserably when reporting on statistical results (e.g. see [ Nguyen & Lugo-Ocando, 2016 ](https:\/\/eprints.whiterose.ac.uk\/89527\/3\/P1%20-%20Introduction.pdf) ). 
More generally, you just can't make inferences from raw data alone, whoever presents it.\n\nIf you see such reporting from an economics journalist, without justification or the use of something like PPP conversion rates, you could assume bad faith; but typically it will just be due to a lack of statistical and economic education, not some ulterior motive.\n\n> Or are they really earning practically nothing, and just barely scraping by somehow?\n\nEven when you account for PPP, there are still places where people are just barely scraping by somehow. Although most of the absolute poverty around the world was eradicated in the last 100 years, there is still a considerable number of people living in extreme poverty (e.g. see the map in [ Our World in Data ](https:\/\/ourworldindata.org\/explorers\/poverty-explorer?time=2019&facet=none&country=NGA%7EMOZ%7EBOL%7EKEN%7EBGD%7EBOL%7EGEO%7EZMB&Indicator=Share%20in%20poverty&Poverty%20line=%242.15%20per%20day%3A%20International%20Poverty%20Line&Household%20survey%20data%20type=Show%20data%20from%20both%20income%20and%20consumption%20surveys&Show%20breaks%20between%20less%20comparable%20surveys=false) ).\n\nHence, depending on the country you are reading about, there might still be a large number of people in extreme poverty. However, two things can be true at the same time: that there is extreme poverty in some countries, and that the news might misreport on such poverty due to a lack of statistical literacy."} +{"query":"Could Gaza have been \"one of the wealthiest states on the Mediterranean\"?\n\nAccording to this transcript (I don't have access to the original interview), Robert Kennedy told Krystal Ball:\n\nThe international aid agencies have given Hamas, have given Gaza more than 10 times per capita what we gave to rebuild all of Europe after the Marshall Plan. 8,300 dollars per capita, every person in Gaza. We rebuilt Europe with $621 per capita in Europe, and we rebuilt it. 
What did they do with that money? Instead of using it to make this, you know, Gaza this beautiful country with white sand beaches. It should be a paradise...They take virtually all the money and they steal it...It\u2019s not Israel\u2019s fault that Gaza is poverty-stricken. Gaza should be one of the wealthiest states on the Mediterranean.\n\nIf Gaza truly received US$8,300 per capita in international aid over time, would that indeed have made it - barring diversion of funds for non-rebuilding purposes - one of the wealthiest states on the Mediterranean? (I realize Gaza does not have independent statehood, and I'm assuming Kennedy's intent was that if it were a state it would be among the wealthiest.)","reasoning":"The aid itself would not make Gaza the richest Mediterranean nation. To be clear, Gaza could be the richest Mediterranean nation in principle, if pro-growth economic policies were pursued over the last half-century or so, but this is impossible to achieve through economic aid alone. We can check how foreign aid affects other countries.","id":"5","excluded_ids":["N\/A"],"gold_ids_long":["gaza_aid\/2016246Mooliopdf.txt"],"gold_ids":["gaza_aid\/2016246Mooliopdf_4.txt","gaza_aid\/2016246Mooliopdf_3.txt","gaza_aid\/2016246Mooliopdf_6.txt","gaza_aid\/2016246Mooliopdf_7.txt","gaza_aid\/2016246Mooliopdf_2.txt","gaza_aid\/2016246Mooliopdf_8.txt","gaza_aid\/2016246Mooliopdf_0.txt","gaza_aid\/2016246Mooliopdf_9.txt","gaza_aid\/2016246Mooliopdf_5.txt","gaza_aid\/2016246Mooliopdf_1.txt"],"gold_answer":"$\\begingroup$\n\nThe aid itself would not make Gaza the richest Mediterranean nation. To be clear, Gaza could be the richest Mediterranean nation in principle, if pro-growth economic policies were pursued over the last half-century or so, but this is impossible to achieve through economic aid alone.\n\n 1. 
The literature that looks at case studies of economic development aid shows mixed results, in some cases finding that development aid can actually hurt economic growth (e.g. see [ Moolio & Kong, 2016 ](https:\/\/doi.org\/10.30958\/ajbe.2.4.6) ; [ Sothan, 2018 ](https:\/\/doi.org\/10.1080\/09638199.2017.1349167) ; [ Tang & Bundhoo, 2017 ](https:\/\/doi.org\/10.1080\/09638199.2017.1349167) ). \n\n 2. Cross-country studies show that the effect of development aid on economic growth is (e.g. Abate 2021; [ Ekanayake et al 2010 ](https:\/\/scholar.google.nl\/scholar?q=effect%20of%20aid%20on%20economic%20growth&hl=en&as_sdt=0&as_vis=1&oi=scholart#d=gs_cit&t=1703690270132&u=%2Fscholar%3Fq%3Dinfo%3AgdJuUlzIinUJ%3Ascholar.google.com%2F%26output%3Dcite%26scirp%3D0%26hl%3Den) ; [ this overview ](https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/23322039.2022.2062092#:%7E:text=Large%20number%20of%20studies%20found,countries%20than%20middle%2Dincome%20countries.) ): \n\n * Dependent on institutional quality (e.g. how well property rights are protected, contracts enforced, etc.). \n * Non-linear: extremely poor countries benefit more than less poor countries. This implies that a country can't really become rich thanks to development aid alone. \n\nHence, if you want to interpret the quote literally, then it's clearly not rooted in sound economics. Aid can at best help with economic growth when the country is very poor, but as a country approaches middle and high income levels the benefits to economic growth become very small. Moreover, these benefits are contingent on having good institutions.\n\nHowever, if you want to interpret the quote more loosely, as Gaza having the potential to become the richest Mediterranean country if sound policies, like not diverting development aid, were pursued, then there is truth to it. It is the general consensus in economics that a country does not need any natural resources or a large size to grow. 
Countries like Singapore are prime examples of that."} +{"query":"Why do governments issue bonds rather than just increasing tax or printing money?\n\nAssuming a government needs to raise money, what are the pros and cons of increasing taxes or printing more money or selling bonds?","reasoning":"One disadvantage of money growth is inflation.","id":"6","excluded_ids":["N\/A"],"gold_ids_long":["printmoney_inflation\/vol.txt"],"gold_ids":["printmoney_inflation\/vol_8.txt","printmoney_inflation\/vol_2.txt","printmoney_inflation\/vol_0.txt","printmoney_inflation\/vol_1.txt","printmoney_inflation\/vol_3.txt","printmoney_inflation\/vol_6.txt","printmoney_inflation\/vol_5.txt","printmoney_inflation\/vol_4.txt","printmoney_inflation\/vol_7.txt"],"gold_answer":"$\\begingroup$\n\nIn order to answer this question, first let me quickly provide some relevant context:\n\n 1. Most modern governments cannot print money directly. They can do it indirectly: most central banks (which are government institutions) can buy government bonds with newly created reserves. Some central banks, such as the ECB, even have the power to actually print money. This is analogous to financing the government by printing new money. However, in developed countries central banks are nominally independent (like courts or the police force) from the central government. Hence this option is, in most countries, only available if technocrats at the central bank are willing to play ball with the government. \n\n 2. Even in cases where the executive power in the government does control the money supply, or the central bank is willing to play ball, monetizing government finances results in higher inflation. 
There are numerous empirical studies that show there is a (sometimes endogenous) causal effect of the money supply on inflation (e.g. see [ Grauwe & Polan (2005) ](https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1111\/j.1467-9442.2005.00406.x?casa_token=eB3vl6f0WOMAAAAA:6uR2-Wov04WFV2K6ZBLO--Az7Z5KWLLPgiXfLMy-IrL3xrvZShOwiDC-gU_I4_me4ak5vI39UFEspwZA) or [ Frain (2004) ](https:\/\/www.esr.ie\/ESR_papers\/vol35_3\/Vol%2035_3Frain.pdf) ). So there is a trade-off here between monetizing spending and inflation. \n\n 3. Issuing more bonds is costly since the bonds have to be repaid with interest (unless sold to the central bank, in which case point 2 applies). \n\nHence there is no free lunch. The government's budget constraint is given by:\n\n$$G = T + \\theta + B$$\n\nwhere $G$ is government spending, $T$ taxes net of interest payments, $\\theta$ high-powered money creation (i.e. monetizing government finances) and $B$ government borrowing.\n\nNow to circle back to your question:\n\n> what are the pros and cons of increasing taxes or printing more money or selling bonds?\n\nThe pro of using $B$ or $\\theta$ is that the government does not have to raise taxes now; raising taxes is unpopular, depresses _current_ economic activity, and creates a deadweight loss, which is just pure waste for society. The con of $B$ is that the government will have to raise even more taxes in the future (to cover the spending plus interest). Finally, the con of $\\theta$ is higher inflation."} +{"query":"Transitivity of Preferences paper\n\nI am going through some of my old grad school notes, and in my microeconomics notes on transitive preferences, the teacher made a note of a behavioral economics result where, when presented with two dates, A and B, a person might say they would strictly prefer A to B. Then when presented with A, B, and B', where B' was slightly uglier than B, suddenly the preferences might turn into B preferred to A preferred to B'.\n\nDoes anyone know what paper this could be taken from? 
My notes were written in September 2015, so the paper would have to be from earlier than that.","reasoning":"The central question is to understand the behavior, which is related to the decoy effect, whereby people tend to exhibit a specific change in preference between two options when also presented with a third option that is strictly worse.","id":"7","excluded_ids":["N\/A"],"gold_ids_long":["decoy_effect\/Decoyeffect.txt"],"gold_ids":["decoy_effect\/Decoyeffect_4.txt","decoy_effect\/Decoyeffect_5.txt"],"gold_answer":"$\\begingroup$\n\nI do not know the paper you are referring to, but the behavioural bias is called the **attraction effect** or decoy effect ( [ wiki ](https:\/\/en.wikipedia.org\/wiki\/Decoy_effect) ). It is well known in behavioural economics and psychology, so it should be easy to find sources for it."} +{"query":"Would a GDP measure be improved by excluding foreign interest paid?\n\nThe income method of calculating GDP is as follows: GDP = wages + profits + rents + interest + depreciation + taxes + NFFI. If an economy has high external debt, for instance because it used external financing to buy machinery and equipment, then foreign interest payments will be high. In that case, wouldn't GDP (per capita) be a poor measure of economic well-being, since a significant portion of the generated income is leaving the home economy?\n\nWouldn't excluding foreign interest payments in calculating GDP\/income give us a better measure, since this represents what income is available to the home economy?","reasoning":"The essential question is about the linkage between GDP and wellbeing. We can check how each is measured.","id":"8","excluded_ids":["N\/A"],"gold_ids_long":["gdp_wellbeing\/wellbeingandgdphtm.txt"],"gold_ids":["gdp_wellbeing\/wellbeingandgdphtm_10.txt","gdp_wellbeing\/wellbeingandgdphtm_9.txt","gdp_wellbeing\/wellbeingandgdphtm_8.txt"],"gold_answer":"$\\begingroup$\n\n 1. GDP is a measure of output, not a direct measure of wellbeing. 
You can use it as a proxy for wellbeing because it has a very high correlation with broader measures of wellbeing (e.g. see this [ OECD explainer ](https:\/\/www.oecdbetterlifeindex.org\/blog\/well-being-and-gdp.htm#:%7E:text=The%20chart%20below%20demonstrates%20this,being%20is%20higher%20on%20average.) ). \n\n 2. If a country, on a net basis, pays income to foreigners, such as interest payments, NFFI will be negative. \n\nNFFI = income of our citizens from foreign countries - income of foreign citizens from our country.\n\nSo in your example, GDP as you have defined it is lower if the interest income is paid to other countries."} +{"query":"Is there a diagram outlining the life of a dollar?\n\nI would like to understand the economic system concretely. My reading leads quickly into graphs and theory, which is interesting but not what I'm looking for. Instead, I imagine I should be able to trace the path of a dollar from my pocket through the major components of the economic system (the central bank, commercial banks, investors, companies, tax, government spending, government borrowing...)","reasoning":"This is related to the circular flow, in which the major exchanges are represented as flows of money, goods and services, etc.","id":"9","excluded_ids":["N\/A"],"gold_ids_long":["dollar_flow\/Circularflowofincome.txt"],"gold_ids":["dollar_flow\/Circularflowofincome_16.txt","dollar_flow\/Circularflowofincome_14.txt","dollar_flow\/Circularflowofincome_13.txt","dollar_flow\/Circularflowofincome_4.txt","dollar_flow\/Circularflowofincome_15.txt","dollar_flow\/Circularflowofincome_5.txt","dollar_flow\/Circularflowofincome_12.txt"],"gold_answer":"$\\begingroup$\n\nThis is called the **circular flow of income** . 
A simple model demonstrates the exchanges between households, businesses and government:\n\n[ ![Three-sector circular flow diagram](https:\/\/i.sstatic.net\/Y2R7t.jpg) ](https:\/\/i.sstatic.net\/Y2R7t.jpg)\n\nMore complex models include additional _economic agents_ such as the foreign sector (imports, exports, foreign investments) and the financial sector (banks). Some of these can be sources of money (banks can "create" money through credit).\n\nThe original image is in fact such a diagram. (Thanks @1muflon1 for pointing this out.) You probably want to start with one of the simple diagrams and add details from there.\n\nSource: [ https:\/\/en.wikipedia.org\/wiki\/Circular_flow_of_income ](https:\/\/en.wikipedia.org\/wiki\/Circular_flow_of_income)"} +{"query":"US population growth: \"natural increase\" vs \"fertility rate\"\n\nBased on the 2017 US Census population projections through 2060, specifically table 1 (Projected population size and births, deaths, and migration), the \"natural increase,\" which is US births - US deaths, was positive in 2017 at over 1 million more births than deaths, and was projected to remain positive through 2060, reaching a minimum of 400,000 in 2050 before beginning to increase again.\n\nOn the other hand, the fertility rate per woman for the US was 1.8 in 2017, according to the World Bank. This is well below the replacement rate of 2.1.\n\nHow is it possible that the US could have over 1 million more births than deaths, and that a positive natural increase would be projected to continue for 40 years, while being well below the replacement fertility rate?\n\nThis question is related, but the answers there indicate that it would be a temporary situation due to population momentum. However, here the Census Bureau is predicting it will continue for 40 years.","reasoning":"This is also affected by immigrants. 
We need to take their numbers into account.","id":"10","excluded_ids":["N\/A"],"gold_ids_long":["uspopulationgrowth\/ImmigrantsComingAmericaOlderAges.txt"],"gold_ids":["uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_7.txt","uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_5.txt","uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_6.txt","uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_2.txt","uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_4.txt","uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_8.txt","uspopulationgrowth\/ImmigrantsComingAmericaOlderAges_3.txt"],"gold_answer":"$\\begingroup$\n\n**Short answer:**\n\nThe \u201cnatural increase\u201d in population, defined as births in the U.S. minus deaths in the U.S., in the spreadsheet you linked is affected by immigration.\n\n**Longer answer:**\n\nDefinition of TFR...\n\n\u201csum of age-specific fertility rates (usually referring to women aged 15 to 49 years)\u201d\n\nSource... [ https:\/\/www.who.int\/data\/gho\/indicator-metadata-registry\/imr-details\/123 ](https:\/\/www.who.int\/data\/gho\/indicator-metadata-registry\/imr-details\/123)\n\nThe TFR in the U.S. in 2017 was 1.7.\n\nSource... [ https:\/\/data.worldbank.org\/indicator\/SP.DYN.TFRT.IN?locations=US ](https:\/\/data.worldbank.org\/indicator\/SP.DYN.TFRT.IN?locations=US)\n\nThe 2017 TFR was calculated from age-specific fertility rates of women born between 1968 and 2002 or a similar period. Since it uses historical data we do not need to question it. We can be critical of how it can be applied as a guide to the future if we so choose.\n\nUnder the same method, births in 2060 depend upon women born between 2011 and 2045, and their TFR is unknown. Suppose we assume it will still be 1.7, because we want to assume that a huge change such as the one between 1960 and 1976 will not happen again. 
Births in 2060 will also depend upon immigration of women born between 2011 and 2045, and we only have data on 2011 through 2023. That means births in 2060 depend upon unknown amounts of future immigration between 2023 and 2045. A 30 year old immigrant today was born in 1993, so she won\u2019t directly affect births in 2060. However, she can potentially give birth to a female in the next several years, and that child will likely be in the child-bearing age range in 2060. (Immigrants often apply for family re-unification so some children might arrive later, and about half of those children are female. An infant female arriving today will be 37 in 2060, and that is in the child-bearing age range.)\n\nDeaths in 2060 depend upon births around the year 1980, whether born in the U.S. or born elsewhere and immigrated at any time before 2060. That means deaths in 2060 depend upon unknown amounts of future immigration between 2023 and 2060. A 30 year old immigrant today still has a good chance of being alive in 2060 at the age of 67. (Immigrants often apply for family re-unification so some children might arrive later, and they have a smaller chance of dying in 2060.)\n\nImmigrants tend to be young, **at the time they immigrate** , so the \u201cnatural increase\u201d, as defined, can be positive between 2023 and 2060 even if the TFR remains 1.7 from 2017 to 2060.\n\nA quote...\n\n\u201cThe average age of newly arrived legal and illegal immigrants was 31 years in 2019\u201d\n\nSource... [ https:\/\/cis.org\/Report\/Immigrants-Coming-America-Older-Ages ](https:\/\/cis.org\/Report\/Immigrants-Coming-America-Older-Ages)"} +{"query":"Specific term for disincentivized \"doing something first\"\n\nOf course we can use \"moral hazard\", but we can use \"moral hazard\" for many things. Tragedy of the commons is a moral hazard (maximize your utility before the rest of the public wises up). 
I want a greater degree of specificity, so I'm looking for a term that is comparable to tragedy of the commons (in other words I want it to sound legit and still be specific).\n\nThe situation is:\n\nOne group innovates, the others copy the first group. Importantly, the groups that copy reap similar rewards to the first group. The result is an environment that does not foster innovation.\n\nI initially thought this was a type of tragedy of the commons but it's not what's being used up, it's about what never gets initiated and worked on.\n\nQuestion\nIs there an economics term to describe this type of situation? Either macro or micro economic setting would work. (Note: it doesn't have to be about \"innovation\" per se, just this general mechanism that weighs on the first group because it knows other groups will copy.)","reasoning":"If I understand it correctly, you are making these core assumptions:\n\nIf nobody innovates, no profits are generated.\n\nInnovating pays off, but is costly.\n\nCopying pays off as well, and is not costly, but it needs an innovator to copy from.\n\nThe simplest model of this is a static normal form game: There are \ud835\udc5b\u22652\n groups. Each group can either choose \ud835\udc3c\n (innovate) or \ud835\udc36\n (copy). Choosing \ud835\udc3c\n has a constant payoff of 1\n. 
Choosing \ud835\udc36\n has a payoff of \ud835\udc4f>1\n if at least one group chooses \ud835\udc3c\n, but a payoff of 0\n if nobody chooses \ud835\udc3c.","id":"11","excluded_ids":["N\/A"],"gold_ids_long":["volunteerdillema\/174243pdfrefreqidfastlydefault3A88b79b5167e1aaf791f6cb7976234486absegmentsorigininitiatoracceptTC1.txt"],"gold_ids":["volunteerdillema\/174243pdfrefreqidfastlydefault3A88b79b5167e1aaf791f6cb7976234486absegmentsorigininitiatoracceptTC1_0.txt","volunteerdillema\/174243pdfrefreqidfastlydefault3A88b79b5167e1aaf791f6cb7976234486absegmentsorigininitiatoracceptTC1_1.txt"],"gold_answer":"$\\begingroup$\n\nIf I understand it correctly, you are making these core assumptions:\n\n 1. If nobody innovates, no profits are generated. \n\n 2. Innovating pays off, but is costly. \n\n 3. Copying pays off as well, and is not costly, but it needs an innovator to copy from. \n\nThe simplest model of this is a static normal form game: There are $n\\ge 2$\ngroups. Each group can either choose $I$ (innovate) or $C$ (copy).\nChoosing $I$ has a constant payoff of $1$ . 
Choosing $C$ has a payoff of\n$b>1$ if at least one group chooses $I$ , but a payoff of $0$ if nobody\nchooses $I$ .\n\nThis game is known as the [ _Volunteer's Dilemma_\n](https:\/\/www.jstor.org\/stable\/174243) .\n\n(BTW: Neither the Volunteer's Dilemma nor the Tragedy of the Commons are\ninstances of _[ moral hazard ](https:\/\/en.wikipedia.org\/wiki\/Moral_hazard) _\nproperly understood.)"} +{"query":"KPIs re-indexing after some years\n\nI just realized that eurostat CPI index has three values of reference:\n\n1996, 2005, 2015\n\nhttps:\/\/ec.europa.eu\/eurostat\/databrowser\/view\/PRC_HICP_MANR__custom_120601\/bookmark\/table?lang=en&bookmarkId=952bcf60-22e8-433b-ab93-fe85e2ab2367\n\nhttps:\/\/ec.europa.eu\/eurostat\/cache\/metadata\/en\/prc_hicp_esms.htm#ref_period1682076048145\nBeing myself an engineer, not that familiar with economy KPIs, I wonder:\n\nWhat need does this re-index tries to solve, or why is it performed?\n\nThank you so much!!\n\n","reasoning":"This is related to base shift, where new products are introduced, leading to changes in CPI calculation.","id":"12","excluded_ids":["N\/A"],"gold_ids_long":["cpi_shift\/consumerpriceindex.txt"],"gold_ids":["cpi_shift\/consumerpriceindex_4.txt"],"gold_answer":"$\\begingroup$\n\nThis is so-called base shift. It was typically done every 5-10 years by\nvarious statistical offices, although now some do it on yearly basis.\n\nThe reason why this is done is that CPI does not actually look at all prices\nbut just some sample of representative prices. This had to be done because,\nespecially in the past, but to some extent even now, data collection is\nextremely expensive so statistical offices would not be able to afford to\ntrack all prices. 
Nowadays, there are indices that are more comprehensive, like\n[ the billion prices project ](https:\/\/thebillionpricesproject.com\/), but such\nmethodology is not widely used by statistical offices yet.\n\nHowever, using a representative basket of goods introduces a problem when consumer\nspending patterns shift, since to make the index consistent you need to compare\nthe prices of the same basket of goods. So once in a while the statistical office\nselects a new base and recalculates everything in terms of the basket of goods\nused in that year. This can be done even to the old data, since what usually\nchanges are the shares of how much households spend on, let's say, food vs travel,\netc. This is pretty much the main or even the only reason; other than this, the\nselection of the base is purely arbitrary, but you might change it once in a while to\nmake graphs from raw data more readable (of course any researcher can easily\nrebase the CPI, but I suppose many sites that source data from statistical\noffices just take the default raw data).\n\nThis [ CBS website ](https:\/\/www.cbs.nl\/en-gb\/background\/2005\/26\/consumer-\nprice-index) also further explains the base shift if you are interested."} +{"query":"Log Transformation of Zeros (Slave Trade Data)\n\nI am trying to log transform data on slave trade voyages from the Database by David Eltis, The Trans-Atlantic Slave Trade Database.\n\nThe Problem I am facing: Many values are 0. My first idea was to assign value 1 to all values which are 0, so after log transforming they are 0. However, since I am taking average voyages per year, for many years the value is < 1, leading to negative values after the log transformation.\n\nSo now I have smaller values for years in which there was slave trade (but <1) compared to years when there was no slave trade.\n\nHow does one address such a problem?","reasoning":"One alternative is to use another transformation that is \"sufficiently\" close to the log transform. 
The inverse hyperbolic sine function is very popular.","id":"13","excluded_ids":["N\/A"],"gold_ids_long":["hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf.txt"],"gold_ids":["hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_3.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_6.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_2.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_1.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_4.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_0.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_5.txt","hyperbolicsine\/BellemareWichmanIHSFebruary2019pdf_7.txt"],"gold_answer":"$\\begingroup$\n\nTaking the logarithm of $0$ is impossible. This is a frequently occurring\nproblem for econometricians who insist on using the log transform while\nhaving data with (many) zeros.\n\nThere are a few options, none of which is entirely satisfying:\n\n * drop the data for which the dependent observable is zero (this raises issues of sample bias). \n * Use another transformation that is \"sufficiently\" close to the log transform. The inverse hyperbolic sine function is very popular (however one should be careful in interpreting the coefficients, see for example [ Bellemare and Wichman (Elasticities and the Inverse Hyperbolic Sine Transformation) ](https:\/\/marcfbellemare.com\/wordpress\/wp-content\/uploads\/2019\/02\/BellemareWichmanIHSFebruary2019.pdf) ).\n * Use a different econometric specification. If you have count data (like in your case), a Poisson (or negative binomial) specification might be more appropriate. \n\nSimply adding a 1 to all data to avoid observations with 0 is (I think) not a\nvery good idea as it is unclear how to interpret the estimated coefficients.\nHowever, there might be others that disagree with me on this."} +{"query":"Samsung's contribution to South Korea's GDP\n\nMany articles or sources mention that Samsung's contribution to South Korea's GDP is approximately 20%. 
Checking its financial statements in 2022, for example, I see that the revenues amount to 234 billion US dollars, which only represents 13% of South Korea's GDP (1,665 billion US dollars according to the World Bank). Since the value added must be less than revenues, the contribution to GDP should be less than 13%. Am I missing something?\n\nThank you","reasoning":"Accounting definitions of revenue have their own rules about the timing of revenues that may have different timing assumptions than the economic concept of GDP does about when economic activity takes place. We need to check those rules.","id":"14","excluded_ids":["N\/A"],"gold_ids_long":["samsung_contribution\/timingiseverythingwithasc606thenewrevenuerecognitionstandard.txt"],"gold_ids":["samsung_contribution\/timingiseverythingwithasc606thenewrevenuerecognitionstandard_27.txt","samsung_contribution\/timingiseverythingwithasc606thenewrevenuerecognitionstandard_28.txt","samsung_contribution\/timingiseverythingwithasc606thenewrevenuerecognitionstandard_29.txt","samsung_contribution\/timingiseverythingwithasc606thenewrevenuerecognitionstandard_31.txt","samsung_contribution\/timingiseverythingwithasc606thenewrevenuerecognitionstandard_30.txt"],"gold_answer":"$\\begingroup$\n\nAccounting definitions of revenue have their own [ rules about the timing of\nrevenues ](https:\/\/www.firmofthefuture.com\/accounting\/timing-is-everything-\nwith-asc-606-the-new-revenue-recognition-standard\/) that may have\ndifferent timing assumptions than the economic concept of GDP does about when\neconomic activity takes place. But this is probably, at its most serious, a\nsecond-order concern. I agree with you that revenues should be an approximate\nupper bound on the contribution of a firm to GDP. In practice, it is likely far\nless for several reasons:\n\n 1. Inputs contribute to the cost of production but are not part of the output of a firm. 
For example, if you buy car parts and assemble them into cars, the costs of the parts will be reflected in your sales, but they aren't your output. \n 2. In an internationally active firm, some firm output should sometimes be counted in the GDP of other countries. For example, if you make cars in the US and China, you are going to contribute to the GDP of both countries and your sales will reflect both countries' output. \n\nFor a domestic firm, in a setting where GDP = GDI, we can think of the firm's\ncontribution to GDP as the contribution to income. That turns out to be,\nsimplifying a bit, Salaries + Interest on Debt + Profits (After Tax and\nInterest). That's going to be a good deal less than revenues, because sales will\nalso include the costs of other inputs."} +{"query":"Why did the pay of CEOs decrease around the year 2000?\n\nAccording to the book I'm reading (*), the pay of CEOs tends to increase because the divorce between ownership and control allows for abusive compensation practices by the CEOs. Indeed, in the graph below we can see a near exponential rise in CEO compensation (measured as the ratio between the wage of the CEO and that of the average worker) since the 70s. I've found many sources discussing this topic, but none on the decreasing spikes in the years 1994-95 and the early 2000s. Why did this happen?","reasoning":"This is related to the fact that the compensation\/pay to the CEO is tied to stock, which is variable.","id":"15","excluded_ids":["N\/A"],"gold_ids_long":["ceopay\/ExecutiveExcess1999pdf.txt"],"gold_ids":["ceopay\/ExecutiveExcess1999pdf_7.txt"],"gold_answer":"$\\begingroup$\n\n2001 was the time the dot-com bubble burst. CEOs and other highly ranked\nemployees get most of their income from bonus payments, which are often tied\nto stock prices and revenues, and the like (which all suffered). 1994 was due\nto a weak stock market, and few executives exercised their options; hence,\ntotal compensation took a temporary dip. 
There was also a reform by the\nClinton administration around this time, which capped the tax\ndeductibility of employee compensation. This is unlikely to have had a major\nimpact since the majority of CEO compensation is performance-based, which was\nexempt from the cap.\n\nAccording to [ the Institute for Policy Studies - A Decade of Executive Excess:\nThe 1990s Sixth Annual Executive Compensation Survey September 1, 1999\n](https:\/\/ips-dc.org\/wp-content\/uploads\/1999\/09\/Executive-Excess-1999.pdf) :\n\n> Of course, the biggest contributor to exorbitant CEO pay is stock options,\n> which are variable. Indeed, when the stock market was weak in 1994, fewer\n> executives exercised their options and total compensation took a dip.\n> However, most executives still enjoy generous base pay packages and in\n> fact in 1994 a record number earned more than $1 million."} +{"query":"Is there a name for relative diminishing returns, or relative increasing opportunity cost?\n\nConsider the following hypothetical situation:\n\nYou are producing something where the total amount produced, \ud835\udc34\n, is equal to the product of two factors, \ud835\udc4b\n and \ud835\udc4c\n. Hence \ud835\udc34=\ud835\udc4b\u2217\ud835\udc4c\n.\n\n\ud835\udc4b\n and \ud835\udc4c\n both cost $1 to increase by 1.\n\nIn this situation, \ud835\udc4b\n is not said to have diminishing returns, because holding other factors constant, \ud835\udc4b\n has linear returns. 
Increasing \ud835\udc4b\n by 10% will increase output by 10%, and increasing \ud835\udc4b\n by 100% will increase output by 100%.\n\n\ud835\udc4b\n is also not said to have increasing opportunity cost, because the opportunity cost of increasing \ud835\udc4b\n by 1 is equal to increasing \ud835\udc4c\n by 1, regardless of how many \ud835\udc4b\n or \ud835\udc4c\n you have.\n\nBy symmetry, the same arguments are true for \ud835\udc4c\n.\n\nHowever, if \ud835\udc4b=20\n and \ud835\udc4c=10\n (and \ud835\udc34=200\n), then by reducing \ud835\udc4b\n to increase \ud835\udc4c\n, for the same cost, you can increase output by having \ud835\udc4b=15\n and \ud835\udc4c=15\n, so \ud835\udc34=225\n. Also, by spending $1 on \ud835\udc4b\n, you would increase output by 5%, while by spending $1 on \ud835\udc4c\n, you would increase output by 10%.\n\nIn general, increasing one factor will increase the returns of the other factor, and by extension, diminish the relative value of that factor (or increase the opportunity cost in terms of the output, but not in terms of dollars or the alternative purchase).\n\nThis situation is very common in video games, and because investing too heavily into either \ud835\udc4b\n or \ud835\udc4c\n would be inefficient, people will often say that both \ud835\udc4b\n and \ud835\udc4c\n have diminishing returns. However, I've found this to be very misleading, because it implies that \ud835\udc34\n increases by less, marginally, the more \ud835\udc4b\n and \ud835\udc4c\n you have, which isn't the case. 
I'm also not sure if it would be accurate to say \ud835\udc4b\n and \ud835\udc4c\n have increasing opportunity cost, because the cost of both always stays the same.\n\nIs there a name for this kind of factor, and if not, how would you describe it succinctly to get across the idea that having too much of one factor is less optimal than having a balanced amount of both?","reasoning":"If the marginal output of one factor increases with the amount of the other factor, the two factors are said to be complements in production. This is related to supermodular\/submodular functions.","id":"16","excluded_ids":["N\/A"],"gold_ids_long":["diminish_return\/Supermodularfunction.txt"],"gold_ids":["diminish_return\/Supermodularfunction_7.txt","diminish_return\/Supermodularfunction_6.txt","diminish_return\/Supermodularfunction_5.txt","diminish_return\/Supermodularfunction_9.txt","diminish_return\/Supermodularfunction_8.txt"],"gold_answer":"$\\begingroup$\n\nIf the marginal output of one factor increases with the amount of the other\nfactor, the two factors are said to be complements in production. See [\nhere ](https:\/\/en.wikipedia.org\/wiki\/Supermodular_function) for the Wikipedia\nlink.\n\nThis is the case if the production function is supermodular: in the case of\ndifferentiability this amounts to a positive cross partial derivative: $$\n\\frac{\\partial^2 A}{\\partial X \\partial Y} > 0. $$ There is a huge\nliterature in economics on supermodularity and complementarity as it is\ntightly related to comparative statics and provides useful equilibrium\nexistence results (e.g. [ Milgrom and Shannon, 1994\n](https:\/\/www.jstor.org\/stable\/2951479) )."} +{"query":"Proving existence of a unique pure strategy Nash Equilibrium in two-person continuous games\n\nI am working on a game theory exercise and I want to prove existence and uniqueness of pure strategy NE for a 2 person game with continuous strategy spaces. 
Lets call the payoff functions \ud835\udf0b1\n and \ud835\udf0b2\n for players 1 and 2. The strategy spaces are simple one dimensional intervals in Euclidean space that are common across players (\ud835\udc601,\ud835\udc602\u2208\ud835\udc46=[\ud835\udc4e,\ud835\udc4f]\n). What do I need to do to show that the payoff functions are diagonally strictly concave? I am having a hard time wrapping my head around Rosen's paper and I don't need many dimensional strategy spaces and n players for now. I was wondering if there was a simpler, easy to understand version of the result for my simplified game. If someone could also provide some intuition for that condition, it would also be a massive help! Thanks","reasoning":"We need to find formulas, conditions to prove the existence in continuous case.","id":"17","excluded_ids":["N\/A"],"gold_ids_long":["nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf.txt"],"gold_ids":["nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_0.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_4.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_2.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_1.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_3.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_9.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_10.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_6.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_7.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_8.txt","nash_continuous\/48cb7ae4210825b7de3d7fc0bcc8553fMIT6254S10lec06bpdf_5.txt"],"gold_answer":"$\\begingroup$\n\nFor anyone else who is interested, I found a cool note online and apparently\nthe following ugly monstrosity is a sufficient condition for strict diagonal\nconcavity for my 
simplified case:\n\n$ 2s^2_1 \\frac{\\partial^2 \\pi_1}{\\partial s_1^2} + 2s_1s_2 (\\frac{\\partial^2\n\\pi_1}{\\partial s_1 \\partial s_2} + \\frac{\\partial^2 \\pi_2}{\\partial s_2\n\\partial s_1}) + 2s^2_2 \\frac{\\partial^2 \\pi_2}{\\partial s_2^2} < 0\n\\hspace{2mm} \\forall [s_1 \\hspace{2mm} s_2]^T \\neq 0 $\n\nStill no luck on the intuition... Reference: [\nhttps:\/\/ocw.mit.edu\/courses\/6-254-game-theory-with-engineering-applications-\nspring-2010\/48cb7ae4210825b7de3d7fc0bcc8553f_MIT6_254S10_lec06b.pdf\n](https:\/\/ocw.mit.edu\/courses\/6-254-game-theory-with-engineering-applications-\nspring-2010\/48cb7ae4210825b7de3d7fc0bcc8553f_MIT6_254S10_lec06b.pdf)"} +{"query":"Quadratic Form Single Summation notation\n\nI'm trying to translate what feels like a double summation notation , compacted into a single summation. It's the summation definition of quadratic forms.\n\n\ud835\udc44(\ud835\udc651,\ud835\udc652.....\ud835\udc65\ud835\udc5b)=\u2211\ud835\udc56\u2264\ud835\udc57\ud835\udc4e\ud835\udc56\ud835\udc57\ud835\udc65\ud835\udc56\ud835\udc65\ud835\udc57\n\nI assume quadratic forms can be written as the less compact double summation?\n\n\ud835\udc44(\ud835\udc651,\ud835\udc652.....\ud835\udc65\ud835\udc5b)=\u2211\ud835\udc5b\ud835\udc56=1\u2211\ud835\udc5b\ud835\udc57=1\ud835\udc4e\ud835\udc56\ud835\udc57\ud835\udc65\ud835\udc56\ud835\udc65\ud835\udc57\n\nTo count this we would:\n\nFix the outer summation first, so in this case we would be fixing rows and going across columns. Alternately we could switch the summation so we fixed columns and counted rows. This holds for any \ud835\udc5b\u2022\ud835\udc5a\n matrix i believe?\nQuestion: I suppose the \ud835\udc56\u2264\ud835\udc57\n notation has to correspond to this. 
But it's not immediately clicking?","reasoning":"We need to check the definition of double summation to understand the calculation.","id":"18","excluded_ids":["N\/A"],"gold_ids_long":["quadratic_form\/summationpdf.txt"],"gold_ids":["quadratic_form\/summationpdf_3.txt"],"gold_answer":"$\\begingroup$\n\nThe sum $\\sum_{i \\le j}=a_{ij}x_ix_j$ goes over all indices $(i,j)$ with\n$i\\leq j$ . We can do that by either summing over all $i$ and then for each\n$i$ all $j\\geq i$ , which would give us\n$$\\sum_{i=1}^n\\sum_{j=i}^n=a_{ij}x_ix_j,$$ or we could sum over all $j$ and\nthen look at $i\\leq j$ , which would give us\n$$\\sum_{j=1}^n\\sum_{i=1}^j=a_{ij}x_ix_j.$$\n\nHowever, for [ general quadratic forms\n](https:\/\/en.wikipedia.org\/wiki\/Quadratic_form) , one does not restrict to\n$i\\leq j$ , which would correspond to the additional restriction that\n$a_{ij}=0$ for $i>j$ , which need not hold in general."} +{"query":"Derivative to ln(K(t)) in the RBC model\n\nIn the calculation of the equation of motion for capital in the RBC model, I came across this equation:\n\nd ln K_(t+1) \/ d ln K_t = (d K_(t+1) \/ d K_t) * (K_t \/ K_(t+1))\n\nCan someone explain what are the mathematical steps in between? I don't see how exactly the derivative to ln(K(t)) gets us an almost elasticity-like equation.\n\nWould be thankful for any leads. :)","reasoning":"This is related to elastic substitution. 
We need to check its definition and some examples.","id":"19","excluded_ids":["N\/A"],"gold_ids_long":["elastic_substitution\/Elasticityofsubstitution.txt"],"gold_ids":["elastic_substitution\/Elasticityofsubstitution_8.txt","elastic_substitution\/Elasticityofsubstitution_9.txt","elastic_substitution\/Elasticityofsubstitution_6.txt","elastic_substitution\/Elasticityofsubstitution_10.txt","elastic_substitution\/Elasticityofsubstitution_7.txt","elastic_substitution\/Elasticityofsubstitution_4.txt"],"gold_answer":"$\\begingroup$\n\nThat is the formula of the elasticity of substitution of $K_{t+1}$ with\nrespect to $K_t$ expressed differently. Notice that the differential of the\nfunction $\\ln K_{t+1}$ is $d\\ln K_{t+1}=\\frac{1}{K_{t+1}}\\frac{d K_{t+1}}{d\nK_{t+1}}d K_{t+1}$ (see [ section 12.9\n](https:\/\/rads.stackoverflow.com\/amzn\/click\/com\/1292074612) or [ this\n](https:\/\/en.wikipedia.org\/wiki\/Elasticity_of_substitution#Definition) ), then\n$d\\ln K_{t+1}=\\frac{d K_{t+1}}{K_{t+1}}$ . Hence, using the same operation\nwith $\\ln K_{t}$ and combining we get the elasticity of substitution of\n$K_{t+1}$ with respect to $K_t$ :\n\n$$ \\frac{d\\ln K_{t+1}}{d\\ln K_{t}}=\\frac{d K_{t+1}}{d\nK_{t}}\\frac{K_{t}}{K_{t+1}}. $$"} +{"query":"Why have bank deposits gone down with rising interest rates?\n\nFrom an opinion piece in the Washington Post:\n\n\"Since the Federal Reserve began to raise rates about a year ago, deposits leaving the banking sector have totaled nearly $1 trillion\".\n\nWhy would bank deposits go down while interest rates are going up? I thought the higher interest rates would attract more deposits.","reasoning":"This is probably a consequence of our much misunderstood monetary system in which bank loans create new money and the repayment of (the principal of) those loans destroys money. 
We need to check the process of Money creation in the modern economy.","id":"20","excluded_ids":["N\/A"],"gold_ids_long":["deposit_interest\/moneycreationinthemoderneconomypdf.txt"],"gold_ids":["deposit_interest\/moneycreationinthemoderneconomypdf_4.txt","deposit_interest\/moneycreationinthemoderneconomypdf_1.txt","deposit_interest\/moneycreationinthemoderneconomypdf_3.txt","deposit_interest\/moneycreationinthemoderneconomypdf_5.txt","deposit_interest\/moneycreationinthemoderneconomypdf_0.txt","deposit_interest\/moneycreationinthemoderneconomypdf_6.txt","deposit_interest\/moneycreationinthemoderneconomypdf_2.txt"],"gold_answer":"$\\begingroup$\n\nIt's all a consequence of our much misunderstood [ monetary system\n](https:\/\/www.bankofengland.co.uk\/-\/media\/boe\/files\/quarterly-\nbulletin\/2014\/money-creation-in-the-modern-economy.pdf) in which bank loans\ncreate new money and the repayment of (the principal of) those loans destroys\nmoney.\n\nDeposits mainly correspond to what the banks owe to their customers, so they\ncannot really \"leave\" the banking sector. The reason there are fewer deposits is\nthat they expire when loans are repaid. If the amount of new loans being created\neach day is less than the amount of existing loans being repaid, then the\namount of deposits in existence will fall. A higher interest rate reduces\nlending (because it discourages borrowing) and so makes this scenario more\nlikely."} +{"query":"Why is there no study on forecasting methods for the exchange rates of CBDCs? What do you think?\n\nWhat do you think about predicting\/forecasting the exchange rate of CBDCs?\n\n(CBDC = CENTRAL BANK DIGITAL CURRENCY)\n\nI cannot find any paper on this issue in the literature. But there are some papers and studies on forecasting methods for the exchange rate of cryptocurrencies.\n\nI am asking for the reasons why there is no study on forecasting the exchange rates of CBDCs. What do you think? 
Does this issue make no sense?\n\nMy thought is that CBDCs are pegged to the value of that country's fiat currency.","reasoning":"We need to check how many countries have launched a CBDC, as we can't statistically forecast something without any data.","id":"21","excluded_ids":["N\/A"],"gold_ids_long":["exchange_cbdc\/centralbankdigitalcurrenciescbdc.txt"],"gold_ids":["exchange_cbdc\/centralbankdigitalcurrenciescbdc_13.txt","exchange_cbdc\/centralbankdigitalcurrenciescbdc_11.txt","exchange_cbdc\/centralbankdigitalcurrenciescbdc_14.txt"],"gold_answer":"$\\begingroup$\n\n * As of now there are few actual CBDCs in place (China, the Bahamas, Jamaica and Nigeria); other countries only have plans to implement them and are still conducting pilot experiments ( [ https:\/\/blog.chainalysis.com\/reports\/central-bank-digital-currencies-cbdc\/ ](https:\/\/blog.chainalysis.com\/reports\/central-bank-digital-currencies-cbdc\/) ). \n\nYou can't statistically forecast something without any data, and maybe there's\nnot enough available data yet. I think this IMF call for papers ( [\nhttps:\/\/www.imf.org\/en\/News\/Seminars\/Conferences\/2023\/11\/15\/11th-imf-\nstatistical-forum\n](https:\/\/www.imf.org\/en\/News\/Seminars\/Conferences\/2023\/11\/15\/11th-imf-\nstatistical-forum) ) will probably contain many papers about what you're\nasking.\n\n * CBDCs are being considered as a replacement for current currencies and a form of making digital payments (see [ ECB ](https:\/\/www.ecb.europa.eu\/paym\/digital_euro\/html\/index.en.html) ), so there won't be an exchange between regular paper currency and CBDC (at least not officially). It would be like having money on a bank account. 
There will be an exchange rate against other currencies, but not between physical money and digital money of the same currency."} +{"query":"Why do all filled bids receive the same rate, which is the rate of the lowest filled bid, during the US Treasury auction process?\n\nThis question pertains to the US Treasury auction process. {1} states:\n\nTreasury auctions are designed to minimize the cost of financing the national debt by promoting broad, competitive bidding and liquid secondary market trading.\n\nHowever, from my understanding, all competitive and noncompetitive bids that are filled receive the same rate, which is the rate of the lowest filled bid. This seems to contradict {1}, since it does not minimize the cost of financing the national debt.\n\nWhy do all competitive and noncompetitive bids that are filled receive the same rate, which is the rate of the highest filled bid, during the US Treasury auction process?","reasoning":"This is related to the Treasury's design choice between a multiple-price auction format and a single-price auction format. We need to find some pros and cons of these two methods.","id":"22","excluded_ids":["N\/A"],"gold_ids_long":["bid_auction\/ci112pdf.txt"],"gold_ids":["bid_auction\/ci112pdf_5.txt","bid_auction\/ci112pdf_3.txt"],"gold_answer":"$\\begingroup$\n\nThe explanation is given in [ NY FED - The Treasury Auction Process:\nObjectives, Structure, and Recent Adaptations\n](https:\/\/www.newyorkfed.org\/medialibrary\/media\/research\/current_issues\/ci11-2.pdf)\n.\n\n> The Treasury first adopted the multiple-price format when it initiated bill\n> auctions in 1929 and it continued to use that format when it introduced\n> auctions of coupon-bearing securities in the early 1970s. However, when the\n> auction process came under scrutiny in 1991, public officials became\n> interested in alternative formats that might appeal to more investors and\n> that might lead to lower financing costs. 
Several academics had suggested\n> earlier that single-price auctions might reduce financing costs (see Carson\n> [1959], Friedman [1960, 1963], and Smith [1966]). In a single-price auction,\n> a participant can bid its actual reservation yield for a new security, that\n> is, the minimum yield at which it is willing to buy the security. The bidder\n> certainly has no reason to bid a lower yield, but if the auction stops at a\n> higher yield it will get the full benefit of buying at that higher yield. In\n> contrast, the multiple-price format encourages a participant to bid higher\n> than its reservation yield in hopes of getting the security on more\n> favorable terms.\n\nBox 2 explains the results from the test auctions that the treasury conducted.\n\n> The Treasury produced two empirical studies of the results of its experiment\n> with a single-price auction format: Malvey, Archibald, and Flynn (1995) and\n> Malvey and Archibald (1998). The studies calculated\u2014for both single-price\n> and multiple-price auctions\u2014the difference between the auction yield of a\n> security and the yield at which the same security was trading in the when-\n> issued market at the time of the auction. A positive difference indicated\n> that the securities had been auctioned at a yield higher than the one at\n> which they were trading in the when-issued market. For securities auctioned\n> in a multiple-price format, the average difference was statistically\n> significantly greater than zero. For securities auctioned in a single-price\n> format, however, the studies were unable to reject the hypothesis that the\n> average difference was zero. 
These results suggest that moving to a single-\n> price format would lead to lower financing costs.\n\nIf you continue reading you will see that the results are not unambiguous.\nNonetheless, the underlying idea is that this design should lead to lower\ncosts and there is some evidence that it indeed does."} +{"query":"The Canonical New-Keynesian Model\n\nI'm trying to learn about new-keynesian models hw they were derived. However, I found great difficulty in deriving the equations used in many studies and the lack of books and papers that mention their origin. The following equation (see link slide 7\/26, https:\/\/slideplayer.com\/slide\/10639387\/) represents the aggregate demand block of the canonical gap model (reduced-form new-keynesian model) which is a s follows,\n\n\ud835\udc66\u0302 \ud835\udc61=\ud835\udc4e1\ud835\udc66\u0302 \ud835\udc61\u22121\u2212\ud835\udc4e2\ud835\udc5a\ud835\udc50\ud835\udc56\ud835\udc61+\ud835\udc4e3\ud835\udc66\u0302 \ud835\udc61+1+\ud835\udf16\ud835\udc66\ud835\udc61,\n\ud835\udc5a\ud835\udc50\ud835\udc56\ud835\udc61=\ud835\udc4e4\ud835\udc67\u0302 \ud835\udc61+(1\u2212\ud835\udc4e4)\ud835\udc5f\u0302 \ud835\udc61.\n\nWhere \ud835\udc66\u0302 \ud835\udc61\n is the output gap, \ud835\udc5a\ud835\udc50\ud835\udc56\ud835\udc61\n represents the monetary conditions index, \ud835\udc5f\u0302 \ud835\udc61\n refers to the real interest rate gap, \ud835\udc67\u0302 \ud835\udc61\n denotes the real exchange rate gap. Can anyone please show how such equation was derived or direct me to its source. 
Thanks in advance.","reasoning":"We need to check the derivation of the model linearized equilibrium conditions starting from the optimization problems and more details into the estimation of the model using bayesian techniques.","id":"23","excluded_ids":["N\/A"],"gold_ids_long":["new_keynesian\/benchmarkdsge.txt"],"gold_ids":["new_keynesian\/benchmarkdsge_13.txt","new_keynesian\/benchmarkdsge_53.txt","new_keynesian\/benchmarkdsge_21.txt","new_keynesian\/benchmarkdsge_10.txt","new_keynesian\/benchmarkdsge_5.txt","new_keynesian\/benchmarkdsge_20.txt","new_keynesian\/benchmarkdsge_12.txt","new_keynesian\/benchmarkdsge_33.txt","new_keynesian\/benchmarkdsge_39.txt","new_keynesian\/benchmarkdsge_1.txt","new_keynesian\/benchmarkdsge_23.txt","new_keynesian\/benchmarkdsge_63.txt","new_keynesian\/benchmarkdsge_15.txt","new_keynesian\/benchmarkdsge_51.txt","new_keynesian\/benchmarkdsge_45.txt","new_keynesian\/benchmarkdsge_50.txt","new_keynesian\/benchmarkdsge_27.txt","new_keynesian\/benchmarkdsge_2.txt","new_keynesian\/benchmarkdsge_48.txt","new_keynesian\/benchmarkdsge_9.txt","new_keynesian\/benchmarkdsge_16.txt","new_keynesian\/benchmarkdsge_35.txt","new_keynesian\/benchmarkdsge_4.txt","new_keynesian\/benchmarkdsge_41.txt","new_keynesian\/benchmarkdsge_59.txt","new_keynesian\/benchmarkdsge_62.txt","new_keynesian\/benchmarkdsge_29.txt","new_keynesian\/benchmarkdsge_40.txt","new_keynesian\/benchmarkdsge_28.txt","new_keynesian\/benchmarkdsge_55.txt","new_keynesian\/benchmarkdsge_22.txt","new_keynesian\/benchmarkdsge_19.txt","new_keynesian\/benchmarkdsge_26.txt","new_keynesian\/benchmarkdsge_54.txt","new_keynesian\/benchmarkdsge_36.txt","new_keynesian\/benchmarkdsge_18.txt","new_keynesian\/benchmarkdsge_52.txt","new_keynesian\/benchmarkdsge_32.txt","new_keynesian\/benchmarkdsge_3.txt","new_keynesian\/benchmarkdsge_8.txt","new_keynesian\/benchmarkdsge_42.txt","new_keynesian\/benchmarkdsge_0.txt","new_keynesian\/benchmarkdsge_25.txt","new_keynesian\/benchmarkdsge_31.txt","new_ke
ynesian\/benchmarkdsge_56.txt","new_keynesian\/benchmarkdsge_14.txt","new_keynesian\/benchmarkdsge_64.txt","new_keynesian\/benchmarkdsge_7.txt","new_keynesian\/benchmarkdsge_60.txt","new_keynesian\/benchmarkdsge_57.txt","new_keynesian\/benchmarkdsge_61.txt","new_keynesian\/benchmarkdsge_58.txt","new_keynesian\/benchmarkdsge_34.txt","new_keynesian\/benchmarkdsge_46.txt","new_keynesian\/benchmarkdsge_24.txt","new_keynesian\/benchmarkdsge_6.txt","new_keynesian\/benchmarkdsge_43.txt","new_keynesian\/benchmarkdsge_47.txt","new_keynesian\/benchmarkdsge_38.txt","new_keynesian\/benchmarkdsge_17.txt","new_keynesian\/benchmarkdsge_44.txt","new_keynesian\/benchmarkdsge_11.txt","new_keynesian\/benchmarkdsge_49.txt","new_keynesian\/benchmarkdsge_30.txt","new_keynesian\/benchmarkdsge_37.txt"],"gold_answer":"$\\begingroup$\n\nI would recommend first to check [ Sims' class notes\n](https:\/\/www3.nd.edu\/%7Eesims1\/new_keynesian_2017.pdf) in Graduate Macro\nTheory. I think it is both very gentle and straightforward introduction to\nthis kind of models. In these notes he derives the model linearized\nequilibrium conditions starting from the optimization problems.\n\nOnce you've read Sims notes, probably you want to check [ Fenandez-\nVillaverde's \"A Baseline DSGE Model\" notes\n](https:\/\/www.sas.upenn.edu\/%7Ejesusfv\/benchmark_DSGE.pdf) in which he\ndevelops essentially the same model but including some extensions, plus some\ninsights into the estimation of the model using bayesian techniques.\n\nIf you want to understand these types of models in a more general way and in\nmore depth, I'd recommend the book [ Recursive Methods in Economic Dynamics\n](https:\/\/www.hup.harvard.edu\/catalog.php?isbn=9780674750968) by Stokey, Lucas\nand Prescott, which is the standard reference. 
This book develops the\nrecursive approach to model economic problems while providing the necessary\nmath background, although you are supposed to have some exposure to proofs and\nreal analysis before going into this reading.\n\nIn line with the last recommendation is [ Recursive Macroeconomic Theory\n](https:\/\/mitpress.mit.edu\/9780262038669\/recursive-macroeconomic-theory\/) by\nLjungqvist and Sargent. This book offers a wider range of topics than SLP, but\nis also, to my taste, more of a reference work, since it assumes some mathematical and\nrigorous-macro background.\n\n**TL;DR** . [ Sims' class notes\n](https:\/\/www3.nd.edu\/%7Eesims1\/new_keynesian_2017.pdf) will give you some\nworking knowledge of NK models; go into [ Fernandez-Villaverde's \"A Baseline\nDSGE Model\" notes ](https:\/\/www.sas.upenn.edu\/%7Ejesusfv\/benchmark_DSGE.pdf)\nif you want to get a little bit further. Check [ Recursive Methods in Economic\nDynamics ](https:\/\/www.hup.harvard.edu\/catalog.php?isbn=9780674750968) and [\nRecursive Macroeconomic Theory\n](https:\/\/mitpress.mit.edu\/9780262038669\/recursive-macroeconomic-theory\/) if\nyou want to _really_ understand these types of models."} +{"query":"Which covariates can I include in my fixed-effects regression?\n\nI am doing a difference-in-differences analysis of an event that affected several states in the US. I am interested in understanding the effects of this event on state-level unemployment rates. I have state-level data on demographics for several years before and after this event. 
My question is what covariates should I include if I am estimating an equation of the following form:\n\n$unempRate_{jt} = \gamma_j + \alpha_t + \beta D_{jt} + \delta X_{jt} + \epsilon_{jt}$\n\nwhere $\gamma_j$ are state fixed effects; $\alpha_t$ are time fixed effects; $D_{jt}$ are dummies - 1 if state $j$ is affected at time $t$, and 0 otherwise; and $X_{jt}$ are time-varying state-level covariates. The coefficient of interest is $\beta$.\n\nSo far, I have included population and average household income at the state level.\n\nWhat more covariates can I include? How does one decide which covariates to include in a setting like this? What is the guiding philosophy?","reasoning":"We need to understand the back-door criterion via Directed Acyclic Graphs (used to identify the average causal effect (ACE) of a treatment X on an outcome Y).","id":"24","excluded_ids":["N\/A"],"gold_ids_long":["fixed_effect\/backdoorcriterion.txt"],"gold_ids":["fixed_effect\/backdoorcriterion_0.txt","fixed_effect\/backdoorcriterion_2.txt","fixed_effect\/backdoorcriterion_1.txt","fixed_effect\/backdoorcriterion_3.txt","fixed_effect\/backdoorcriterion_4.txt"],"gold_answer":"$\\begingroup$\n\nUnderstanding the back-door criterion via Directed Acyclic Graphs ( [\nhttp:\/\/causality.cs.ucla.edu\/blog\/index.php\/category\/back-door-criterion\/\n](http:\/\/causality.cs.ucla.edu\/blog\/index.php\/category\/back-door-criterion\/) )\nhelped me understand how to add covariates to my econometric model. 
While\nstudying the material, I also found a wonderful website that tells which\nvariables to condition on and how to test whether the DAG you draw is correct:\n[ http:\/\/www.dagitty.net\/dags.html ](http:\/\/www.dagitty.net\/dags.html)"} +{"query":"A question on Marx' \"Value, price and profit\"\n\nIn his lecture Value, price and profit Karl Marx argues that profit is made by capitalists by (i) selling commodities for their real price (= real value expressed in money), (ii) paying the workers (down the complete supply chain) as wages the real value (expressed in money) of the commodities they produce, but (iii) letting them work more time than needed for the production of these commodities (= produce more commodities, but unpaid for).\n\nI don't understand why he insists on (ii) and (iii) instead of just saying that the capitalists simply pay the workers less than the real price (by which they sell the commodities).\n\nWhat's the difference between \"working full time for half wages\" and \"working half time for full wages and half time for no wages\"?","reasoning":"We need to check Marx's description of value, price and profit to understand the 
meaning.","id":"25","excluded_ids":["N\/A"],"gold_ids_long":["valuepriceprofit\/ch02htmc10.txt"],"gold_ids":["valuepriceprofit\/ch02htmc10_21.txt","valuepriceprofit\/ch02htmc10_43.txt","valuepriceprofit\/ch02htmc10_51.txt","valuepriceprofit\/ch02htmc10_18.txt","valuepriceprofit\/ch02htmc10_5.txt","valuepriceprofit\/ch02htmc10_23.txt","valuepriceprofit\/ch02htmc10_27.txt","valuepriceprofit\/ch02htmc10_37.txt","valuepriceprofit\/ch02htmc10_31.txt","valuepriceprofit\/ch02htmc10_60.txt","valuepriceprofit\/ch02htmc10_55.txt","valuepriceprofit\/ch02htmc10_65.txt","valuepriceprofit\/ch02htmc10_47.txt","valuepriceprofit\/ch02htmc10_42.txt","valuepriceprofit\/ch02htmc10_12.txt","valuepriceprofit\/ch02htmc10_64.txt","valuepriceprofit\/ch02htmc10_19.txt","valuepriceprofit\/ch02htmc10_44.txt","valuepriceprofit\/ch02htmc10_36.txt","valuepriceprofit\/ch02htmc10_14.txt","valuepriceprofit\/ch02htmc10_25.txt","valuepriceprofit\/ch02htmc10_58.txt","valuepriceprofit\/ch02htmc10_30.txt","valuepriceprofit\/ch02htmc10_50.txt","valuepriceprofit\/ch02htmc10_49.txt","valuepriceprofit\/ch02htmc10_6.txt","valuepriceprofit\/ch02htmc10_59.txt","valuepriceprofit\/ch02htmc10_16.txt","valuepriceprofit\/ch02htmc10_54.txt","valuepriceprofit\/ch02htmc10_56.txt","valuepriceprofit\/ch02htmc10_38.txt","valuepriceprofit\/ch02htmc10_29.txt","valuepriceprofit\/ch02htmc10_61.txt","valuepriceprofit\/ch02htmc10_3.txt","valuepriceprofit\/ch02htmc10_45.txt","valuepriceprofit\/ch02htmc10_17.txt","valuepriceprofit\/ch02htmc10_8.txt","valuepriceprofit\/ch02htmc10_63.txt","valuepriceprofit\/ch02htmc10_39.txt","valuepriceprofit\/ch02htmc10_22.txt","valuepriceprofit\/ch02htmc10_41.txt","valuepriceprofit\/ch02htmc10_4.txt","valuepriceprofit\/ch02htmc10_7.txt","valuepriceprofit\/ch02htmc10_20.txt","valuepriceprofit\/ch02htmc10_10.txt","valuepriceprofit\/ch02htmc10_33.txt","valuepriceprofit\/ch02htmc10_46.txt","valuepriceprofit\/ch02htmc10_13.txt","valuepriceprofit\/ch02htmc10_2.txt","valuepriceprofit\/ch02htmc10_11.txt
","valuepriceprofit\/ch02htmc10_15.txt","valuepriceprofit\/ch02htmc10_35.txt","valuepriceprofit\/ch02htmc10_28.txt","valuepriceprofit\/ch02htmc10_62.txt","valuepriceprofit\/ch02htmc10_24.txt","valuepriceprofit\/ch02htmc10_53.txt","valuepriceprofit\/ch02htmc10_9.txt"],"gold_answer":"$\\begingroup$\n\nYour summary of Marx's argument in these lectures is inaccurate. In these\nlectures too, Marx advances his known theory of surplus. He does not say that\ncapitalists pay to the workers the full value of the commodities the workers\n_produce_ , but the full value needed for the workers and their families to\nsurvive.\n\n**[ Quote ](https:\/\/www.marxists.org\/archive\/marx\/works\/1865\/value-price-\nprofit\/ch02.htm#c10) **\n\n> We must now return to the expression, \u201cvalue, or price of labour.\u201d We have\n> seen that, in fact, it is only the value of the labouring power, measured by\n> the values of commodities necessary for its maintenance.\n\nMarx argument is the well-known one: when you work $H$ hours, you produce\n$Y$ goods that are more than you need to survive -but you are paid only\n$Wg model's conclusions?\n\nIn Capital in the Twenty-First Century Piketty suggests that if the return on capital is higher than the economic growth, then wealth inequality in society will rise.\n\nYet, this model seems to ignore that people leave their inheritance to more than one person on average and form marriges with people of not ideally equal wealth. I.e. 
if a billionaire marries a millionaire and they have three children to whom they pass their wealth after 50 years, they'll pass $\frac{\$1000m+\$1m}{3}(1+r)^{50}$ to each of them, making the society more equal if $r$ is small enough.\n\nDoes such an effect of inheritances break the model's conclusions (and inequality may decrease when $r$ is significantly higher than $g$) or is the effect negligible?","reasoning":"We need to check more details about Piketty's model.","id":"26","excluded_ids":["N\/A"],"gold_ids_long":["inheritance_inequality\/830.txt"],"gold_ids":["inheritance_inequality\/830_12.txt","inheritance_inequality\/830_19.txt","inheritance_inequality\/830_21.txt","inheritance_inequality\/830_1.txt","inheritance_inequality\/830_15.txt","inheritance_inequality\/830_22.txt","inheritance_inequality\/830_2.txt","inheritance_inequality\/830_14.txt","inheritance_inequality\/830_23.txt","inheritance_inequality\/830_20.txt","inheritance_inequality\/830_7.txt","inheritance_inequality\/830_3.txt","inheritance_inequality\/830_17.txt","inheritance_inequality\/830_18.txt","inheritance_inequality\/830_4.txt","inheritance_inequality\/830_10.txt","inheritance_inequality\/830_6.txt","inheritance_inequality\/830_16.txt","inheritance_inequality\/830_9.txt","inheritance_inequality\/830_11.txt","inheritance_inequality\/830_24.txt","inheritance_inequality\/830_5.txt","inheritance_inequality\/830_13.txt","inheritance_inequality\/830_8.txt"],"gold_answer":"$\\begingroup$\n\nIt actually does not change the conclusions of his model. Piketty's model is not\nabout income inequality per se, it is about inequality between capital's and\nlabor's share of income. Piketty's model does not even have heterogeneous\nagents with different wealth or income in it. It has separate capital incomes\nand separate labor incomes, but the model does not even explicitly model these\nas two separate groups. 
What happens in Piketty's model is that when\n$r>g$, the share of total GDP that goes to capitalists as a whole class\nincreases. The model is not about inequality between individual capitalists.\nIf the capitalist class were very large, its members might even be poorer than\nworkers per capita; Piketty's model does not analyze this at all. Piketty does\nargue that this should lead to more inequality in the real world, because he argues\nthat in real life capital _tends_ to be concentrated, but that is not a _direct_\nconclusion of his model, which only makes predictions about income shares (see\nthe overview of Piketty's model in [ Gusella 2020\n](https:\/\/www.researchgate.net\/publication\/352198040_Notes_on_Piketty%27s_model)\n).\n\nAs such, your proposed modification would not change the conclusions of\nPiketty's model, since the inheritors of those millionaires (assuming they have\nzero labor income) would still be members of the capitalist class. If the\ncapitalist class gets 30% of GDP (let's say GDP is 100), then regardless of whether\nthere are 100 capitalists each getting 0.3 or 1 capitalist getting the whole 30,\nthe class is still getting 30% of GDP in either case. Piketty's model just says\nthat this share will keep increasing as long as $r>g$, so labor will be\ngetting a smaller and smaller share, but this is inequality between classes\nof people, not income inequality among individuals."} +{"query":"What will be the Effect of the Russian oil Price Cap\n\nG7 countries have just announced that they will not buy oil from Russia unless the price is USD 60\/barrel or below. What will be the effect of this cap - will it be an effective economic sanction? Why can\u2019t Russia sell the oil to a non-G7 intermediary who sells it to G7? 
And what will it do to the market price of oil?","reasoning":"One effect could be reduced revenue to Russia.","id":"27","excluded_ids":["N\/A"],"gold_ids_long":["oil_price_cap\/businessreview20220928willanoilpricecapmanagetoreducethepdf.txt"],"gold_ids":["oil_price_cap\/businessreview20220928willanoilpricecapmanagetoreducethepdf_0.txt"],"gold_answer":"$\\begingroup$\n\n> What will be the effect of this cap - will it be an effective economic\n> sanction?\n\nThe effect will be that Russia gets less revenue from the EU and the West for the\nsale of its oil.\n\nYes, according to economic theory, it will likely be an effective sanction.\nContrary to many sanctions on Russia prior to this, this is actually a 'smart'\nsanction. That is, this sanction actually has some economic rationale behind\nit. Although for oil it would be best to use tariffs (a price cap makes\nmore sense for gas), it still is more sensible than a total embargo (see [\nMikhail et al 2022\n](https:\/\/www.zora.uzh.ch\/id\/eprint\/222702\/1\/172987_global_economic_consequences_of_the_war_in_ukraine_sanctions_supply_chains_and_sustainability.pdf)\n).\n\n> Why can\u2019t Russia sell the oil to a non-G7 intermediary who sells it to G7?\n> And what will it do to the market price of oil?\n\nThey can, but only at a discount. So this hurts Russia's oil\nrevenue. There is also consensus among economists that this will hurt the revenues\nof Russia (see [ here\n](http:\/\/eprints.lse.ac.uk\/116998\/1\/businessreview_2022_09_28_will_an_oil_price_cap_manage_to_reduce_the.pdf)\n).\n\nThe reason why this sanction is more rational than a pure embargo is that under\nan embargo you can still have resale of oil (which still hurts Russia due to discounts),\nbut an embargo pushes the oil price higher for the EU, so it unnecessarily damages\nEuropean economies, and the point of sanctions is to hurt the other side, not\nto shoot yourself in the foot. 
In this case, if the cap is chosen at the\noptimal level (which can be debated), the sanction might simultaneously lower\nRussian oil revenue while preventing the price of oil from getting so high that\neither Russia offsets its losses through the price increase or EU\neconomies are hurt too much. At the same time, it minimizes the damage to the European\neconomies. Again, one can debate whether the right price was chosen to achieve\nthis goal, and for oil a tariff might be better, but in principle this is much\nmore rational than, let's say, an embargo."} +{"query":"Effects of retirement and unemployment on PPF\n\nWhich of the following will not shift a country's production possibility frontier (PPF)? An increase in the age at which people retire or a fall in unemployment?\n\nTo me, it is the increase in the age at which people retire that will not shift the country's PPF; this situation will represent a point in the area bounded by the PPF, the axes and the origin, meaning that we can produce more, but the PPF is not shifted - we still have the same technology and tools to produce. If a country can produce 100 cars and 50 tons of wheat, then if people retire later, I do not see why the PPF would shift.\n\nOn the other hand, a fall in unemployment can imply more workers in sectors developing new technologies, so it will shift it; we can produce more.\n\nThe correct answer was apparently a fall in unemployment. This is very strange to me, according to my justification. I believe that if we make the assumption that workers can only be employed in PPF sectors, then yes, a fall in unemployment will not shift the PPF. But that means that there are no people working in labs, in science, in developing new technologies etc. There are only people producing wheat and cars... 
Also, the question does not indicate by how much the age increases, so we can be very annoying and say that if I increase the age of retirement by 1 year or 1 day, it surely won't shift the PPF...\n\nAlso, mathematically speaking, if a fall in unemployment does not shift the PPF, then a rise in unemployment does not shift the PPF either. And an increase in the age at which people retire \u27f9 a rise in unemployment \u27f9 no shift of the PPF. So for me an increase in the age at which people retire will not shift a country's PPF.\n\nWho is right here?","reasoning":"An economy with unemployment is not fully utilizing its resources. We may refer to the PPF, which shows all the possible combinations of output for two goods that can be produced using all factors of production.","id":"28","excluded_ids":["N\/A"],"gold_ids_long":["ppf_retire\/ProductionE28093possibilityfrontier.txt"],"gold_ids":["ppf_retire\/ProductionE28093possibilityfrontier_8.txt","ppf_retire\/ProductionE28093possibilityfrontier_14.txt","ppf_retire\/ProductionE28093possibilityfrontier_15.txt","ppf_retire\/ProductionE28093possibilityfrontier_11.txt","ppf_retire\/ProductionE28093possibilityfrontier_13.txt","ppf_retire\/ProductionE28093possibilityfrontier_5.txt","ppf_retire\/ProductionE28093possibilityfrontier_12.txt","ppf_retire\/ProductionE28093possibilityfrontier_7.txt","ppf_retire\/ProductionE28093possibilityfrontier_9.txt","ppf_retire\/ProductionE28093possibilityfrontier_17.txt","ppf_retire\/ProductionE28093possibilityfrontier_16.txt","ppf_retire\/ProductionE28093possibilityfrontier_6.txt"],"gold_answer":"$\\begingroup$\n\nYou can google [ PPF unemployment\n](https:\/\/www.google.com\/search?q=%22unemployment%22+ppf+frontier) and [ PPF\nretirement age\n](https:\/\/www.google.com\/search?q=%22retirement%22+ppf+frontier) . 
One of the\nfirst entries, ` homework.study.com ` , discusses both circumstances.\n\nIn any case, a $\\color{blue}P$roduction $\\color{blue}P$ossibility\n$\\color{blue}F$rontier ( $\\color{blue}{PPF}$ ) represents all possible\ncombinations of output that can be produced by fully and efficiently utilizing\nall factors of production. See for example [ Wikipedia\n](https:\/\/en.wikipedia.org\/wiki\/Production%E2%80%93possibility_frontier) . It\nis clear that an economy with unemployment is not fully utilizing its\nresources (labour force). The Wikipedia article also offers a formula for a\nsimple non-linear frontier with two goods (linear would do as well).\n\n$$q2 = (q1 - 7\/2)^{-1} + 7\/2$$\n\nwhere q2 and q1 are the quantities for each product. In the simplest case,\nthere are only workers, and no technology or capital. In this case, you can\njust think of 7 as the number of workers (say, millions).\n\nBelow, I use [ Julia ](https:\/\/julialang.org\/) to plot this.\n\n \n \n using Plots # provides plot, title!, ylims!, annotate!, etc.\n \n ppf=[(unit -7\/2)^-1 + 7\/2 for unit in 0:0.1:(7\/2)]\n area = [0 for i in ppf]\n plot(0:0.1:(7\/2),ppf, label = \"PPF\",fillrange = area, fillalpha = 0.35, c = 3)\n title!(\"PPF Example Wikipedia\")\n ylabel!(\"Quantity of second good Q2\")\n xlabel!(\"Quantity of first good Q1\")\n ylims!((0.0,5))\n xlims!((0.0,5))\n annotate!([1.25], [1], [\"possible region\"], :green)\n annotate!([3.5], [3.5], [\"not possible region\"], :red)\n annotate!([collect(0:0.1:(7\/2))[length(ppf)-length(ppf[ppf .< (1.5-7\/2)^-1 + 7\/2])-1]], [(1.5 -7\/2)^-1 + 7\/2].-0.11, (\"*\", :blue, :left))\n \n\n[ ![enter image description here](https:\/\/i.sstatic.net\/uNSvn.png)\n](https:\/\/i.sstatic.net\/uNSvn.png)\n\nThis is more or less identical to the picture from Wikipedia, but I added a\nblue dot representing the point where q1 = 1.5, which means q2 = 3. 
In this\ncase, there is full employment and the economy is working at full capacity.\n\nAdding a few lines similar to [ this answer\n](https:\/\/quant.stackexchange.com\/a\/69780\/54838) makes the chart interactive.\nUnemployment means that only a subset of the entire labour force will be used,\nas can be seen [ here\n](https:\/\/www.google.com\/search?q=labour+force+employment+unemployment&sxsrf=ALiCzsa7gTY2H2GiDGrB9Vv5UhVkcP4lhg:1669329545989&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjn5-eU8cf7AhXUSfEDHfEPDdoQ_AUoAnoECAIQBA&cshid=1669329562336205&biw=1536&bih=792&dpr=1.25)\n.\n\n$$ Labour \\ Force = Employed + Unemployed$$\n\nHence, if you subtract the unemployed from the labour force, you end up with the\nnumber of employed people and a production that is not at full capacity.\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/Togy2.gif)\n](https:\/\/i.sstatic.net\/Togy2.gif)\n\nIncreasing the workforce moves the PPF outwards. People who retire would [ no\nlonger be part\n](https:\/\/www.econport.org\/content\/handbook\/Unemployment\/Define.html#:%7E:text=Labor%20Force%20Participation%20Rate&text=In%20addition%2C%20students%2C%20retirees%2C,called%20labor%20force%20participation%20rate.)\nof the labour force, resulting in a downward shift in the PPF. At the moment\nthe retirement age is raised, the people affected are not retired yet, so it will not impact the PPF\nimmediately. However, as soon as the first person who would have retired under the\nold system keeps working, the PPF looks different.\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/FBVj5.gif)\n](https:\/\/i.sstatic.net\/FBVj5.gif)"} +{"query":"At what point would negative accounting equity for a central bank have macroprudential implications?\n\nThe yardstick for central bank \"health\" and\/or \"credibility\" can be nebulous at times. It is sometimes argued that a balance-sheet assessment alone does not capture the nuance of a central bank because the central bank's true \"asset\" is its ability to print money. 
But by the same token, central banks are now becoming bigger buyers of certain securities. In the case of Japan, the BOJ holds over 50% of some maturities. The point being that when a central bank is expected to play a meaningful role when crises emerge, then its \"liabilities\" will also be incalculable.\n\nFor commercial banks, equity serves as a quasi-clawback provision, as it allows creditors to get repaid should the bank encounter extreme solvency issues. But for central banks, I'm not sure what \"equity\" really signals to anybody.\n\nQuestion\nAt what point does negative accounting equity register on the broader macroprudential radar and why? I suppose that even though one cannot have a \"run\" on a central bank, it could be seen as a BOP or FX crisis.","reasoning":"The central bank is different from commercial banks; e.g., the risks may also be different.","id":"29","excluded_ids":["N\/A"],"gold_ids_long":["negative_commercial\/bispap71pdf.txt"],"gold_ids":["negative_commercial\/bispap71pdf_3.txt","negative_commercial\/bispap71pdf_4.txt","negative_commercial\/bispap71pdf_2.txt"],"gold_answer":"$\\begingroup$\n\n> At what point does negative accounting equity register on the broader\n> macroprudential radar and why?\n\nIt does not. The reason is that negative equity does not pose any\nmacroeconomic risk; as such, it cannot be registered by the 'macroprudential\nradar'. As argued by [ David Archer and Paul Moser-Boehm (2013)\n](https:\/\/www.bis.org\/publ\/bppdf\/bispap71.pdf) :\n\n> Central banks are not commercial banks. They do not seek profits. Nor do\n> they face the same financial constraints as private institutions. In\n> practical terms, this means that most central banks could lose enough money\n> to drive their equity negative, and still continue to function completely\n> successfully. 
For most central banks, one would have to go far to construct\n> a scenario under which they might have to compromise their policy objectives\n> in order to keep paying their bills.\n\nNegative equity simply does not pose any macroprudential risks whatsoever.\nThere are political risks, or to be more precise, risks of politicization of the\ncentral bank, but no macroprudential risks ( [ Stella 1997\n](https:\/\/www.imf.org\/external\/pubs\/ft\/wp\/wp9783.pdf) ).\n\n> I suppose that even though one cannot have a \"run\" on a central bank, it\n> could be seen as a BOP or FX crisis.\n\nA central bank does not need equity for FX intervention, and central bank equity\ndoes not matter for the BOP, as it does not constrain the quantity\nof CB reserves."} +{"query":"Would the more robust global economy resulting from the abolition of trade be worth any costs? What would happen to the economy?\n\nHow can a giant volcanic eruption devastate the world? Aside from the immediate fatalities, the extent to which an eruption can devastate the world\u2019s economy and climate cannot be understated. A good comparison would be the last recorded magnitude 7 eruption, at Tambora. While the death toll was severe, the eruption impacted the entire world. It launched huge amounts of volcanic ash, water and sulphuric acid into the atmosphere, obscuring the Sun and repelling some solar radiation. The Tambora eruption\u2019s volcanic winter hit hard in 1816, which has become known as the Year Without a Summer. Temperatures dropped worldwide and climate and weather changes were felt everywhere. But Europe and North America were arguably the most severely hit, with lakes and rivers being frozen over in July and August. Crop damage was rampant, harvests ruined and food shortages widespread. Food prices skyrocketed and violent riots erupted. Malnourishment soon became a severe issue, facilitating disease epidemics that killed tens of thousands. Could it get worse? 
The world is already experiencing severe food shortages and rising prices, though these were largely caused by inflation and the Russo-Ukraine War. However, incidents like this, the COVID-19 pandemic and the blocking of the Suez Canal serve to underscore just how fragile the global supply chain is.\n\nGiven these risks, would it be better for the economy to abolish trade completely and develop a principle of self-sufficiency, where a state will attempt to find any way to make and obtain its resources and products completely by itself if it is even remotely possible to do so? Would the increased robustness of an autarkic economy outweigh any economic costs (many of these could be alleviated by gradually making the switch) such as increased expenditure on finding ways to obtain certain resources without trade? Also, what would happen to the global economy as a whole? Which countries would be most and least affected and in what ways?","reasoning":"We could check some examples, e.g., the consequences of Brexit for\nUK trade and living 
standards.","id":"30","excluded_ids":["N\/A"],"gold_ids_long":["abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf.txt"],"gold_ids":["abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_3.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_6.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_7.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_9.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_0.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_8.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_2.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_1.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_5.txt","abolish_trade\/lseacukstorageLIBRARYSecondarylibfilesharedrepositoryContentLSE20BrexitVote20blogbrexit02pdf_4.txt"],"gold_answer":"$\\begingroup$\n\n> Given these risks, would it be better to abolish trade completely and\n> develop a principle of self-sufficiency, where a state will attempt to find\n> any way to make and obtain its resources and products completely by itself\n> if it is even remotely possible to do so?\n\nNo it would not be better abolishing trade completely, if by better you are\nasking if countries would have higher material levels of welfare.\n\nFor most countries that would mean literally going back to dark ages as some\nsmall countries do not have resources to produce modern technologies (for\nexample lithium for batteries is 
available only in a few locations around the\nworld). Big countries such as the USA, Russia or China might not go all the way back\nto the stone age, but welfare standards would plunge by decades if not\ncenturies, and most modern comforts would not be widely accessible to the public.\n\nEven if we allowed trade in commodities that are not physically present in the\nhome country, it would lead to a major economic disaster. People trade precisely\nbecause trade is beneficial. Carpenters do not grow their own food; they\nspecialize in what they can do comparatively better than other people and then\ntrade for food with farmers. In the same way, countries specialize in what they\ncan do comparatively better and then trade. This leads to higher living\nstandards, because someone who focuses only on farming can produce\nmore food than someone who is not a specialist farmer but a jack of all trades,\nmaster of none.\n\nIn many places that might even cause famines, since some countries have a higher\npopulation than can be sustained without any import of food (e.g. Singapore).\nIt would certainly lead to a great decrease in human population globally, and not\nonly because of the lack of food.\n\nThis being said, it is of course technically possible to do it if people were\nfine with a massive decrease in living standards. But it would be very hard,\nsince people would try to break the law and trade with foreigners anyway (people\nsmuggle even when there are just minimal restrictions on trade), so it would\nrequire a very heavy-handed response from the government, similar to the USSR's iron\ncurtain, that every country would have to maintain (e.g. heavily\nfortified\/guarded border walls). In addition, I do not believe any democratic\nelectorate would be willing to put up with becoming impoverished, so I do not\nbelieve that this would be possible to implement without autocracy. 
Even if\npeople foolishly voted for this, they would promptly vote against it once\nthe effects became visible.\n\nYou can see that in the case of Brexit. Brexit is not even full autarky\n(which is what you propose); it is just an exit from the free trade area with the EU\nthat makes trade for Britain a little bit more difficult, yet it causes a\nsignificant loss of welfare (approximately a 6.3-9.5% loss in GDP per capita in\nthe long term and an immediate loss of about 2.6% in the short run - [ Dhingra et al 2016\n](http:\/\/eprints.lse.ac.uk\/66144\/) ). As a consequence, many voters now say\nthey would switch their vote ( [ yougov\n](https:\/\/yougov.co.uk\/topics\/politics\/articles-reports\/2021\/02\/02\/britons-\nwould-vote-remain-are-less-sure-about-re-j) ).\n\nYour proposal would be like Brexit on steroids; the effects would be\nseveral orders of magnitude larger.\n\n> Would the robustness of an economy without trade and other benefits I\u2019m not\n> sure of right now be worth any downsides, such as economic strain (because\n> countries aren\u2019t getting money from trade agreements and are using more on\n> attempting to find a way to obtain certain resources without trade)?\n\nThis is a question for moral philosophers to answer.\n\nTrade makes people better off; it allows them to achieve higher standards of\nwelfare and it reduces poverty. On the other hand, it probably hurts more to be\nrich and then have your riches taken away from you than to always have been\npoor, because people get used to the comfort, and if you never experienced the\ngood life then you don't know what you are missing.\n\nScience cannot determine whether the benefit of always being poor and thus\nrobust to the loss of riches is better than being rich with some small but real\nprobability of becoming poor. 
That is for moral philosophers to ponder.\n\nIt would be equivalent to asking an engineer whether the costs of building,\nwith all the modern comforts, in places where there are earthquakes are worth\nthe benefit of forcing people to live in tents but having the city virtually\nimmune to earthquakes. After all, if everyone lives in tents then the city is\nrobust to any earthquake, because even a heavy earthquake will do minimal damage\nif there are no firm structures in the city. So is it worth making the\nexchange? That's a philosophical question.\n\n> Also, what would happen to the global economy as a whole?\n\nWithout question it would collapse. That would be the largest global recession\nthe world had ever seen. Covid-19 disrupted supply chains and economic activity a\nlittle bit and it caused a greater recession than the 2008 financial meltdown. This\nwould be without question the largest economic recession in the history of mankind.\n\nAs already mentioned above, even a relatively small disruption to UK trade could\nlead to a 6.3-9.5% loss in national income per capita. Complete autarky would\nlead to a drop that would be much higher; there are no studies that\ncalculate the impacts of such far-fetched ideas as complete autarky, but if\nhypothetically some study said that world output would contract by 5-8\ntimes more, it would not raise eyebrows.\n\nAutarky is a man-made disruption of world supply chains. The effect on the world\neconomy of a volcano or pandemic that completely shut down trade would be\nsimilar to that of politicians deciding to shut down trade.\n\n> Which countries would be most and least affected and in what ways?\n\nGeographically small countries with limited resources would be most affected.\nThey would experience economic collapse and a return to the low living standards\nthat our ancestors had in the distant past. 
Geographically large countries with\na lot of natural resources would be least affected, but they would still\nexperience a significant and, in modern times, unprecedented drop in living\nstandards.\n\n* * *\n\nAnswer to the newly edited question:\n\n> Would the increased robustness of an autarkic economy outweigh any economic\n> costs (many of these could be alleviated by gradually making the switch)\n> such as increased expenditure on finding ways to obtain certain resources\n> without trade?\n\nNo. The problem with a sudden loss of trade is not just that it is sudden; it is the\nloss of trade itself. Of course, making the process gradual would ease the\npain a little bit, just as becoming unemployed without severance is worse\nthan when there is some transitional period with severance, but in the end you\nend up unemployed anyway.\n\nTrade itself is beneficial and increases material standards of living (see for\nexample the discussion in Krugman et al International Economics: Theory and Policy\nCh 1-3). Of course, an unexpected negative shock is worse than a slow expected\nnegative shock, but removing trade still creates a negative shock.\n\nHere again we can use Brexit as a case study. Brexit did not happen overnight.\nFirst, the UK leaving the EU free trade area took 2 years after Article 50 was invoked\n(and this article was not invoked immediately after the referendum). So the UK\nleaving the EU was almost a 3-year process. In addition, it was not even a completely\nunexpected shock, as polling showed the vote was very close, so people\nalready had a chance to start preparing for this eventuality.\n\nYet as discussed above, Brexit had immediate negative effects, and what is\nmore, the research estimates that the long-term negative effects will be\neven _worse_ than the short-term ones. 
This is because trade has not just a static\npositive effect (it allows economies to produce and operate at a higher level)\nbut also dynamic effects that allow countries to progress faster (see\nKrugman et al Ch 3). So removing it (even taking the surprise factor out of\nthe equation) creates a double whammy: it reduces the production possibilities\nof the country immediately and retards the growth of the production\npossibilities of the country in the future. Since economic growth\ncompounds, this can have profound effects over time. If we have two equal\ncountries starting with a GDP per capita of 100, but one grows at 2% and the other\nat 1% per year, then over a 100-year period the faster-growing country will have a GDP\nper capita of 724.46, and the slow-growing country 270.48. By removing the\nunexpected shock you might be able to mitigate the immediate damage, but it\nwon't help mitigate much, if any, of the long-term damage, which is even higher\nthan the immediate damage.\n\nAgain, there is a philosophical debate about whether it's better to live in\nstable poverty, or to live at high standards of living with some risk that there\nmight be periods of time when those standards of living drop. However, there\nis no question that material standards of living themselves will be higher in\ntrading countries (even factoring in occasional disruptions during global\ndisasters) than in countries under autarky."} +{"query":"Differences in Differences with Small Violations of Parallel Trends\n\nI am interested (for no particular reason) in estimating a hypothetical Differences-in-Differences model with one period of treatment. However, we observe small non-linear violations of the parallel trends assumption.\n\nFor instance, consider wages for IN vs. OH before and after IN implements a new tax policy. We see that prior wages in IN are equal to wages in OH for T = \u2212\u221e,...,\u22123,\u22122,\u22121\n. 
However, we see that prior wages are \ud835\udf16>0\n higher in OH over the period \ud835\udc47=\u22125\u2212\ud835\udc58,\u22125\n, with \ud835\udc58\n some small-ish natural number. Here, parallel trends does not hold, but we might still expect to be able to observe the impacts of a change in tax policy.\n\nIs there a way to still properly estimate a Differences in Difference estimator under these situations? I imagine that we should see the same point estimate for treatment at time \ud835\udc47=0\n, with a penalty term being added to the standard errors of the point estimate which is a function of \ud835\udf16\n. That said, I have been unable to find any papers which cover this issue.\n\nRelated papers include: https:\/\/jonathandroth.github.io\/assets\/files\/HonestParallelTrends_Main.pdf\n\nHowever, as far as I can tell, that paper deals with differences in trends after treatment, while I am interested in differences in trends prior to treatment.","reasoning":"One option is to use synthetic cohort analysis, where we construct a fake control for each treated unit out of linear combinations of control units to maximize the similarity between control and treated units.","id":"31","excluded_ids":["N\/A"],"gold_ids_long":["differenceindifference\/Syntheticcontrolmethod.txt"],"gold_ids":["differenceindifference\/Syntheticcontrolmethod_38.txt","differenceindifference\/Syntheticcontrolmethod_42.txt","differenceindifference\/Syntheticcontrolmethod_39.txt","differenceindifference\/Syntheticcontrolmethod_40.txt","differenceindifference\/Syntheticcontrolmethod_41.txt"],"gold_answer":"$\\begingroup$\n\nIf you have violations of the parallel trend assumption then you will have\nbiased estimates of the causal effect (assuming you satisfy the other\nassumptions). If you know the maximum that the two series can drift apart over\ntime or have a functional form that specifies how that difference can arise,\nthen you can estimate the diff in diff in a bounded way. 
For example, if IN\nhas a trend of 2% growth per year and OH has a trend growth of 3%, then adding\na linear time variable to a log regression will result in parallel trends.\n\nYou don't need the trends to be identical. Effectively, you need them to be\nidentical in expectation (see Lechner (2011) [ The Estimation of Causal\nEffects by Difference-in-Difference Methods\n](https:\/\/core.ac.uk\/download\/pdf\/6387069.pdf) ). [ ![Common Trend as an\nexpectation from Lechner \\(2011\\)](https:\/\/i.sstatic.net\/u34uD.png)\n](https:\/\/i.sstatic.net\/u34uD.png)\n\nWhat I did in my paper, [ Competition and complementarities in retail banking:\nEvidence from debit card interchange regulation\n](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1042957318300184) , is\nlook at the trends in the pre-treatment period and test if they were\nstatistically distinguishable. [ ![testing common trend in pre\nperiod](https:\/\/i.sstatic.net\/qjmHd.jpg) ](https:\/\/i.sstatic.net\/qjmHd.jpg) .\nOf course, you can never test that the common trend in the pre-period would\npersist in the counterfactual treatment period in the absence of the\ntreatment. That can only be assumed.\n\nAnother approach is to do a [ synthetic cohort\n](https:\/\/en.wikipedia.org\/wiki\/Synthetic_control_method) analysis. Basically,\nyou construct a fake control for each treated unit out of linear combinations\nof control units to maximize the similarity between control and treated units.\nYou can do that in a way that closely or even exactly matches the pre-\ntreatment trend of the treatment and control groups."} +{"query":"What are the consequences of a single global currency on trade?\n\nSorry for the general nature of the question but I find it difficult to wrap my head around the idea of a global currency.\n\nLet's say there existed a single global currency which was not controlled by a centralized entity. All countries use this currency as their national currency. 
Let's also assume there exists some mechanism by which countries can come to a consensus about changes to the global monetary system (money supply, interest rates, etc).\n\nMy question is, is there a way to achieve price stability in such a system? Would trade get in the way of the ability to achieve price stability, since the price of goods\/services is different in each country?\n\nExcuse me if I am getting major concepts wrong. I am new to economics but extremely interested in this problem.","reasoning":"This is related to optimum currency area. We need to check how it is modeled, its applications and some discussions.","id":"32","excluded_ids":["N\/A"],"gold_ids_long":["single_currency\/Optimumcurrencyarea.txt"],"gold_ids":["single_currency\/Optimumcurrencyarea_18.txt","single_currency\/Optimumcurrencyarea_12.txt","single_currency\/Optimumcurrencyarea_5.txt","single_currency\/Optimumcurrencyarea_8.txt","single_currency\/Optimumcurrencyarea_7.txt","single_currency\/Optimumcurrencyarea_16.txt","single_currency\/Optimumcurrencyarea_13.txt","single_currency\/Optimumcurrencyarea_14.txt","single_currency\/Optimumcurrencyarea_11.txt","single_currency\/Optimumcurrencyarea_9.txt","single_currency\/Optimumcurrencyarea_17.txt"],"gold_answer":"$\\begingroup$\n\nTo be honest, I do not see the connection of the title with the question body\nbecause you ask about inflation, rather than the impact on trade (you mention\ntrade, but in the context of price stability).\n\nPaul Krugman's [ The Return of Depression Economics and the Crisis of 2008\n](https:\/\/rads.stackoverflow.com\/amzn\/click\/com\/B004EJZE3G) has, in my\nopinion, the best layman's explanation for the impacts of a single currency. I\nhighly recommend this book if you are extremely interested in this problem (or\n(monetary) economics in general).\n\nKrugman explains this with a fictional currency called globo. 
A quick summary:\n\n * Businessmen in particular like this system because they could buy and sell anywhere with a minimum of hassle \n * Careful management of the currency could prevent a boom-bust cycle for the world as a whole, but not for each country (this also applies to regions within countries, but usually to a lesser extent because of the same language, free movement of labour etc). \n\nThe other article I recommend reading is [ optimum currency area\n](https:\/\/en.wikipedia.org\/wiki\/Optimum_currency_area) . Since shocks would be\nvery different in different countries (some may grow, others shrink), prices\nand wages and\/or factor (labor) mobility must be very flexible for a country\nto regain competitiveness. A country (region) can either decrease prices\nrelative to other countries, or workers leave the country where the\nunemployment rate is high to take jobs elsewhere. That way, the unemployment\nrate in that country decreases back to normal. In summary, even under a fixed\nexchange rate, countries can adjust their real exchange rate in the medium run,\nbut these adjustments may take longer and be more painful compared to flexible\nexchange rates with independent monetary policy.\n\nEither way, I am not sure how you define price stability. If you mean low\ninflation, that is rather unrelated to trade but mainly a subject of economic\nactivity relative to money in circulation."} +{"query":"Who insures the FDIC in case it fails?\n\nI understand that the Federal Deposit Insurance Corporation (FDIC) failing is unlikely, but the probability of such a failure is still positive. In case the FDIC fails, who covers the customers' losses? 
Will the Federal Reserve surely step in to print money to cover FDIC's ass?","reasoning":"We first need to understand what the FDIC is and its relationship to the government.","id":"33","excluded_ids":["N\/A"],"gold_ids_long":["fdic\/indexhtml.txt"],"gold_ids":["fdic\/indexhtml_101.txt"],"gold_answer":"$\\begingroup$\n\nThe FDIC is a government institution, so it can only fail if the [ government itself\nlacks the funds ](https:\/\/www.fdic.gov\/resources\/deposit-\ninsurance\/faq\/index.html) .\n\nThe Fed (unlike other central banks) does not have the power to print new money. If\nthe government would like to print more money, it will be done by the [ Bureau\nof Engraving & Printing ](https:\/\/www.bep.gov\/) , which is officially part of the\nTreasury. The Fed could help by creating more money digitally by buying government\nbonds, which also creates new money, but it can't literally print it.\n\nIn case the government runs out of funds, the Fed could create more money by buying up\nUS bonds, or the Treasury could order the BEP to print more money. 
However, even though the\ngovernment never needs to default on nominal obligations, [\nhistorically ](https:\/\/en.wikipedia.org\/wiki\/List_of_sovereign_debt_crises)\ngovernments have often chosen to (at least partially) default to avoid the negative\nconsequences of excess money creation.\n\nHence, what would happen is that either the government would raise more money\nthrough monetary financing to avoid default, or the government would decide that\nit is not worth the consequences of monetary financing and would just default\non its FDIC obligations."} +{"query":"What is the refutation of this article about the FED being privately owned\n\nWhat is the refutation of this article about the FED being privately owned\n\nI find this article and I would like to know what is the refutation.\n\nPlease proper academic arguments.","reasoning":"The central question is to ask whether the Fed is private or public.","id":"34","excluded_ids":["N\/A"],"gold_ids_long":["fed\/whoownsthefederalreservebanks.txt"],"gold_ids":["fed\/whoownsthefederalreservebanks_181.txt","fed\/whoownsthefederalreservebanks_180.txt"],"gold_answer":"$\\begingroup$\n\nThere is no refutation of the claim.\n\nThe Fed is privately owned despite being a government institution.\n\nThe Fed clearly states this on its own [ website ](https:\/\/www.stlouisfed.org\/in-\nplain-english\/who-owns-the-federal-reserve-banks) :\n\n> The Federal Reserve Banks are not a part of the federal government, but they\n> exist because of an act of Congress. Their purpose is to serve the public.\n> So is the Fed private or public?\n\n> The answer is both. While the Board of Governors is an independent\n> government agency, the Federal Reserve Banks are set up like private\n> corporations. Member banks hold stock in the Federal Reserve Banks and earn\n> dividends. Holding this stock does not carry with it the control and\n> financial interest given to holders of common stock in for-profit\n> organizations. 
The stock may not be sold or pledged as collateral for loans.\n> Member banks also elect six of the nine members of each Bank's board of\n> directors.\n\nHowever, note that despite the private ownership:\n\n * the Chairman of the Fed is publicly selected by the president. \n\n * private banks are actually forced to own the Fed, have no say in how the Fed is run, and the return on the capital they are forced to invest in the Fed is also not really lucrative."} +{"query":"New financing instruments as covert leverage?\n\nBack when the Federal Reserve intervened in the commercial paper market in the 1970s, corporates essentially woke up the next morning and found they had two back-stopped modes of financing: their own paper and bank loans. It seems like it would be possible for the (large) companies to issue their own paper and then tap into their bank credit lines if they had trouble rolling the paper over for whatever reason. In effect, banks then, would be on the hook in that scenario. We could call this a covert liability, as it technically would not appear on the bank's balance sheet.\n\nQuestion\nWould this be a fair characterization of how money market instruments interact with each other and on aggregate change the supply of money, seemingly circumventing the powers of the Federal Reserve (in terms of capital adequacy and all that alphabet soup around tier 1 capital)? 
If so, has anything changed since then?","reasoning":"We need to check current Line of Credit products and see whether the situation persists.","id":"35","excluded_ids":["N\/A"],"gold_ids_long":["basel3\/115009834408RegulatoryCapitalRequirementsforLineofCreditProducts.txt"],"gold_ids":["basel3\/115009834408RegulatoryCapitalRequirementsforLineofCreditProducts_1.txt","basel3\/115009834408RegulatoryCapitalRequirementsforLineofCreditProducts_2.txt","basel3\/115009834408RegulatoryCapitalRequirementsforLineofCreditProducts_3.txt"],"gold_answer":"$\\begingroup$\n\nThings have changed- Basel III requires capital to be held against committed\nbut undrawn lines of credit [ https:\/\/support.precisionlender.com\/hc\/en-\nus\/articles\/115009834408-Regulatory-Capital-Requirements-for-Line-of-Credit-\nProducts ](https:\/\/support.precisionlender.com\/hc\/en-\nus\/articles\/115009834408-Regulatory-Capital-Requirements-for-Line-of-Credit-\nProducts)"} +{"query":"What is the name for the term\/principle that trade causes a net increase in utility\n\nPretend that I grow and sell apples and that you are a hungry person. At that point of time, having an apple is more useful to you than having money. Because you are hungry and can\u2019t eat money. However, to me having money would be more useful to me because I have more apples than I could ever use myself. 
After you buy the apple from me we are both better off.\n\nThis seems to be to be a basic economic principle but because I can describe the idea but don't have its name I don't know what to search for to read up more about that idea.","reasoning":"This is related to Gains from trade where economic agents benefit from trading with each other.","id":"36","excluded_ids":["N\/A"],"gold_ids_long":["gain_from_trade\/Gainsfromtrade.txt"],"gold_ids":["gain_from_trade\/Gainsfromtrade_10.txt"],"gold_answer":"$\\begingroup$\n\nThe term [ \"gains from trade\"\n](https:\/\/en.wikipedia.org\/wiki\/Gains_from_trade) is most commonly used for\nthis."} +{"query":"What were the benefits of creating ON RRP (versus expanding the availability of IOR)?\n\nI'm not understanding why the Fed decided to introduce their Overnight Reverse Repo Facility (ON RRP).\n\nFrom the St. Louis Fed, it helps to provide a floor for the Federal Funds Rate (FFR), a market-determined rate at which banks lend to one another.\n\nNot every financial institution that operates in the federal funds market has access to interest on reserves. So, the FFR could fall below the setting of the Interest of Reserves (IOR) rate.\n\nTo aid in the control of the level of the FFR, the Fed introduced the overnight reverse repurchase agreement (ON RRP) facility to a broad set of financial institutions.\n\nAnother option seems to be simple, giving more institutions accounts at fed banks, allowing them to also get IOR. 
Or even allow them accounts, but do not give them the same interest as depository institutions.\n\nFurther, ON RRP is no longer cash, so there must be use for the Fed to be ridding themselves of Treasuries overnight, but it's not clear why.","reasoning":"This is related to the fact that some banks are member banks while others are non-member banks.","id":"37","excluded_ids":["N\/A"],"gold_ids_long":["onrrp\/2326853pdfrefreqidfastlydefault3A1e0b8cbcb0976b37cc46b0f2f97e70ababsegmentsorigininitiatoracceptTC1.txt"],"gold_ids":["onrrp\/2326853pdfrefreqidfastlydefault3A1e0b8cbcb0976b37cc46b0f2f97e70ababsegmentsorigininitiatoracceptTC1_1.txt"],"gold_answer":"$\\begingroup$\n\nSome banks are \"non-member banks\". A non-member bank is not subject to the\nrequirements of a member bank of the Federal Reserve system.\n\n[ Here is a link to a reference from the Journal of Finance, June 1975.\n](https:\/\/www.jstor.org\/stable\/2326853)\n\nIt says reserve requirements for nonmember banks are less onerous and\nreporting requirements for nonmember banks in most states are \"less\nrestrictive\"."} +{"query":"How are stock prices determined in the following cases?\n\nI looked at this question already. I know there is an order book with bid and ask and that the price is updated when a match occurs. But I have two questions:\n\nWhat happens when the bid is higher than the ask? For example someone is ready to pay $101 per share for 100 shares and someone wants to sell 100 shares at $100. What will be the new price?\n\nWhat if there are multiple matches at an instance? Let's say we have someone wanting to buy 100 shares at $100 and someone wanting to sell 100 shares at $100. We also have someone wanting to buy 500 shares at $110 and someone wanting to sell 500 shares $110. 
What is the new price?","reasoning":"The stock price is about matching the prices of exchange pairs.","id":"38","excluded_ids":["N\/A"],"gold_ids_long":["matching_price\/matchingordersasp.txt"],"gold_ids":["matching_price\/matchingordersasp_3.txt","matching_price\/matchingordersasp_2.txt","matching_price\/matchingordersasp_4.txt","matching_price\/matchingordersasp_5.txt"],"gold_answer":"$\\begingroup$\n\nAs in the answer [ here\n](https:\/\/economics.stackexchange.com\/questions\/24729\/how-are-final-stock-\nprices-arrived-at) (which you referred to yourself), the price of a stock is\nthe price the stock was last traded at (until that price is updated because a\nnew trade happens). A trade occurs if a bid and ask are matched.\n\nThe matching relies on a double ordering. The principle to remember here is\n\"buy low, sell high\". Also, remember that the ask is the _minimum_ price a\nwould-be seller is happy to sell, while the bid is the _maximum_ price a\nwould-be buyer is happy to buy. The buy and sell offers are ordered in the\nfollowing way:\n\n * **Asks** : lowest first, highest last, then in the order they were submitted. \n * **Bids** : highest first, lowest last, then in the order in which they were submitted. \n\nThis is called **price-time-priority matching** (see [ here\n](https:\/\/www.investopedia.com\/terms\/m\/matchingorders.asp#:%7E:text=Under%20a%20basic%20FIFO%20algorithm,order%20at%20a%20lower%20price.)\n). Many exchanges use a variant of that procedure with potentially some\nextensions. An important exception is the NYSE, which has a pro-rate system on\ntop (see [ here ](https:\/\/www.nyse.com\/article\/parity-priority-explainer) ),\nbut this is not relevant for your cases.\n\n> What happens when the bid is higher than the ask? For example, someone is\n> ready to pay \\$101 per share for 100 shares, and someone wants to sell 100\n> shares at \\$100. 
What will be the new price?\n\nThe matching in this case depends on the order in which instructions were\nsubmitted and on the _best price rule_ , which says that whoever submits last\ngets the best available price. One motivation for that rule is that agents\nshouldn't be punished for submitting limit orders. After all, if that\nprinciple were not in place, agents who act last could instead put in market\norders (without a limit), where a buyer immediately buys at the lowest ask,\nand a seller sells at the highest bid, which may result in a better deal for\nthe agent last to act. If not in place, you wouldn't get many limit orders, if\nany at all. So this rule favors market liquidity.\n\nApplying these rules, if the sell offer at \\$100 was there first, the buyer,\neven though she is happy to pay \\$101, will only pay 100. Conversely, if the\nbuyer places the order first, the seller will sell for 101. So, depending on\nthe order of submission, the resulting price will be either 100 or 101.\n\n> What if there are multiple matches at an instance? Let's say we have someone\n> wanting to buy 100 shares at \\$100 and someone wanting to sell 100 shares at\n> $100. We also have someone wanting to buy 500 shares at \\$110 and someone\n> wanting to sell 500 shares \\$110. What is the new price?\n\nAgain, the matching depends on the double ordering mentioned above and the\nbest price principle. I won't go through all possible cases, but provide a few\n**illustrations** :\n\n * Bid = 500@\\$110. The seller comes in with 100@\\$100. The trade is executed at the best price (for the seller) at \\$110. (if they submit in the reverse order, the trade will happen at a price of \\$100) \n\n * Bid = 100@\\$100, Ask = 500@\\$110. There is no match. Ask = 100@\\$100 comes in, \"jumps the queue,\" and 100 shares are traded at \\$100. \n\n * Bid = 100@\\$100, Ask = 500@\\$110. There is no match. A new bid comes in with 500@\\$110. 
It jumps the bid queue and is matched with the ask, and the trade happens at a price of \\$110. \n\nNote that time-stamping these days is very accurate, to within one millisecond\n(see for example [ here\n](https:\/\/www.npl.co.uk\/getattachment\/5b94a9f8-bd79-478e-ba25-8fc56f51be33\/NPLTime_Complete_Guide.pdf.aspx?lang=en-\nGB&ext=.pdf) ). For any confusion to occur, orders would have to share the\nsame time-stamp and precisely the same price."} +{"query":"What's wrong with this argument that the fed's OMOs don't change the money supply\n\nIs there anything facially stupid about the following argument that fed open market operations don't affect the money supply? And if not, are there any professional economists who have made this argument?\n\nWhen the fed buys or sells treasury bonds they are not changing the broader stock of highly liquid assets that include treasury bonds (is that m3?), since they are just exchanging one of the components (cash) for another (govt bonds). Since there are no reserve requirements for commercial banks -- only capital requirements -- this doesn't actually affect the rate of credit creation since a bank that exchanges a dollar of central bank reserves for a dollar of us treasury debt is just as well capitalized as it was before.","reasoning":"We need more information about the model\/framework of the liquidity coverage ratio, and how it affects monetary policy.","id":"39","excluded_ids":["N\/A"],"gold_ids_long":["omo_money_supply\/rqt1212gpdf.txt"],"gold_ids":["omo_money_supply\/rqt1212gpdf_8.txt","omo_money_supply\/rqt1212gpdf_9.txt","omo_money_supply\/rqt1212gpdf_13.txt","omo_money_supply\/rqt1212gpdf_11.txt","omo_money_supply\/rqt1212gpdf_10.txt","omo_money_supply\/rqt1212gpdf_12.txt"],"gold_answer":"$\\begingroup$\n\nM3 isn't even published anymore in the US (it did not include bonds though).\n\nAs Alex noted, the LCR is entirely unaffected if the FED purchases government\nbonds because both are level 1. 
See for example this [ BIS paper\n](https:\/\/www.google.com\/url?sa=t&source=web&rct=j&url=https:\/\/www.bis.org\/publ\/qtrpdf\/r_qt1212g.pdf&ved=2ahUKEwjXiYCB7Lf5AhXAYPEDHX1lAkgQFnoECA8QAQ&usg=AOvVaw2c0uqFq3624W07Ups0mj1A)\non P.58 for a simple example demonstrating this.\n\nMoreover, the Fed cannot control the federal funds\n\n> rate through routine changes in the quantity of reserves, also known as open\n> market operations (OMO)\n\nas copy pasted from [ this FED note about the ample reserves regime\n](https:\/\/www.federalreserve.gov\/econres\/notes\/feds-notes\/implementing-\nmonetary-policy-in-an-ample-reserves-regime-the-basics-\nnote-1-of-3-20200701.htm) . This is a direct effect of the ample reserves\nregime, where the [ reserve requirement ratio is zero\n](https:\/\/www.federalreserve.gov\/monetarypolicy\/reservereq.htm#:%7E:text=As%20announced%20on%20March%2015,requirements%20for%20all%20depository%20institutions)\n.\n\nGenerally, OMO can mean a lot of things (purchase of all sorts of securities)\nbut (ignoring the title), I think the main argument is about the purchase of\ngovernment bonds? For short, this does neither affect the interest rate (in an\nample reserves regime), nor the liquidity ratio (LCR) \/ capital requirement of\na bank. The latter statement requires that the central bank is purchasing a\nhighly liquid (level 1) asset though (which US government bonds are).\n\nInsofar, as written in your post, a purchase of government bonds by the FED\nhas\n\n> no direct impact on credit creation,\n\nat least not via the LCR or interest rate channel. I am inclined to believe\nthat this was the argument made by whoever you heard this from. 
I am less\nconvinced they mentioned money supply because traditionally the monetary base\nis defined as currency in circulation plus reserve balances and while there\nare more liquid types, none include bonds in the US in any case (there used to\nbe M4 and L which included T-bills but these were stopped long ago).\n\nLastly, the implementation of monetary policy has evolved considerably and\nrepeatedly since the financial crisis. The FED also does not (anymore) target\nmonetary aggregates when conducting monetary policy. I believe reading [ this\nanswer ](https:\/\/economics.stackexchange.com\/a\/49858\/40033) might be\ninteresting, although many others have different opinions about this subject.\n\n**Edit**\n\nCommercial bank credit is NOT part of M1. When a bank provides a loan, the\nborrower receives a deposit. From the perspective of commercial banks balance\nsheets, this increases credits on the assets side and customer deposits on the\nliabilities side. Most of the times (with mortgages it is frequently directly\nsent to seller of the house \/ or mortgage notary's account) this deposit is\nwithdrawn quickly and usually sent to another bank (unless the seller's bank\naccount is at the same bank as the buyer's). Therefore, an individual\ncommercial bank cannot generate lasting increases in its deposits by granting\nloans.\n\nHowever, the banking system as a whole does see an increase in deposits (thus\nmoney supply) despite constant amounts of central bank money. Leaving\npotential reserve requirements aside, the bank will still have risk\/return\nconsiderations to take into account when providing loans. From an asset\nliability management perspective, loans are long-term claims on the assets\nside, while sight deposits are typically liquid and short-term liabilities. In\nessence, that is one of the main reasons justifying commercial banking - it's\nan intermediary that brings together investors and savers despite their\ndiverging requirements. 
Banks can offer this service because they can\ndiversify the credit and liquidity risks better than individuals.\n\nFor risk management, banks need to consider factors such as current and future\ninterest rate on loans and deposits, the likelihood of deposit withdrawals and\ncredit defaults and the like. On top of this, banks are tightly regulated.\nBanks need to follow the standards of Basel III, which consist of 3 Pillars\n\n * Pillar 1: Capital requirements ( [ Capital Ratio ](https:\/\/en.wikipedia.org\/wiki\/Capital_adequacy_ratio) , [ Leverage Ratio ](https:\/\/en.wikipedia.org\/wiki\/Basel_III#Leverage_ratio) ,..), Credit risk, Market risk, [ Operational Risk ](https:\/\/en.wikipedia.org\/wiki\/Operational_risk) as well as [ LCR & NSFR ](https:\/\/www.ecb.europa.eu\/pub\/financial-stability\/macroprudential-bulletin\/html\/ecb.mpbu201910_2%7E3237802727.en.html) . \n * Pillar 2: mainly supervision like the Supervisory Review Evaluation Process ( [ SREP ](https:\/\/www.bankingsupervision.europa.eu\/banking\/srep\/html\/index.en.html#:%7E:text=This%20activity%20is%20called%20the,supervisory%20measures%20to%20be%20taken.) , [ ICAAP & ILAAP ](https:\/\/www.eba.europa.eu\/guidelines-on-icaap-and-ilaap-information) ,... \n * Pillar 3: ... \n\nLong story short, banks cannot simply give away loans without end. Also,\ncredit creation by banks relies on a lot more details than the traditional\nreserve requirement (money multiplier) argument suggests. Specifically, it is\nnot just the consideration of outflows (of deposits) and inflows being spread\nover time and normally only being a fraction of total deposits (which is where\nthe name _fractional reserve banking_ originated, as only a fraction of\ncustomer deposits have to be covered all the time). 
Since deposits created by\nthe banking system belong to the banks\u2019 customers, the main driving force\nbehind credit creation is customers, not banks themselves.\n\nIn fact, the FED does not even aim to affect the broad money supply. St. Louis\nFED research claims it's best to forget what we learnt about money aggregates\nin [ undergrad Econ\n](https:\/\/research.stlouisfed.org\/publications\/page1-econ\/2021\/09\/17\/teaching-\nthe-linkage-between-banks-and-the-fed-r-i-p-money-multiplier) . The so-called\n_dual-mandate_ aims to (somewhat interestingly) promote three (not two) goals:\nmaximum employment, stable prices, and moderate long-term interest rates.\n\nNone of these goals mentions the money supply. Also, [ New York FED - Money\nSupply ](https:\/\/www.newyorkfed.org\/aboutthefed\/fedpoint\/fed49.html) states\nthat\n\n> In 2000, when the Humphrey-Hawkins legislation requiring the Fed to set\n> target ranges for money supply growth expired, the Fed announced that it was\n> no longer setting such targets, because money supply growth does not provide\n> a useful benchmark for the conduct of monetary policy.\n\nand [ FED - what is money supply\n](https:\/\/www.federalreserve.gov\/faqs\/money_12845.htm) claims that\n\n> Over recent decades, however, the relationships between various measures of\n> the money supply and variables such as GDP growth and inflation in the\n> United States have been quite unstable. As a result, the importance of the\n> money supply as a guide for the conduct of monetary policy in the United\n> States has diminished over time\n\n[ This answer ](https:\/\/economics.stackexchange.com\/a\/49858\/40033) has a few\ncharts illustrating that total money supply is not really directly impacted by\nmonetary policy. 
For example, normalizing each series to Dec 2005 = 100\n(monetary base - [ BOGMBASE ](https:\/\/fred.stlouisfed.org\/series\/BOGMBASE) , [\nM1 ](https:\/\/fred.stlouisfed.org\/series\/WM1NS) - be careful, the definition\nchanged in May 2020 if you look at the data on FRED, [ M2\n](https:\/\/fred.stlouisfed.org\/series\/WM2NS) ), the data looks like this (the\ncode to replicate this is in the link; QE stands for quantitative easing)\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/muKZf.png)\n](https:\/\/i.sstatic.net\/muKZf.png)\n\nThe dual-mandate got its name from employment and the price level, because\nunder [ full employment ](https:\/\/en.wikipedia.org\/wiki\/Full_employment) (not\nzero unemployment) and stable prices, interest rates settle at moderate levels.\nFor details, you can read [ Frederic S. Mishkin's (2007) speech at Bridgewater\nCollege\n](https:\/\/www.federalreserve.gov\/newsevents\/speech\/mishkin20070410a.htm) .\n\nTo summarize, the main goals of the FED are to keep prices stable and to\npromote full employment (closely related to a healthy economy \/ GDP). However,\nas just seen, there is only a weak relationship between measures of the money\nsupply and GDP growth and inflation. Therefore, the central bank focuses on\ninterest rates, not only the FED Funds market, but also longer term interest\nrates. Changing interest rates has a direct impact on demand for credit (by\ncustomers) and the risk metrics of banks (current and future interest rates on\nloans and deposits, the likelihood of deposit withdrawals and credit defaults,\nBasel III calculations, ...).\n\nAs written above, monetary policy has evolved considerably. For example, in\nthe EUR area, so-called Targeted longer-term refinancing operations (TLTROs)\naim to offer commercial banks attractive long-term funding conditions. 
To\nstimulate lending to the real economy, the interest rate is negative and in a\nnutshell, the more loans participating banks issue to non-financial\ncorporations and households (except loans to households for house purchases),\nthe more attractive (the more negative) the interest rate. While ultimately\nthere needs to be demand from customers, these favourable funding conditions\nmake it easier (cheaper) to offer loans."} +{"query":"How can the exchange rate be virtually constant with major inflation rate difference and a widening trade deficit?\n\nI have recently had a discussion with a colleague about what seems to be a rather strange macroeconomic aspect in Romania (which has RON currency):\n\nEUR-RON exchange rate has been virtually a flat line in the past year\nRomania's Balance of Trade widened to an almost record value\nEUR annual inflation rate is about 9% now\nRON annual inflation rate is almost double now\nAll these factors seem to put pressure on RON to lose value when compared to EUR and yet it is rather constant for quite a while.\n\nThe only factor I know is very different between EUR and RON is the base interest rate and I guess is acting in the opposite direction:\n\nEURIBOR is barely positive\nROBOR 3M is more than 8%\nWhat could maintain such a robust exchange rate for the EUR-RON pair? Namely, how can RON not lose value in this context for a rather long time (many months)? I am interested in a mostly qualitative answer.","reasoning":"The inflation rate differences and bank overnight rate differences seem consistent with uncovered interest rate parity, which would not incentivize movement in the FX markets. 
We can check the equations and approximations of Uncovered interest rate parity.","id":"40","excluded_ids":["N\/A"],"gold_ids_long":["interest_rate_parity\/InterestrateparityUncoveredinterestrateparity.txt"],"gold_ids":["interest_rate_parity\/InterestrateparityUncoveredinterestrateparity_11.txt","interest_rate_parity\/InterestrateparityUncoveredinterestrateparity_9.txt"],"gold_answer":"$\\begingroup$\n\nYou are right, all your points suggest that there is downward pressure on RON\n(it should depreciate). Also, (un)covered interest rate parity would suggest\nthat RON should depreciate!\n\nThe [ UIP equation\n](https:\/\/en.wikipedia.org\/wiki\/Interest_rate_parity#Uncovered_interest_rate_parity)\nlooks like this:\n\n$$(1+i_{\\\\\\\\\\$})={\\frac {E_{t}(S_{{t+k}})}{S_{t}}}(1+i_{c})$$\n\nor rearranged:\n\n$${{S_{t}}}\\frac {(1+i_{\\\\\\\\\\$})}{(1+i_{c})} = E_{t}(S_{{t+k}})$$\n\n**EDIT**\n\nUIP is traditionally about expected (future) exchange rates. You can\ntechnically solve to imply spot, but that would require you to know the future\nexpected FX rate. Since one observes (or say at the beginning of the year\nobserved) spot, and has interest rate data, one typically uses this\nrelationship to compute the expected FX rate. Either way, you can turn the\nequation around as much as you want, the higher interest rate country's\ncurrency is expected to depreciate over time.\n\nI did not use the OP data because I copy pasted this example (and swapped USD\nfor RON). Also, the data provided in the question is insufficient. You can\nfind current spot, that EURIBOR is $\\approx 0 \\%$ and ROBOR 3m is $>8 \\%$\n. In real applications (say Bloomberg's [ ` FXFA `\n](https:\/\/quant.stackexchange.com\/q\/41799\/54838) ), one would use the\nassociated interest rates for a tenor (maturity \/ date in the future) from\nbootstrapped Swap curves and cross currency basis curves as opposed to simply\nspot rates. 
Ultimately, for the purpose of the question, all that matters is\nwhat happens to the higher interest rate currency relative to the other. To\nmake it a bit closer to the OP data, I will use the approximate values.\n\nFor EURRON (how many RON per EUR, say 4.8931 at the time of writing according\nto the [ Source ](https:\/\/www.investing.com\/currencies\/eur-ron-chart) provided\nin the question, if the RON interest rate is 8% and the EUR rate is 0% you get\nthe value of (assuming for a full year, to avoid having to compute year\nfractions for the interest rate),\n\n$${4.8931}*\\frac {(1+0.08)}{(1+0.0)} \\approx 5.284548$$\n\nIn other words, you need more RON per EUR - the RON depreciated, EUR\nappreciated.\n\nThe image below is from a [ Quant SE answer\n](https:\/\/quant.stackexchange.com\/a\/67940\/54838) and shows the Turkish\ninflation rate as well as the USDTRY exchange rate.\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/utNqb.png)\n](https:\/\/i.sstatic.net\/utNqb.png)\n\nAs you can see, the currency of the country with the higher inflation (Turkey)\ntends to depreciate against the currency of the country (US) with lower\ninflation rates (also Turkey has higher interest rates compared to the US).\n\n**Why RON does not depreciate?**\n\nIn the case of RON, there are a few aspects, like forex inflows generated by\ndiaspora (especially during the summer season where lots of people return\nhome). 
Also, interest rates [ did increase\n](https:\/\/tradingeconomics.com\/romania\/interest-rate) relative to the EUR zone\nin the last few months, which will result in immediate upward pressure\n(potentially counterbalancing the general downward pressure), just like the\nUSD appreciated against the EUR in the last months (the US also increased\nrates).\n\n * on May 10th, 2022: the central bank of Romania raised its key monetary policy rate by 75bps to 3.75% \n * on July 6th, 2022: raised by 100bps to 4.75% \n * on August 5th, 2022: raised by 75bps to 5.5% \n\nHowever, the biggest factor is likely (direct) intervention by Romania's\ncentral bank (NBR). NBR's governor Mugur Isarescu [ explained\n](https:\/\/www.economica.net\/isarescu-euro-va-ramane-sub-5-lei-atat-cat-\ntrebuie-nu-dai-drumul-la-un-instrument-de-control-cand-ai-razboiul-la-usa-\nincercam-sa-trecem-de-inflatie-fara-\nrecesiune_604421.html?mc_cid=65b2f26dba&mc_eid=607bd7d763) that the monetary\nauthority would remain active and not leave the national currency to weaken\nbelow RON 5 to EUR. The statement alone may help the RON exchange rate (even\nif there were no actual intervention by the NBR) as long as the central bank is\ncredible, because market participants may not try to bet against the RON if it\nis seen as an uphill fight against a central bank. A good example of a strong\ncentral bank being able to fight off speculative attacks is the Hong Kong Monetary\nAuthority's history of being able to [ maintain the peg\n](https:\/\/www.reuters.com\/article\/uk-hongkong-protests-currency-explainer-\nidUKKCN1VJ0WH) .\n\nThe article about Isarescu is in Romanian, but Google Translate usually works\nwell. The main quote of interest is at the very beginning:\n\n> The EURO will remain below 5 LEI as long as necessary. 
You don't let go of a\n> control tool when you have war at your door.\n\n[ NBR's inflation report\n](https:\/\/www.bnro.ro\/DocumentInformation.aspx?idInfoClass=6896&idDocument=40072&directLink=1)\n(PDF download) states on P.31 that\n\n> the stability of the EUR\/RON exchange rate was an essential concern\n> throughout this period.\n\n[ ING FEB 2022 ](https:\/\/think.ing.com\/snaps\/ce4-fx-how-tolerant-are-cbs-to-\nfx-weakness) mentions that\n\n> the increased turnover around 4.95 suggests official offers have been\n> protecting the leu.\n>\n> in the base case the NBR will most likely try to keep the FX rate stable\n> through a combination of FX interventions and spiking carry rates if needed.\n\n[ Erste Group Research\n](https:\/\/www.erstegroup.com\/de\/research\/report\/en\/SR263009) also states that\n\n> the NBR is trying to curb the slide of the national currency\n\nbut mentions that depreciation pressures should persist.\n\n**A bit of context**\n\nA common misconception is that higher interest rates will lead to an\nappreciation in the future, but in fact, UIP claims the opposite: any higher\ninterest in one country will be offset by a depreciation in that country's\ncurrency, so that an investor will be equally well off. In other words, it\ndoesn't matter where you invest.\n\nForwards follow covered interest rate parity. Uncovered interest rate parity\nis the same arbitrage condition but unhedged with FX forwards (hence\nuncovered). If you have access to Bloomberg, you can look at ` FRD ` for the\nformer, and ` FXFA ` for the latter.\n\nFRD:\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/yUuzC.png)\n](https:\/\/i.sstatic.net\/yUuzC.png) [ ![enter image description\nhere](https:\/\/i.sstatic.net\/I0yVt.png) ](https:\/\/i.sstatic.net\/I0yVt.png)\n\n$\\color{blue}{SP}$ stands for Spot, Pts (Points, the most common way of\nquoting FX forwards) represent what is added to Spot, and Fwds are the forwards\ncomputed from quoted Spot and Pts. 
The darker values at the bottom indicate\nthat these are implied (computed) from interest rates via no arbitrage. This\nis done when there aren't enough liquid market quotes. You can see that a\nsubstantial depreciation of RON relative to EUR is quoted \/ implied at the\ntime of writing. FXFA is the tool to compute this systematically.\n\nFXFA: Interest rates are deliberately chosen to follow your EURIBOR \/ ROBOR\nexample. [ ![enter image description here](https:\/\/i.sstatic.net\/RPXpI.png)\n](https:\/\/i.sstatic.net\/RPXpI.png) Note that you see the warning ` Mismatched\nyield curves ` because these are not the curves conventionally used, but\nmarket standards do not matter for the purpose of this question, and the next\nscreenshot from the results section shows that the implied forward is not even\nthat different from the actual market quoted forward. The column 7) FX Swap\ncorresponds to the market quote, which combined with yield curves (column 8)\nEUR Yield and 9) RON Yield) can be used to compute FX Swap implied (the\nhighlighted column). The column with red and green values shows the difference\nbetween the market quoted forwards and implied forwards. [ ![enter image\ndescription here](https:\/\/i.sstatic.net\/BRjLm.png)\n](https:\/\/i.sstatic.net\/BRjLm.png)\n\nBottom line, these BBG screens should just help to showcase that (un)covered\ninterest rate parity also suggests that RON should depreciate.\n\n` Shouldn't higher interest rates lead to an appreciation in that currency\nbecause it is more favourable? ` Usually higher interest rates (if\nunanticipated) will result in the higher-rate currency appreciating very\nquickly (if it was fully anticipated, that effect happened already before the\nhike). 
This is the basis of the so-called [ overshooting models\n](https:\/\/en.wikipedia.org\/wiki\/Overshooting_model) that were developed to\nexplain the excess volatility puzzle (typically, FX volatility exceeds that of\nthe underlying economic fundamentals substantially). Since FX reacts immediately but\ngoods prices are delayed, the spot rate must overshoot its value in the short\nrun, followed by a depreciation (UIP) afterwards.\n\nOvershooting models are also called [ sticky price monetary models\n](https:\/\/www.minneapolisfed.org\/research\/sr\/sr277.pdf) . They combine capital\nmarkets, goods markets and money markets. [ Dornbusch\n](https:\/\/www.mit.edu\/%7E14.54\/handouts\/dornbusch76.pdf) (1976) was the first\nto develop this theory."} +{"query":"Why did the Federal reserve balance sheet capital drop by 32% in Dec 2015?\n\nHere's a graph of the capital on the Federal reserve balance sheet from 2003 until present:\n\nhttps:\/\/fred.stlouisfed.org\/series\/WCTCL\n\nCapital dropped by 32% in December 2015.\n\nIs there anywhere I can read about why the capital dropped so dramatically at that point?\n\nAnd perhaps why it has not changed much since then?","reasoning":"This may be related to the enforcement of some act. 
We can check the factors affecting reserve balances.","id":"41","excluded_ids":["N\/A"],"gold_ids_long":["fed_reserve_balance\/20151231.txt"],"gold_ids":["fed_reserve_balance\/20151231_6.txt","fed_reserve_balance\/20151231_7.txt","fed_reserve_balance\/20151231_3.txt","fed_reserve_balance\/20151231_2.txt","fed_reserve_balance\/20151231_5.txt","fed_reserve_balance\/20151231_1.txt","fed_reserve_balance\/20151231_8.txt"],"gold_answer":"$\\begingroup$\n\nThe Fixing America's Surface Transportation Act (FAST), which was enacted on\nDecember 4, 2015, requires that aggregate Federal Reserve Bank surplus not\nexceed \\$10 billion.\n\nThe amounts of the line items \"Other liabilities and capital\" on table 1, and\n\"Surplus\" on tables 5 and 6 reflect the payment of approximately \\$19.3\nbillion to Treasury on December 28, 2015, which was necessary to reduce\naggregate Reserve Bank surplus to the \\$10 billion limitation in the FAST Act.\n\n[ Source: Factors Affecting Reserve Balances - H.4.1\n](https:\/\/www.federalreserve.gov\/releases\/h41\/20151231\/)"} +{"query":"How to pass long lag of unobserved into Kalman filter\n\nI am trying to replicate a multivariate filter for potential output from a paper.\n\nI have already understood which variables in the model are observed and which are unobserved. This model should be rewritten in the state space model setup in order to be estimated. However, in the formula for long-run GDP there is NAIRU that is lagged for 20 quarters (see picture below).\n\nformula for long-run GDP\n\nBoth NAIRU in period t and NAIRU in period t-20 are unobserved, which means that they should enter the vector of unobserved variables.\n\nIn this case I do not understand how we are able to formulate the transition matrix, since we have NAIRU(t) and NAIRU(t-19) on the LHS of the transition equation and NAIRU(t-1) and NAIRU(t-20) on the RHS.\n\nWhat coefficients should I put into the row of the transition matrix for NAIRU(t-19)?\n\nI want to estimate this model by Bayesian sampling. 
This means that I am able to sample coefficients for the transition matrix and pass them to the Kalman filter.","reasoning":"We may need to preserve the identities of the lags in the transition matrix. To fit the specific case, it is ideal that the state space representation can be modified to include an arbitrary number of quarterly variables.","id":"42","excluded_ids":["N\/A"],"gold_ids_long":["kalman_filter\/ssrn.txt"],"gold_ids":["kalman_filter\/ssrn_37.txt","kalman_filter\/ssrn_36.txt","kalman_filter\/ssrn_38.txt","kalman_filter\/ssrn_34.txt"],"gold_answer":"$\\begingroup$\n\nThe \"trick\" is to preserve the identities of the lags in the transition\nmatrix. In period $t$ , you have (for some state variable $x$ ) $$ x_t =a\nx_{t-1} \\\\\\ x_{t-1} = x_{t-1}\\\\\\ x_{t-2} = x_{t-2}\\\\\\ \\cdots $$ So, the\ncorresponding block of the transition matrix becomes: $$ \\begin{bmatrix} x_t\n\\\\\\ x_{t-1}\\\\\\ x_{t-2}\\\\\\ \\vdots\\\\\\ x_{t-20}\\\\\\ \\end{bmatrix} =\n\\begin{pmatrix} a & 0 & \\cdots && 0\\\\\\ 1 & 0 & \\cdots && 0\\\\\\ 0 & 1 & 0 \\cdots\n&& 0\\\\\\ \\vdots & & \\ddots && \\vdots\\\\\\ 0 & \\dots & 0 & 1 & 0 \\end{pmatrix}\n\\begin{bmatrix} x_{t-1} \\\\\\ x_{t-2}\\\\\\ x_{t-3}\\\\\\ \\vdots\\\\\\ x_{t-21}\\\\\\\n\\end{bmatrix} $$ The $a$ coefficient captures the one-period transition for\n$x$ from $t-1$ to $t$ . This coefficient may or may not be estimated.\nNote that lag 21, which you don't want, has a zero coefficient and is not\ncarried along.\n\nSee, for example, appendix B in [ Banbura, Marta and Giannone, Domenico and\nReichlin, Lucrezia, Nowcasting (November 30, 2010). ECB Working Paper No. 1275\n](https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=1717887)"} +{"query":"Why do E85 prices track gasoline prices?\n\nIt's easy to understand why, as the Ukraine war disrupts global oil markets, gasoline prices are rising. What puzzles me is that E85, which is at most 49% gasoline, rises by the same amount. 
It seems that it would be affected at most half as much as gasoline. Do we import a lot of ethanol from Russia?\n\nI'm curious to know if there's a valid economic basis for this that doesn't involve price gouging.","reasoning":"The similar rise may be related to the energy return in making ethanol.","id":"43","excluded_ids":["N\/A"],"gold_ids_long":["e85_oil\/es05.txt"],"gold_ids":["e85_oil\/es05_8.txt"],"gold_answer":"$\\begingroup$\n\nIn the U.S., most ethanol is corn ethanol. [ Research indicates\n](https:\/\/doi.org\/10.1021\/es052024h) that corn ethanol has an [ energy return\non investment ](https:\/\/en.wikipedia.org\/wiki\/Energy_return_on_investment)\nratio of between 0.84 and 1.65. This means, basically, that it takes almost as\nmuch energy to make a gallon of ethanol as that gallon contains -- you're\neither losing 16% in the process, or gaining 65%. Compare this to conventional\noil, with a ratio above 18.\n\nSo the question isn't really about why ethanol prices track gasoline, but why\nenergy prices track gasoline -- which of course is easier to answer."} +{"query":"How to view GDP as a network graph?\n\nI was looking at the GDP by industry data at https:\/\/apps.bea.gov\/iTable\/iTable.cfm?reqid=150&step=2&isuri=1&categories=gdpxind\n\nI'm curious how to view all these industries in a network.\n\nFor example, someone who owns a rental property may use their rental income to buy a car. So this would contribute to an arrow from the \"real estate\" industry to the \"motor vehicle and parts\" industry.\n\nSomeone who receives a social security payment may go out to eat at a restaurant, the restaurant uses that money to pay a loan, etc.\n\nSo all these sectors would have arrows to and from each other.\n\nHas anyone done research to look at this or produce some graph or visualization? 
I am curious what the sources and drains are in this graph or what kind of loops there are.","reasoning":"This is related to input-output models, which represent the interdependencies between different sectors. The question is to find related information, e.g., derivation, examples, etc.","id":"44","excluded_ids":["N\/A"],"gold_ids_long":["gdp_network\/InputE28093outputmodel.txt"],"gold_ids":["gdp_network\/InputE28093outputmodel_22.txt","gdp_network\/InputE28093outputmodel_15.txt","gdp_network\/InputE28093outputmodel_14.txt","gdp_network\/InputE28093outputmodel_13.txt","gdp_network\/InputE28093outputmodel_23.txt","gdp_network\/InputE28093outputmodel_11.txt","gdp_network\/InputE28093outputmodel_12.txt","gdp_network\/InputE28093outputmodel_10.txt","gdp_network\/InputE28093outputmodel_20.txt","gdp_network\/InputE28093outputmodel_18.txt","gdp_network\/InputE28093outputmodel_19.txt","gdp_network\/InputE28093outputmodel_21.txt","gdp_network\/InputE28093outputmodel_16.txt","gdp_network\/InputE28093outputmodel_17.txt"],"gold_answer":"$\\begingroup$\n\nSeems like you are looking for [ Input-output models\n](https:\/\/en.wikipedia.org\/wiki\/Input%E2%80%93output_model) .\n\n> In economics, an input\u2013output model is a quantitative economic model that\n> represents the interdependencies between different sectors of a national\n> economy or different regional economies.\n\nThe representation is numerical, not graphical, but other than this, it\nseems to be what you want."} +{"query":"How is Elon Musk's free Tesla charge a sustainable business model?\n\nI came across this video, where Elon says charging a Tesla car at a Tesla supercharge station is free and will always be free.\n\nHow is this a sustainable business model? Where will the money for electricity come from?","reasoning":"When companies give away a product or service at a loss, they usually do it to promote the sale of complementary goods. 
We can check more details about complementary goods.","id":"45","excluded_ids":["N\/A"],"gold_ids_long":["tesla_free_charging\/Complementarygood.txt"],"gold_ids":["tesla_free_charging\/Complementarygood_42.txt","tesla_free_charging\/Complementarygood_40.txt","tesla_free_charging\/Complementarygood_45.txt","tesla_free_charging\/Complementarygood_39.txt"],"gold_answer":"$\\begingroup$\n\n**It's hard to know how to answer this question as stated, without a lot more\ninformation about basic parameters** (as of 2022-04-28, 10:45 AM CDT).\n\n * When companies give away a product or service at a loss, they _usually_ do it to promote the sale of [ complementary goods ](https:\/\/en.wikipedia.org\/wiki\/Complementary_good) . The classic example of this would be [ Gillette's longstanding business practice of giving away safety-razor handles in order to sell their replaceable safety-razor blades ](https:\/\/www.bbc.com\/news\/business-39132802) . (Caveat: [ Randal Picker argues that this actually has a more complicated business history than is commonly understood ](https:\/\/hbr.org\/2010\/09\/gillettes-strange-history-with) ; but Gillette certainly has given away a lot of free safety-razor handles, and even started mailing them out unsolicited using marketing lists. I got one in the mail myself when I turned 18, back in the 1990s.) \n\nWhen this is sustainable, the money to do it typically comes from the sale\nprice of the complementary good(s), when and if this is enough to cover losses\nfrom the giveaway. 
The profit in it comes from the shift in demand, if this\nincreases the quantity of the complement that can be sold or the price at\nwhich it can be sold.\n\n * In economics jargon, the strategy is a particular form of a [ two-part tariff ](https:\/\/en.wikipedia.org\/wiki\/Two-part_tariff) : there is a lump-sum entry fee for using the composite product (the fixed cost of the razor handle, or the printer, or the Tesla car), and a variable cost for usage (the cost of replaceable blades, or ink cartridges, or electrical charging stations). The basic strategy here is to try to take a loss on the revenue from one of the two parts of the composite product, and subsidize it by increasing revenue from the other. \n\nIn some sense, Tesla's strategy here works in the opposite direction from\nGillette's. Gillette lowers the fixed cost of the razor-blade handle to 0, and\nthen covers the cost from the increased revenue that they make from the\nvariable usage cost (selling new replaceable blades). As you've described it,\nTesla's strategy seems to be to lower to 0 the _variable_ cost of discharging\nyour Tesla car (by driving it around) and then recharging at a Tesla charging\nstation; they presumably hope to cover the cost from the increased revenue\nthat they make from the _fixed_ lump-sum sale price when people buy new\nTeslas. (There are lots of reasons they might think this will be useful for\nincreasing sales; besides any direct financial effects of the subsidy or\npsychological effects toward brand loyalty, the availability of free charging\nstations is also almost certainly intended to help reduce [ range anxiety\n](https:\/\/www.jdpower.com\/cars\/shopping-guides\/what-is-range-anxiety-with-\nelectric-vehicles) in prospective customers when they consider buying a new\nTesla.)\n\nWill that turn out to be sustainable over the long term? Well, that depends on\na lot of details, as well as quite a bit of unpredictable luck. 
In no\nparticular order:\n\n * How many stations will Tesla build? \n * How much of the recharging do they expect Tesla drivers to do at company charging stations (as opposed to charging up at home, or charging at a 3rd party charging station)? (I.e., what share of the variable costs of driving a Tesla are they actually expecting to be subsidizing?) \n * How much does it cost them on the margin to provide the electrical charge? \n * How much or how little does this boost sales of Tesla cars? (I.e., how much do they expect to increase their revenue from the lump-sum fee?) \n * How long do people who buy a new Tesla car keep driving it? (I.e., does the lump-sum payment generally have to cover 6 months' worth of driving on average, or 2 years', or 5, or 10, or...? This will make a big difference to how affordable it is likely to be.) \n * How much does this influence people's likelihood to replace their old Tesla car with a new Tesla car? \n\nThere are a lot of other questions that you might want to ask. If you have\nsome possible ballpark estimates or reasonable minimum-maximum ranges for any\nof these figures -- either in terms of what Tesla might expect the figures to\nbe, or what they might realistically turn out to be in fact, or whatever\nscenario you want to run -- then those could provide some guidance for a more\nconcrete answer to your question. In the absence of that, the best answer\nyou're likely to be able to find is to get some kind of model that will lay\nout the blanks you'd need to be able to fill in to get a concrete answer."} +{"query":"What is the economic nature of water?\n\nHow should one classify water in economic terms?\n\nIs it a commodity, a natural resource, can it be both? Does it depend on how it is being used (e.g., as input\/raw material in some process)? I was curious about all the ways water could be characterized economically (e.g., as a rival good, a commodity etc.) 
and would be grateful for some pointers.","reasoning":"To check whether water is a commodity or a natural resource, we can check their definitions.","id":"46","excluded_ids":["N\/A"],"gold_ids_long":["water_economic_nature\/1627.txt","water_economic_nature\/commodity.txt"],"gold_ids":["water_economic_nature\/commodity_4.txt","water_economic_nature\/1627_0.txt","water_economic_nature\/commodity_3.txt"],"gold_answer":"$\\begingroup$\n\nMost of the terms you mention are not mutually exclusive.\n\nCommodity is not a specialized term and it just denotes some (mostly) fungible\neconomic good (see [ here ](https:\/\/www.merriam-\nwebster.com\/dictionary\/commodity) or [ here ](https:\/\/www.economist.com\/the-\neconomist-explains\/2017\/01\/03\/what-makes-something-a-commodity) ). So water is\na commodity.\n\nNatural resources can be defined as ( [ OECD 2005\n](https:\/\/stats.oecd.org\/glossary\/detail.asp?ID=1740#:%7E:text=Definition%3A,for%20economic%20production%20or%20consumption.)\n):\n\n> Natural resources are natural assets (raw materials) occurring in nature\n> that can be used for economic production or consumption.\n\nSo water is also a natural resource.\n\nIn terms of economic classification, water (tap or bottled) is a _private good_\nbecause it is both rival and excludable (see Mankiw, Principles of Economics, pp.\n226). However, here it also depends on which water we are talking about. I assume\nyou mean tap or bottled drinking water. An argument could be that water in\nrivers, lakes and seas is non-excludable (in the case of lakes it would also\ndepend on the size of the lake). 
If we were talking about water in a sea, it\nwould be a _common resource_ because of non-excludability."} +{"query":"Does low nominal interest rate encourage lending?\n\nIn expansionary monetary policy, it's written:\n\nThe Fed purchases more government bonds to drive down interest rates and increase the money supply.\n\nNow, a low interest rate can imply two things:\n\nPeople find it easier to take loans and invest. So, investment goes up.\n\nBut, at the same time, a low interest rate may mean that lenders may not be very keen on giving out loans. For example, people would find no use in keeping cash in bank accounts which provide low interest rates. This may drive down lending, and hence investment.\n\nIs point (2) correct? If so, why is a low n.i.r. considered good for economic growth?\n\nThanks","reasoning":"We can check the influence of a low nominal interest rate from more perspectives, e.g., the difference between leading and other companies.","id":"47","excluded_ids":["N\/A"],"gold_ids_long":["nominal_interest_rate\/ECTA17408.txt"],"gold_ids":["nominal_interest_rate\/ECTA17408_9.txt","nominal_interest_rate\/ECTA17408_10.txt","nominal_interest_rate\/ECTA17408_11.txt","nominal_interest_rate\/ECTA17408_12.txt","nominal_interest_rate\/ECTA17408_4.txt","nominal_interest_rate\/ECTA17408_8.txt","nominal_interest_rate\/ECTA17408_23.txt","nominal_interest_rate\/ECTA17408_21.txt","nominal_interest_rate\/ECTA17408_6.txt","nominal_interest_rate\/ECTA17408_20.txt","nominal_interest_rate\/ECTA17408_28.txt","nominal_interest_rate\/ECTA17408_3.txt","nominal_interest_rate\/ECTA17408_2.txt","nominal_interest_rate\/ECTA17408_24.txt","nominal_interest_rate\/ECTA17408_26.txt","nominal_interest_rate\/ECTA17408_17.txt","nominal_interest_rate\/ECTA17408_16.txt","nominal_interest_rate\/ECTA17408_0.txt","nominal_interest_rate\/ECTA17408_14.txt","nominal_interest_rate\/ECTA17408_7.txt","nominal_interest_rate\/ECTA17408_27.txt","nominal_interest_rate\/ECTA17408_19.txt","nominal_interest_rate\/
ECTA17408_15.txt","nominal_interest_rate\/ECTA17408_1.txt","nominal_interest_rate\/ECTA17408_5.txt","nominal_interest_rate\/ECTA17408_22.txt","nominal_interest_rate\/ECTA17408_18.txt","nominal_interest_rate\/ECTA17408_29.txt","nominal_interest_rate\/ECTA17408_25.txt","nominal_interest_rate\/ECTA17408_13.txt"],"gold_answer":"$\\begingroup$\n\n> But, at the same time, low interest rate may mean that lenders may not be\n> very keen on giving out loans. For example, people would find no use in\n> keeping cash into bank accounts which provide low interest rates. This may\n> drive down lending, and hence investment.\n\nThis is not entirely true. Bank profitability and willingness to lend depend\non the intermediation margin: the difference between the deposit or Fed funds\ninterest rate and the interest on the loans they make.\n\nFor example, a bank would be much happier to lend at a 3% interest rate when\nthe federal funds rate or deposit rate is 0% than to lend when the interest\nrate is 20% but the federal funds rate or deposit rate is 19.5%, as it earns\nmore profit at the lower rate thanks to the higher intermediation margin.\n\nWhat a low interest rate does is discourage the supply of savings, although\neven here the decision depends more on the real interest rate than the nominal\none. Nonetheless, banks could always just get their funds from the Fed, so\nunless the Fed decides not to provide ample reserves they could in principle\ngo on lending as much as they want.\n\n> If so, why is low n.i.r. considered good for economic growth?\n\nIt can actually be considered bad for growth, but that is because of its\neffect on market concentration and firm productivity, not because of any limits\non loans. 
For example, [ Liu et al 2022\n](https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.3982\/ECTA17408?casa_token=5FHV2nV9xc4AAAAA:LKhvLjACcREu3imovDbGTyrLDncbAR-dhn8epTo-t15rpBf1B66iV0BAhnzmlxq401RaIHVUxbLjIsg) show that:\n\n> a decline in the long-term interest rate can trigger a stronger investment\n> response by market leaders relative to market followers, thereby leading to\n> more concentrated markets, higher profits, and lower aggregate productivity\n> growth. This strategic effect of lower interest rates on market\n> concentration implies that aggregate productivity growth declines as the\n> interest rate approaches zero.\n\nMoreover, [ Bikker & Vervliet (2017)\n](https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1002\/ijfe.1595) argue that low\ninterest rates promote excessive risk taking, which is clearly bad for\nmacroeconomic stability.\n\nLow interest rates can stimulate economic activity in the short run, but long-run\neconomic growth depends on how fast the productivity of different factors\ngrows."} +{"query":"What are the reasons for Russian Ruble to strengthen in the past few days?\n\n1 USD was about 75 Russian Rubles before the war in Ukraine began. After the sanctions, the Russian Ruble quickly declined and 1 USD skyrocketed to 139 Rubles. Now it is back to around 85. What are the reasons for it? As the West is proposing more and more sanctions, shouldn't it be declining even more?","reasoning":"The stronger ruble may be related to the increased rate set by the Russian central bank. We can check whether there is news.","id":"48","excluded_ids":["N\/A"],"gold_ids_long":["ruble_sanction\/89890648cms.txt"],"gold_ids":["ruble_sanction\/89890648cms_53.txt","ruble_sanction\/89890648cms_12.txt","ruble_sanction\/89890648cms_17.txt"],"gold_answer":"$\\begingroup$\n\nIt\u2019s likely a combination of several factors.\n\n 1. 
Some of it is because Russia has instituted capital controls (see [ Reuters ](https:\/\/www.reuters.com\/world\/europe\/russia-says-capital-controls-were-tit-for-tat-move-after-reserves-were-frozen-2022-03-25\/) ). That means that Russians are limited or in some cases even prevented from exchanging rubles for other currencies. Exchange rates are ultimately determined by supply and demand. If people are not allowed to supply rubles to the market, ruble supply shifts to the left and that, ceteris paribus, causes the exchange rate to appreciate. \n\n 2. The Russian central bank increased interest rates to 20% (see [ the Economic Times ](https:\/\/m.economictimes.com\/news\/international\/business\/russias-central-bank-hikes-key-rate-to-20-percent\/amp_articleshow\/89890648.cms) ), which strengthens the ruble. \n\n 3. Exchange rates often overshoot (see [ Dornbusch 1976 ](https:\/\/www.mit.edu\/%7E14.54\/handouts\/dornbusch76.pdf) ). The initial depreciation of the ruble likely did not just reflect fundamentals. \n\n 4. Some of it is also because Russia recently started using Gazprombank to partially evade sanctions on the Russian central bank (see my [ answer ](https:\/\/economics.stackexchange.com\/a\/50980\/15517) here about that). Russia essentially requires other countries to pay for energy exports there, and Gazprombank then converts the foreign currency to rubles, since it is not sanctioned. \n\n 5. It\u2019s partially also because, while the sanctions certainly hurt Russia, they do not do much harm to Russian energy exports. The EU imports 26.9% of its oil, 41.1% of its gas and 46.7% of its coal from Russia (see [ Eurostat ](https:\/\/ec.europa.eu\/eurostat\/cache\/infographs\/energy\/bloc-2c.html#carouselControls?lang=en) ). These important energy imports are not yet sanctioned and realistically there is no way the EU could find substitutes for them in the near term. 
As long as Russia can continue its energy exports, and as long as exports remain greater than imports (which the sanctions actually help with), Russia will keep accumulating a current account surplus, which will help prop up the ruble."} +{"query":"Optimal stopping (reference request)\n\nI am interested in the following optimal stopping problem:\n\nOn each day, a number $a_i$\n is drawn from a (possibly fixed) distribution.\nI can either stop now, getting a payoff of $a_i$\n, or wait for a later draw.\nIn principle, this could go on forever. However, future payoffs are discounted at a (possibly constant) rate.\nI know this kind of problem has been analysed extensively. Can anyone recommend some references on how one characterises optimal strategies in this context?","reasoning":"This is known as the McCall search model in economics. We need to check how the optimal strategy is determined.","id":"49","excluded_ids":["N\/A"],"gold_ids_long":["optimal_stopping\/2351065.txt"],"gold_ids":["optimal_stopping\/2351065_30.txt","optimal_stopping\/2351065_14.txt","optimal_stopping\/2351065_26.txt","optimal_stopping\/2351065_19.txt","optimal_stopping\/2351065_34.txt","optimal_stopping\/2351065_17.txt","optimal_stopping\/2351065_28.txt","optimal_stopping\/2351065_9.txt","optimal_stopping\/2351065_7.txt","optimal_stopping\/2351065_29.txt","optimal_stopping\/2351065_18.txt","optimal_stopping\/2351065_35.txt","optimal_stopping\/2351065_23.txt","optimal_stopping\/2351065_33.txt","optimal_stopping\/2351065_8.txt","optimal_stopping\/2351065_2.txt","optimal_stopping\/2351065_20.txt","optimal_stopping\/2351065_38.txt","optimal_stopping\/2351065_27.txt","optimal_stopping\/2351065_15.txt","optimal_stopping\/2351065_36.txt","optimal_stopping\/2351065_21.txt","optimal_stopping\/2351065_5.txt","optimal_stopping\/2351065_24.txt","optimal_stopping\/2351065_37.txt","optimal_stopping\/2351065_3.txt","optimal_stopping\/2351065_10.txt","optimal_stopping\/2351065_4.t
xt","optimal_stopping\/2351065_31.txt","optimal_stopping\/2351065_32.txt","optimal_stopping\/2351065_0.txt","optimal_stopping\/2351065_1.txt","optimal_stopping\/2351065_25.txt","optimal_stopping\/2351065_11.txt","optimal_stopping\/2351065_12.txt","optimal_stopping\/2351065_13.txt","optimal_stopping\/2351065_16.txt","optimal_stopping\/2351065_22.txt","optimal_stopping\/2351065_6.txt"],"gold_answer":"$\\begingroup$\n\nThis is known as the McCall search model in economics. The original paper\nshows that the optimal stopping rule is given by a \"reservation wage\": there\nis a threshold such that it is optimal to accept any draw above this\nthreshold:\n\nMcCall, John J. \" [ The economics of information and optimal stopping rules\n](https:\/\/www.jstor.org\/stable\/2351065) .\" _The Journal of Business_ 38.3\n(1965): 300-317."} +{"query":"Loans that don't have to be paid back (only the interest)\n\nA normal loan has to be paid back with interest. Every now and then there are interest-free loans where only the loan has to be paid back but no interest, e.g. among relatives or friends, but also as a form of state subsidy.\n\nI am looking for the name of, and practical examples of, interest-only loans where only interest has to be paid for some period of time, but the loan itself doesn't have to be paid back. I can imagine situations where such loans (or gifts) may make sense (again among relatives or friends or as a form of state subsidy). The idea might be: The borrower must continuously prove that he is serious and worth the gift.","reasoning":"A classic example is the British consol bond. 
British consol bonds are perpetuities, so the principal never has to be paid back (although the government could repurchase them on the open market).","id":"50","excluded_ids":["N\/A"],"gold_ids_long":["interest_free_loan\/Consolbond.txt"],"gold_ids":["interest_free_loan\/Consolbond_5.txt"],"gold_answer":"$\\begingroup$\n\nA classic example is the British [ consol bond\n](https:\/\/en.wikipedia.org\/wiki\/Consol_\\(bond\\)) . British consol bonds are\nperpetuities, so the principal never has to be paid back (although the\ngovernment could repurchase them on the open market).\n\nConsols only pay coupon payments (the bond equivalent of interest) and since\nthey are perpetual the principal never has to be paid back.\n\nThe general term for loans whose principal does not need to be repaid is\n_perpetual loans_ (although most people will just use the term perpetuity,\nwhich is an umbrella term for any asset, not just a loan, that entitles its\nowner to perpetual interest payments)."} +{"query":"Why is Russia demanding oil payments in rubles?\n\nIn response to sanctions, Russia is demanding that oil purchases be conducted in rubles to prop up the value of the ruble. Why is it advantageous for Russia to have foreign buyers convert dollars to rubles before purchasing oil? Could they not just take payment in dollars and then buy rubles on the domestic market? Oil companies and the banks that service them are currently not under sanctions, to my knowledge.","reasoning":"This is related to the sanctions on Russia, which freeze Russian central bank assets.","id":"51","excluded_ids":["N\/A"],"gold_ids_long":["russia_sanction_oil\/ussanctionsrussiacentralbankhtml.txt"],"gold_ids":["russia_sanction_oil\/ussanctionsrussiacentralbankhtml_0.txt","russia_sanction_oil\/ussanctionsrussiacentralbankhtml_1.txt"],"gold_answer":"$\\begingroup$\n\n 1. 
If we talk about private companies, they would not necessarily have an incentive to always buy the ruble, because the ruble is not as stable as the dollar. In fact, the whole reason why oil and many other commodities are traded in dollars is that many companies do not need to hold the local currency and have little incentive to do so. As the [ Fed ](https:\/\/www.federalreserve.gov\/econres\/notes\/feds-notes\/the-international-role-of-the-u-s-dollar-20211006.htm) explains: \n\n> For most of the last century, the preeminent role of the U.S. dollar in the\n> global economy has been supported by the size and strength of the U.S.\n> economy, its stability and openness to trade and capital flows, and strong\n> property rights and the rule of law. As a result, the depth and liquidity of\n> U.S. financial markets is unmatched, and there is a large supply of\n> extremely safe dollar-denominated assets. This note reviews the use of the\n> dollar in international reserves, as a currency anchor, and in transactions.\n\n 2. I believe this is a response to the US's unprecedented freezing of the USD holdings of the Russian central bank (see this being reported by the [ NY Times ](https:\/\/www.nytimes.com\/2022\/02\/28\/us\/politics\/us-sanctions-russia-central-bank.html) ). \n\nPreviously, Russia was actually accumulating dollars from its oil & gas sales\nso that it would have hard currency to use in exactly such times of crisis\n(this was part of the 'Fortress Russia' strategy - see The Economist [ here\n](https:\/\/www.economist.com\/europe\/2020\/03\/25\/russias-economy-is-isolated-from-the-global-rout) ). A considerable share of the Russian central bank's\nreserves was in dollars, because prior to the 2022 Russian invasion, which\nresumed the hostilities of the Russo-Ukrainian war of 2014, this was\nconsidered more or less unthinkable.\n\nThis demonstrated that the US is willing to declare the USD invalid for\npolitical\/geopolitical reasons. 
This means that if you are an autocratic\ncountry you have to think twice before using dollars, since at any moment they\ncould become worthless. Dollars are ultimately just worthless pieces of paper\nif you are not allowed to use them. As a result, you now see not just Russia,\nbut also some OPEC countries and China trying to move away from using dollars\nto using different currencies such as the yuan (see the [ WSJ\n](https:\/\/www.wsj.com\/articles\/saudi-arabia-considers-accepting-yuan-instead-of-dollars-for-chinese-oil-sales-11647351541) reporting on that)."} +{"query":"Is there a long-run tradeoff between inflation and unemployment?\n\nIn most first-year undergrad macroeconomics courses, students are taught about the Phillips curve and how whilst there may exist a tradeoff between inflation and unemployment in the short-run, there is no such thing in the long-run as a result of agents adjusting inflation expectations. Is this result valid, or could it be the case that the natural rate of unemployment changes with the output gap, in which case there could exist a long-run trade-off between inflation and unemployment?","reasoning":"This depends on the long-run shape of the Phillips curve. We need to check studies to see what the Phillips curve looks like in the long run.","id":"52","excluded_ids":["N\/A"],"gold_ids_long":["inflation_unemployment\/S0304393215000793.txt"],"gold_ids":["inflation_unemployment\/S0304393215000793_2.txt","inflation_unemployment\/S0304393215000793_12.txt","inflation_unemployment\/S0304393215000793_5.txt","inflation_unemployment\/S0304393215000793_6.txt"],"gold_answer":"$\\begingroup$\n\n> could exist a long-run trade-off between inflation and unemployment?\n\nThis depends on the long-run shape of the Phillips curve. 
Well-done and highly\ncited empirical studies generally cannot reject that the long-run Phillips\ncurve is flat, implying there is no long-run inflation-unemployment trade-off,\nalthough an alternative explanation is that current studies are simply not\npowered enough to detect a (very) small slope of the long-run Phillips curve.\n\nFollowing [ Benati 2015\n](https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0304393215000793) :\n\n> Both cointegration methods, and non-cointegrated structural VARs identified\n> based on either long-run restrictions, or a combination of long-run and sign\n> restrictions, are used in order to explore the long-run trade-off between\n> inflation and the unemployment rate in the post-WWII U.S., U.K., Euro area,\n> Canada, and Australia. Overall, neither approach produces clear evidence of\n> a non-vertical trade-off. The extent of uncertainty surrounding the\n> estimates is however substantial, thus implying that a researcher holding\n> alternative priors about what a reasonable slope of the long-run trade-off\n> might be will likely not see her views falsified.\n\n....\n\n> Overall, the evidence discussed in this paper provides essentially no\n> support to the notion of a non-vertical long-run Phillips trade-off\u2014in\n> particular of a trade-off which can be actively exploited by a policymaker\n> in order to permanently reduce the unemployment rate.\n\nIn addition, you can also see from the data that there does not seem to be any\nlong-run relationship (see ibid., p. 258).\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/yKoTC.png)\n](https:\/\/i.sstatic.net\/yKoTC.png)\n\n* * *\n\nPS: One should not confuse the hysteresis effect (a situation where temporary\nshocks have permanent effects) with the Phillips curve inflation-unemployment\ntrade-off, where an increase in inflation can decrease unemployment and vice\nversa.\n\nLet me illustrate the difference.\n\n**Inflation Unemployment Trade-off**\n\nSuppose that the Phillips curve is given by 
the following equation (see Romer,\nAdvanced Macroeconomics, p. 259):\n\n$$\\pi = \\pi^* + \\beta (y_A- \\bar{y}_A) + e$$\n\nwhere $\\pi$ is inflation, $\\pi^*$ is expected inflation, $y$ is the log of\nactual output and $\\bar{y}$ the log of the natural level of output, so the\nexpression in brackets is the output gap. The output gap in turn determines\nthe level of employment in the economy.\n\nNow in the short run there will be an inflation\/unemployment trade-off (a\nnon-flat Phillips curve) as long as $\\pi \\neq \\pi^*$ (since $\\pi \\neq \\pi^*$\nimplies inflation can affect output and thus employment, which correlates with\noutput). However, suppose that in the long run $\\pi = \\pi^*$ . In that case we\nwill have no inflation-employment trade-off.\n\n**Hysteresis**\n\nNow suppose we have not just one equation but two:\n\n$$\\pi = \\pi^* + \\beta (y_A- \\bar{y}_A) + e \\tag{1}$$\n\nand\n\n$$\\pi = \\pi^* + \\beta (y_B- \\bar{y}_B) + e \\tag{2}$$\n\nwhere $(y_B- \\bar{y}_B)< (y_A- \\bar{y}_A)$ , with a switching rule that we\nmove from equation (1) to (2) whenever the output gap $(y_A- \\bar{y}_A)$\npasses some threshold $k$ for triggering hysteresis.\n\nClearly, as long as we assume that $\\pi= \\pi^*$ in the long run, the Phillips\ncurve is flat (no employment-inflation trade-off) in spite of the hysteresis\neffect."} +{"query":"Game for negotiations\n\nFor a while I've been bouncing around an idea for a game to be used in negotiations, with a quantitative voting element. The basic idea goes like this: there is a list of things each side of the negotiation wants (e.g. in a peace negotiation, this might be a list of towns). Each side gets 100 points. Then the two sides each secretly write down how many points they're willing to bid for each of those things. Then both sides reveal what they wrote down, and the highest bidder for each thing gets the thing they bid on. 
The idea is that some things are worth more to one side than the other, so each side will hopefully outbid its opponent for the things it wants more than its opponent does, and vice versa.\n\nIs there already a game like this floating around? I'd like to read about it.","reasoning":"This is related to the Colonel Blotto game, where players (officers) are tasked to simultaneously distribute limited resources over several objects.","id":"53","excluded_ids":["N\/A"],"gold_ids_long":["blotto\/Blottogame.txt"],"gold_ids":["blotto\/Blottogame_4.txt"],"gold_answer":"$\\begingroup$\n\nThis is very much like a [ Colonel Blotto game\n](https:\/\/en.wikipedia.org\/wiki\/Blotto_game) , except there the cities are\nordered and the players can only assign points in a decreasing sequence.\n\nSimultaneous multi-unit auctions with a budget constraint may also be\nrelevant."} +{"query":"Why would a company sabotage its product's ability to be used for a particular purpose?\n\nI saw in the news recently that NVIDIA has placed limits on the hash rate for mining Ethereum cryptocurrency. This is purportedly to get more GPUs into the hands of gamers instead of crypto miners.\n\nWhat is the advantage to NVIDIA of doing this? 
Why are they not aiming to simply maximize the demand for their product?","reasoning":"This is related to market segmentation, where companies can charge elastic\/inelastic customers different prices.","id":"54","excluded_ids":["N\/A"],"gold_ids_long":["sabotage_product\/monopoly3rddegreepricediscrimination.txt"],"gold_ids":["sabotage_product\/monopoly3rddegreepricediscrimination_73.txt","sabotage_product\/monopoly3rddegreepricediscrimination_74.txt","sabotage_product\/monopoly3rddegreepricediscrimination_71.txt"],"gold_answer":"$\\begingroup$\n\nThis topic is explained by [ market segmentation with price discrimination in\nmonopolies ](https:\/\/www.youtube.com\/watch?v=o6d8vbWcIO4) .\n\nThis is widely used in, for example, price discounts for senior\/retired\npeople.\n\nBy targeting different market segments with different prices you can maximize\nprofits because [ their price sensitivity is different\n](https:\/\/www.tutor2u.net\/economics\/reference\/monopoly-3rd-degree-price-\ndiscrimination) (i.e. miners are likely to buy at higher prices, while regular\nconsumers will sharply reduce their willingness to buy at those same prices).\n\nThis is true as long as the following conditions are met:\n\n * Consumers can't easily switch between segments (i.e. a retired man can't suddenly turn into a child) \n * Consumers can't (easily) resell their goods to the other segments \n * It's a monopoly (or oligopoly). In a perfect market situation, your competitors will offer cheaper prices for your more costly segment. \n\nThis is the same reason why consoles used to be region locked. Otherwise it\nwould be convenient to travel to the cheapest region, and then bring those\nproducts back to your home country for resale.\n\nIn order to enforce these requirements, GPUs used for gaming can't be used for\nmining.\n\nAdditionally, there could be other reasons. 
For example, NVIDIA wants to heavily\npush Raytracing because it gives them an edge over the competition.\n\nHowever, games will not adopt Raytracing if a large portion of their users\ndon't have capable cards. Right now the number of titles supporting RT is low,\nand for those that do, it is just an optional gimmick that doesn't add much to\nthe experience; the first-gen RT cards (i.e. the RTX 2080) aren't fast enough\nfor a raytracing-rich experience."} +{"query":"Why did Allied countries not freeze German and Japanese central bank assets in WWII?\n\nSupposedly such an action would cause massive turmoil in their domestic markets, cause exchange rates to rise, inflation to rise, and thus prevent them from engaging in meaningful trade. Why did Allied countries not try this in WWII? Was the world simply not as connected then?","reasoning":"We first need to check whether German and Japanese assets were frozen. The US government may already have used such economic weapons.","id":"55","excluded_ids":["N\/A"],"gold_ids_long":["freeze_gemany_japan\/StaffChapter3html.txt"],"gold_ids":["freeze_gemany_japan\/StaffChapter3html_11.txt","freeze_gemany_japan\/StaffChapter3html_13.txt","freeze_gemany_japan\/StaffChapter3html_12.txt","freeze_gemany_japan\/StaffChapter3html_9.txt","freeze_gemany_japan\/StaffChapter3html_10.txt","freeze_gemany_japan\/StaffChapter3html_15.txt","freeze_gemany_japan\/StaffChapter3html_16.txt","freeze_gemany_japan\/StaffChapter3html_14.txt"],"gold_answer":"$\\begingroup$\n\nYou mean something like this?\n\n> **Freezing Foreign-owned Assets**\n>\n> Germany invaded Denmark and Norway on April 8, 1940, and the United States\n> quickly responded to the aggression. In an attempt to keep the Germans from\n> taking control of Danish and Norwegian assets held in the United States,\n> Executive Order 8389 \"froze\" all financial transactions involving Danes and\n> Norwegians. The freezing order prohibited [...]\n>\n> The U.S. 
government had first considered the use of such economic weapons in\n> 1937. In response to the Japanese bombing and sinking of the American\n> gunboat Panay in Chinese waters, Herman Oliphant, General Counsel in the\n> Treasury Department, suggested to Treasury Secretary Henry Morgenthau that\n> foreign exchange controls and a system of licenses for financial\n> transactions could be instituted against the Japanese. Tensions with Japan\n> subsequently eased and Oliphant's proposals were shelved. [...]\n>\n> As Germany continued its invasions, the U.S. government successively froze\n> assets, country by country, over the European continent. Thus, on May 10,\n> 1940, FFC extended freezing controls to cover the Netherlands, Belgium, and\n> Luxembourg. The assets of France and Monaco (June 17), Latvia, Estonia, and\n> Lithuania (July 10), and Romania (October 9) were subsequently frozen that\n> year.8 By the end of April 1941, the United States added Bulgaria, Hungary,\n> Yugoslavia, and Greece to the list.\n>\n> The further extension of controls to belligerents and neutrals remained\n> controversial. While Treasury favored a rapid extension of controls, the\n> State Department, concerned about maintaining America's status as a neutral\n> as well as U.S. diplomatic privileges, objected. Assistant Secretary of\n> State for Economic Affairs Dean Acheson noted that \"from top to bottom our\n> [State] Department, except for our corner of it, was against Henry\n> Morgenthau's campaign to apply freezing controls to Axis countries and their\n> victims.\"\n>\n> Eventually, the course of the war dictated a shift in U.S. policy. 
On June\n> 14, 1941, through Executive Order 8785, the United States extended freezing\n> controls to cover all of continental Europe, including \"aggressor\" nations\n> and annexed or invaded territories (Germany and Italy; Danzig, Austria, and\n> Poland) as well as neutral nations, small principalities, and countries not\n> previously included (Spain, Sweden, Portugal, and Switzerland; Andorra, San\n> Marino, and Liechtenstein; Albania and Finland). Turkish assets were never\n> blocked, and Soviet assets were only blocked for a relatively short time\n> until Germany invaded Russia in June 1941. As the United States moved from\n> being a neutral to a belligerent, the role of FFC, an administrative agency\n> within the Treasury Department, expanded.\n\nFrom \" [ Plunder and Restitution: Chapter III\n](https:\/\/govinfo.library.unt.edu\/pcha\/PlunderRestitution.html\/html\/StaffChapter3.html)\n\"\n\nBut you also asked why the US did not freeze Japanese assets?\n\n> On July 26, 1941, President Franklin Roosevelt seizes all Japanese assets in\n> the United States in retaliation for the Japanese occupation of French Indo-\n> China.\n\n[ https:\/\/www.history.com\/.amp\/this-day-in-history\/united-states-freezes-\njapanese-assets ](https:\/\/www.history.com\/.amp\/this-day-in-history\/united-\nstates-freezes-japanese-assets)\n\n> President Roosevelt last night issued an order freezing Japanese funds in\n> this country. 
Thus again was emphasized the fact, if at this stage further\n> emphasis be necessary, that this world of ours has become an inextricably\n> bound-up entity.\n\n[ https:\/\/www.nytimes.com\/1941\/07\/26\/archives\/war-has-no-boundary.html\n](https:\/\/www.nytimes.com\/1941\/07\/26\/archives\/war-has-no-boundary.html)"} +{"query":"GDP: business incorporation location vs citizenship criteria [duplicate]\n\nThis question already has answers here:\nTextbooks claim that the difference between GDP and GNP (or GNI) is about geography vs citizenship--is this correct? (2 answers)\nClosed 2 years ago.\nSuppose a US citizen resides in Germany. He is a business-owner in Germany and the business is legally incorporated in Germany. However, he is a US citizen. The value of the goods produced by this business will be part of Germany's GDP since they are produced in Germany. But do they count towards German GDP since he is a resident and the business is incorporated there, or towards US GDP since he is a US citizen?","reasoning":"This concerns the distinction between domestic measures and national measures. We can check their definitions.","id":"56","excluded_ids":["N\/A"],"gold_ids_long":["us_german\/chapter02pdf.txt"],"gold_ids":["us_german\/chapter02pdf_5.txt"],"gold_answer":"$\\begingroup$\n\nSame as this [ question ](https:\/\/economics.stackexchange.com\/q\/42216\/37817) .\nIf in doubt, look at the source, not at textbooks or wherever.\n\nIn the US, that is the [ BEA ](https:\/\/www.bea.gov\/resources\/methodologies\/nipa-handbook\/pdf\/chapter-02.pdf) .\n\nChapter 2: 2-6 Domestic measures cover activities that take place within the\ngeographic borders of the United States, while national measures cover\nactivities that are attributable to U.S. residents. Footnote 17 accompanying\nthis statement defines that in detail.\n\n\u201cU.S. 
residents\u201d includes individuals, governments, business enterprises,\ntrusts, associations, nonprofit institutions, and similar organizations that\nhave the center of their economic interest in the United States and that\nreside or expect to reside in the United States for 1 year or more. (For\nexample, business enterprises residing in the United States include U.S.\naffiliates of foreign companies.) In addition, U.S. residents include all U.S.\ncitizens who reside outside the United States for less than 1 year and U.S.\ncitizens residing abroad for 1 year or more who meet one of the following\ncriteria: owners or employees of U.S. business enterprises who reside abroad\nto further the enterprises\u2019 business and who intend to return within a\nreasonable period; U.S. government civilian and military employees and members\nof their immediate families; and students who attend foreign educational\ninstitutions.\n\nHTH"} +{"query":"Difference between two different ways of adjusting for inflation daily given annual rate\n\nThis is a follow up question to this\n\nI found two different ways to \"adjust for inflation\" on a daily basis given an annual rate (without doing daily compounding). 
Both yield the same result at the end of the timeframe; however, I can't determine if there is any error in the interpolated result.\n\n$r$ : annual rate\n$t$ : time in days\n$years$ : total time in years, constant\n$PV$ : Present Value, constant\n$FV(t)$ : Future Value for a given time $t$ (in days)\n\nFor example, if $years=2$ then $t \\in [0,730]$\n\nPlots below are generated using $r=0.50$ and $PV=1000$\n\nThe first approach uses Giskard's proposal (see the related answer), keeping the rate fixed and letting the time vary:\n$$x=\\sqrt[365]{1+r}-1$$\n$$FV(t)=PV(1+x)^{t}$$\n\nThe second solution fixes the time and changes the interest rate based on time:\n$$x(t)=\\frac{r}{365} \\cdot \\frac{t}{365}$$\n$$FV(t)=PV(1+x(t))^{years}$$\n\nI think the second approach is incorrect because when computing 2 years, the value for one year changed from $666.67$ to $\\approx 640$ whereas in the first method the original value is kept, but I cannot justify why this happens.","reasoning":"The calculation is related to compound interest. 
We can check the formula there.","id":"57","excluded_ids":["N\/A"],"gold_ids_long":["adjustinflation\/Compoundinterest.txt"],"gold_ids":["adjustinflation\/Compoundinterest_22.txt","adjustinflation\/Compoundinterest_20.txt","adjustinflation\/Compoundinterest_21.txt"],"gold_answer":"$\\begingroup$\n\n**Your graphs do not seem to match your explanation of the formulas.**\n\n* * *\n\nLet us look at the formulas $$x(t) = \\frac{r}{365} \\cdot \\frac{t}{365}$$\n$$FV(t) = PV(1 + x(t))^{years}$$\n\nat the end of the period, that is where $t = 365 \\cdot years$ .\n\n$$x(365 \\cdot years) = \\frac{r}{365} \\cdot \\frac{365 \\cdot years}{365} =\n\\frac{r}{365} \\cdot years$$ $$FV(365 \\cdot years) = PV(1 + x(365 \\cdot\nyears))^{years} = PV\\left(1 + \\frac{r}{365}\\cdot years\\right)^{years}.$$\n\nFor $years = 1$ this would mean an $FV$ of $PV\\left(1 +\n\\frac{r}{365}\\right)$ , much smaller than the expected $PV\\left(1 +\nr\\right)$ .\n\n* * *\n\nPerhaps you meant to write $x(t) = \\frac{r}{365} \\cdot t?$\n\nFor $years = 1$ and small $r$ values this results in a decent but imperfect\napproximation of the result yielded by the daily compounding formula with $x =\n\\sqrt[365]{1 + r} - 1$ .\n\nHowever for $years = 10$ we have $$FV(365 \\cdot 10) = PV\\left(1 + r \\cdot\n10\\right)^{10}.$$\n\nThe compounding is going on for 10 years as expected, but the yearly interest\nrate has also increased _tenfold_ , which is probably not intentional.\n\n* * *\n\nInstead of experimenting with your own formulas for $FV$ , I recommend\nreading up on this: it is a [ well-explored subject, with several formulas\nfor discounting, including one for continuous time\n](https:\/\/en.wikipedia.org\/wiki\/Compound_interest#Calculation) , and it is\nproven that the smaller the time period you choose, the closer you get to the\ncontinuous-time result."} +{"query":"When debt to GDP is only around 85%, how can this article quote it as 300%?\n\nI was reading an article in the FT, where debt to GDP is quoted as 300% for the US and Japan. 
However, all the official sites quote the debt to GDP as 85% for the US.\n\nThis article is from a reputed investment banker on Wall Street. Can someone explain? Link:\n\nEconomic Trends 2022\n\nFrom the article: Twenty-five countries including the US and China have total debt above 300 per cent of GDP,","reasoning":"We first need to figure out the components of gross national debt, and then check the amount (percentage) of each part of the debt.","id":"58","excluded_ids":["N\/A"],"gold_ids_long":["debt2gdp\/publicdebt.txt","debt2gdp\/NationaldebtoftheUnitedStates.txt","debt2gdp\/householddebt.txt"],"gold_ids":["debt2gdp\/NationaldebtoftheUnitedStates_10.txt"],"gold_answer":"$\\begingroup$\n\nThe author of the FT article is talking about total debt.\n\nFor Q3 2021, US federal government debt alone was 122% of GDP. Source: [\nhttps:\/\/fred.stlouisfed.org\/series\/GFDEGDQ188S\n](https:\/\/fred.stlouisfed.org\/series\/GFDEGDQ188S)\n\nFor Q2 2021, US household debt was 78% of GDP. Source: [\nhttps:\/\/fred.stlouisfed.org\/series\/HDTGPDUSQ163N\n](https:\/\/fred.stlouisfed.org\/series\/HDTGPDUSQ163N)\n\nTo get total debt you would also need to add the debt of corporations and\nstate and local governments. The total being over 300% seems very likely."} +{"query":"Can there be a game where there are no opponents?\n\nI am considering a scenario where all the players are in collaboration with each other in an attempt to maximize some profit. However, each player is not 100% sure of the strategies of the other players, so each has to figure out the others' strategies in order to play his own strategy and collectively maximize some profit.\n\nIs this covered by game theory? i.e. 
there is no opposition or state of nature?","reasoning":"This is related to the coordination game, where players earn a higher payoff when they choose the same course, e.g., their interests are perfectly aligned.","id":"59","excluded_ids":["N\/A"],"gold_ids_long":["coordination_game\/Coordinationgame.txt"],"gold_ids":["coordination_game\/Coordinationgame_5.txt"],"gold_answer":"$\\begingroup$\n\nIn a [ **coordination game**\n](https:\/\/en.wikipedia.org\/wiki\/Coordination_game) , players' interests are\nperfectly aligned, so there is no \"opposition\" in the ordinary sense of the\nword. The game has simultaneous moves, which means, at the time of decision,\nplayers don't know what the others would choose."} +{"query":"Simple description of how interest impacts inflation\n\nI was always taught that inflation is impacted by interest like so:\n\nLower interest rate => Loaning money is cheaper => More money in the system => Higher inflation\n\nHowever recently I am also hearing opposite theories as to how a lower interest rate can lead to lower inflation.\n\nI found some discussions and explanations but all of them stretch several paragraphs. Is there a short and easy to follow logic like the one I just quoted that can explain this directional impact?\n\nI won\u2019t prevent people from adding context or evaluation as to when each direction is relevant, but please make sure this is clearly separated from the actual impact explanation.","reasoning":"This is related to the Fisher effect, which posits a positive\nrelationship between the nominal interest\nrate and inflation.","id":"60","excluded_ids":["N\/A"],"gold_ids_long":["inflation_interest\/neofisherismpdf.txt"],"gold_ids":["inflation_interest\/neofisherismpdf_6.txt","inflation_interest\/neofisherismpdf_4.txt","inflation_interest\/neofisherismpdf_3.txt","inflation_interest\/neofisherismpdf_5.txt"],"gold_answer":"$\\begingroup$\n\nThe theory is called Neo-Fisherism. 
The [ Fisher equation\n](https:\/\/en.wikipedia.org\/wiki\/Real_interest_rate#Fisher_equation) states\n$$r \\approx i - \\pi_e,$$\n\nwhere $r$ is the real interest rate, $i$ the nominal, and $\\pi_e$ the\nexpected inflation rate.\n\nPrimary determinants of long-term equilibrium real rates are mostly non-\nmonetary: potential growth rates; demographics; risk preferences in\nportfolios.\n\n> Real rates $r$ are determined by fundamentals, so increasing $i$ => higher\n> (expected) inflation\n\nThere are a lot of nuances and details for this to \"hold\". However, the same\nis true for\n\n> Lower interest rate => Loaning money is cheaper => More money in the system\n> => Higher inflation"} +{"query":"What would Russia gain economically speaking if it conquered Ukraine?\n\nRussia seems to be making preparations for an invasion of Ukraine. Would Russia get richer or economically more powerful if it managed to annex Ukraine? If so, in what ways? On the one hand Ukraine has a lot of natural resources like iron ore, coal, manganese, natural gas, oil, salt, sulfur, graphite, titanium, magnesium, kaolin, nickel, mercury, and arable land. But on the other hand the country is poorer than Russia.","reasoning":"This is related to the resource curse, where countries with an abundance of natural resources are not necessarily richer or better off.","id":"61","excluded_ids":["N\/A"],"gold_ids_long":["russia_rich\/Resourcecurse.txt"],"gold_ids":["russia_rich\/Resourcecurse_41.txt"],"gold_answer":"$\\begingroup$\n\n> Would Russia get richer or economically more powerful if it managed to annex\n> Ukraine?\n\nLikely not.\n\n 1. Countries generally do not get rich by having a lot of natural resources. In fact it is often the opposite (e.g. see the [ paradox of plenty ](https:\/\/www.economist.com\/special-report\/2005\/12\/20\/the-paradox-of-plenty) or also the [ Dutch disease ](https:\/\/www.investopedia.com\/terms\/d\/dutchdisease.asp) , which is one reason for the paradox of plenty). 
Empirical studies also show that most resource-rich countries are economically poor save for a few exceptions (Venables 2016). Also, Russia already is a resource-rich country with an abundance of natural gas, oil, etc. \n\n 2. How 'rich' a country is in economic terms generally depends on how much output it can produce. Larger countries of course can produce more output just because they are large, so typically for any international comparison we would compare output **per person** as that is a more accurate measure of how rich a country is. Currently Russia's output _per person_ is about \\$10000 and Ukraine's output per person is about \\$3700 (according to the [ World Bank data ](https:\/\/data.worldbank.org\/indicator\/NY.GDP.PCAP.CD?locations=UA-RU) ). \n\nHence, if Russia annexed Ukraine (and the productivity of Ukraine did not\nchange), the _new_ Russia (e.g. Russia with the extra state of Ukraine) would become\npoorer in per capita terms.\n\nOnly if, under new Russian management, Ukraine somehow managed to become\nsignificantly more productive than it currently is would Russia become\nricher."} +{"query":"Has Evergrande defaulted or not?\n\nso many news headlines harbinger evergrande's default! today, dec 9 2021, bloomberg reports Evergrande Declared in Default as Huge Restructuring Looms. but scroll down!\n\nEven Fitch has struggled to get information from Evergrande, noting on Thursday that the developer didn\u2019t respond to its request for confirmation on this week\u2019s coupon payments. \u201cWe are therefore assuming they were not paid,\u201d Fitch analysts wrote in a statement. Bloomberg reported earlier this week that bondholders hadn\u2019t received the money.\n\nso fitch is JUST \"assuming\" evergrande defaulted. but this doesn't prove evergrande defaulted!!!!","reasoning":"Default is also a legal concept, and as such it is sometimes debatable whether an entity is in the state of default or not. 
It is helpful to see some related discussions, e.g., the Greece debt crisis.","id":"62","excluded_ids":["N\/A"],"gold_ids_long":["evergrande\/greecedebtcrisisdefaultanalysisimf.txt"],"gold_ids":["evergrande\/greecedebtcrisisdefaultanalysisimf_36.txt","evergrande\/greecedebtcrisisdefaultanalysisimf_35.txt"],"gold_answer":"$\\begingroup$\n\n_Default_ is also a legal concept, and as such it is sometimes debatable\nwhether an entity is in the state of default or not.\n\nE.g., see [ Greece debt crisis: when is a default not a default?\n](https:\/\/www.theguardian.com\/business\/2015\/jun\/30\/greece-debt-crisis-default-analysis-imf)"} +{"query":"Unresolved paradoxes or puzzles in financial economics\n\nWhat are some (unresolved) paradoxes or puzzles in financial economics?\n\nI am looking for paradoxes or puzzles like for example:\n\nThe equity premium puzzle (Mehra & Prescott, 1985).\nSiegel's paradox (Siegel, 1972).\nDividend puzzle (Black 1976)\nI am especially looking for some that were never resolved or only partially so, and I would also be interested in some reference that describes\/identifies the puzzle or paradox.","reasoning":"One example is the merger paradox, where mergers between firms may not always be profitable.","id":"63","excluded_ids":["N\/A"],"gold_ids_long":["merger_paradox\/jpet12448.txt"],"gold_ids":["merger_paradox\/jpet12448_2.txt"],"gold_answer":"$\\begingroup$\n\nMerger paradox from industrial organization: I believe a fairly good overview\nis here: Garcia, F, Paz y Mi\u00f1o, JM, Torrens, G. The merger paradox, collusion,\nand competition policy. J Public Econ Theory. 2020; 22: 2051\u20132081. [\nhttps:\/\/doi.org\/10.1111\/jpet.12448 ](https:\/\/doi.org\/10.1111\/jpet.12448) .\n\nYou can find a good deal about it here as well: R. 
Rothschild, John S.\nHeywood, Kristen Monaco, Spatial price discrimination and the merger paradox,\nRegional Science and Urban Economics, Volume 30, Issue 5, 2000, Pages 491-506\n[ https:\/\/doi.org\/10.1016\/S0166-0462(00)00044-2\n](https:\/\/doi.org\/10.1016\/S0166-0462\\(00\\)00044-2)\n\nThe general overview of this paradox is that it is unclear, under traditional\nmodels, why one firm would bother to purchase another. If they do so, the only\npoint is to cut total overall production. But the sale price of any such firm\nwill (under most conditions) be prohibitively expensive. Furthermore, the\nfirms that are excluded from this merger benefit from reduced production - and\nthey didn't have to bother to buy anyone. Some solutions include assuming\nextremely concave production functions, sticky\/immobile customers distributed\nalong some preference space, and others.\n\nPreviously, IO had an unsolved mystery of how firms chose to enter\nsequentially in spatially price-discriminating markets for the n-firm case in\nlinear markets and circular markets. 
I know circular markets have been solved,\nand I think linear markets have also been solved recently."} +{"query":"In a setting with N goods how many combinatorial bits do we need to construct a preference map\n\nI am reading this paper:\n\nhttps:\/\/www.researchgate.net\/publication\/5208445_The_market_for_preferences\n\nBy P.E Earl and J.Potts\n\nOn page 3 the following is written:\n\n\"If we think of individual preference orderings as units, then if there are n goods and we ignore computational overheads, each preference map will require (n-1)^2 combinatorial bits to construct.\"\n\nThis confused me a little bit because with just trial and error you can see that in a market with 2 goods, 1 combinatorial bit is needed\n\nI.e\n\nGood A > Good B\n\nHowever when we have 3 goods we only need 2, not as the formula suggests 4 as,\n\nGood A > Good B\n\nGood B > Good C\n\nWhere Good A > Good C is implied due to transitivity, but even if not this comes out to 3 bits not 4","reasoning":"Preference relations are also called total preorders or weak orders in mathematics. We need to find the formula to calculate number of distinct weak orders on an n-element set.","id":"64","excluded_ids":["N\/A"],"gold_ids_long":["bell_number\/Weakordering.txt"],"gold_ids":["bell_number\/Weakordering_29.txt","bell_number\/Weakordering_28.txt"],"gold_answer":"$\\begingroup$\n\nFirst, a correction. What you are counting is the number of _total orders_ .\nYou forgot the possibility of _indifference_ . For example, $A \\sim B \\sim C$\nis a perfectly fine preference relation. 
The correct count of preference\nrelations on $3$ elements should be $13$ .\n\nPreference relations are also called [ total preorders or weak orders\n](https:\/\/en.wikipedia.org\/wiki\/Weak_ordering#Total_preorders) in mathematics.\nAccording to this [ wiki page\n](https:\/\/en.wikipedia.org\/wiki\/Weak_ordering#Combinatorial_enumeration) , the\nnumber of total preorders on a finite set of cardinality $n$ is given by the\n[ ordered Bell number $a(n)$\n](https:\/\/en.wikipedia.org\/wiki\/Ordered_Bell_number) .\n\nThe ordered Bell number $a(n)$ satisfies the following recurrence relation:\n\n$$ a(n) = \\sum_{i=1}^n {n \\choose i} a(n-i) $$\n\nThe exact formula is given by the following double sum:\n\n$$ a(n) = \\sum_{k=0}^n \\sum_{j=0}^k (-1)^{k-j} {k \\choose j} j^n $$\n\nFor large $n$ , we can approximate $a(n)$ by\n\n$$ a(n) \\approx \\frac{n!}{2 (\\ln 2)^{n+1}} $$\n\nI won't go into details, but using the [ Stirling approximation\n](https:\/\/en.wikipedia.org\/wiki\/Stirling%27s_approximation) , you can prove\nthat the number of required bits is on the order of\n\n$$ \\log_2 a(n) = O(n \\log_2 n) $$\n\nThus, you are correct that **a preference relation on $n$ elements requires\nmuch less than $(n-1)^2$ bits to specify ** ."} +{"query":"New Business - How do I set a price for my product?\n\nThis is my first question here, but since I am used to the Stack Exchange format I will give as much detail about my problem without giving an overload of details.\n\nIn a nutshell I work with US weather radar data and decided I could make an app for storm chasers and emergency management. I have done research and know about what my fees to customers will be, and I have an expectation of how many signups I will have. During the process of creating this, I have also created some custom data that I realized I could sell to weather data provider companies. They will not compete with my web app, so this would be great extra income.\n\nThis is where the problem is. 
A company has expressed interest in licensing this data from me on a monthly basis. Besides the cost of my servers ($1,000 to $2,000 a month?) I do not know how much to offer this data for. The last thing I want to ask them is \"how much would you like to pay?\".\n\nTo them, this would be helpful as they can re-sell the data to their own emergency management customers. I feel like it would be worth about $1,000 to $3,000 a month on top of server costs. It's not super revolutionary data, but it's going to be useful for them and they might be expecting to pay $8,000 a month (random guess). I am afraid to ask for that right off the bat because they may laugh it off. I am not a bashful person, but I don't want to come across so far left field that they just say nevermind to the whole thing.\n\nI am looking for any sort of advice on how to approach this problem. I think the main problem with something new like this is that the actual value of the product seems either somewhat arbitrary, or the actual value could be anything within a range from $3,000 to $8,000 or more.","reasoning":"First of all, we will need to build a complete business model canvas and answer the following questions: (a) what's the mission?, (b) what's the long-term vision?, (c) do we want the business to incorporate economic and\/or social impact considerations? Answering those questions will help determine if we want to make a profit. Not every business needs to make a profit. We can have a small business and plan the price to cover only interest payments, labor cost, capital cost, and depreciation. Doing the business model canvas will also lead us to evaluate the operating costs (what we need to pay to run the business on a day-to-day basis). 
We will also need to detail all the operating cost according to the cost structure.","id":"65","excluded_ids":["N\/A"],"gold_ids_long":["weather_data\/coststructure.txt","weather_data\/businessmodelcanvastemplate.txt"],"gold_ids":["weather_data\/businessmodelcanvastemplate_16.txt","weather_data\/coststructure_27.txt","weather_data\/coststructure_35.txt","weather_data\/coststructure_16.txt","weather_data\/coststructure_41.txt","weather_data\/coststructure_39.txt","weather_data\/coststructure_24.txt","weather_data\/coststructure_23.txt","weather_data\/coststructure_31.txt","weather_data\/coststructure_38.txt","weather_data\/coststructure_10.txt","weather_data\/coststructure_29.txt","weather_data\/coststructure_32.txt","weather_data\/businessmodelcanvastemplate_12.txt","weather_data\/coststructure_30.txt","weather_data\/coststructure_45.txt","weather_data\/coststructure_22.txt","weather_data\/coststructure_14.txt","weather_data\/coststructure_19.txt","weather_data\/coststructure_43.txt","weather_data\/coststructure_26.txt","weather_data\/coststructure_18.txt","weather_data\/coststructure_40.txt","weather_data\/coststructure_28.txt","weather_data\/coststructure_34.txt","weather_data\/coststructure_11.txt","weather_data\/businessmodelcanvastemplate_15.txt","weather_data\/coststructure_15.txt","weather_data\/coststructure_36.txt","weather_data\/businessmodelcanvastemplate_18.txt","weather_data\/coststructure_47.txt","weather_data\/coststructure_46.txt","weather_data\/coststructure_25.txt","weather_data\/coststructure_12.txt","weather_data\/businessmodelcanvastemplate_11.txt","weather_data\/businessmodelcanvastemplate_8.txt","weather_data\/businessmodelcanvastemplate_14.txt","weather_data\/businessmodelcanvastemplate_13.txt","weather_data\/businessmodelcanvastemplate_17.txt","weather_data\/coststructure_8.txt","weather_data\/businessmodelcanvastemplate_19.txt","weather_data\/businessmodelcanvastemplate_20.txt","weather_data\/coststructure_9.txt","weather_data\/coststructur
e_37.txt","weather_data\/coststructure_33.txt","weather_data\/coststructure_17.txt"],"gold_answer":"$\\begingroup$\n\nTo answer this question you'll need a bit more information (which you may or\nmay not have at the moment). I'll assume that there's no competition for your app,\nand that there's no explicit price on the market (prices from other firms\noffering the same service).\n\nFirst of all, you'll need to build a complete business model canvas and answer the\nfollowing questions: (a) what's your mission?, (b) what's your long-term\nvision?, (c) do you want your business to incorporate economic and\/or social\nimpact considerations? Answering those questions will help you determine if\nyou want to make a profit. Not every business needs to make a profit. You can\nhave a small business and plan your price to cover only interest payments,\nlabor cost, capital cost, and depreciation. Doing the business model canvas\nwill also lead you to evaluate your operating costs (what you need to pay to\nrun the business on a day-to-day basis).\n\nHere's a good reference to write your business model canvas: [\nhttps:\/\/corporatefinanceinstitute.com\/resources\/knowledge\/strategy\/business-model-canvas-template\/\n](https:\/\/corporatefinanceinstitute.com\/resources\/knowledge\/strategy\/business-model-canvas-template\/) . 
You'll need to detail all your operating costs\naccording to those categories: [ https:\/\/www.strategyzer.com\/business-model-canvas\/cost-structure ](https:\/\/www.strategyzer.com\/business-model-canvas\/cost-structure)\n\nNow that you have a better vision of your whole business, there are two main ways\nto estimate pricing.\n\n_Pricing without profit_\n\nIf you don't want to generate profit and only give yourself a good salary, you\ncan calculate the price for your product in the following way:\n\n\\begin{equation*} P\\ =\\ OPEX\/n \\end{equation*}\n\nIn that case, the product price is equal to your total (annual) operating\nexpenses (OPEX) divided by the number of clients you expect to sell to (n). A\nbit of market research will be needed to estimate the size of your expected\nclientele.\n\n_Pricing with profit_\n\nIf you decide to generate a profit (ex: if you want to invest in your business\nfor expansion, you'll need to set some money aside for advance payments), you\ncan calculate the price for your product in the following way:\n\n\\begin{equation*} P\\ =\\ \\frac{OPEX\\ +\\ CAPEX\\ +\\ SEI}{n} \\end{equation*}\n\nIn that case, the price of your product is the sum of operating expenses\n(OPEX), capital investment expense (CAPEX), and social\/environmental\ninvestment (SEI), divided by the expected number of sales (n).\n\n_Some advice_\n\nIf you currently live in a medium-size or large-size town, I would suggest you\nget in touch with the entrepreneurial ecosystem and support network. There\nmay be some public or private business incubator that could provide you with\nadvanced support and an office to start."} +{"query":"Does julia's speed advantage over python make any difference for DSGE modeling?\n\nWhen compared to Python the main selling point of Julia is its speed, as is often argued. However, from my own personal experience I never noticed any significant difference in speed between Julia and Python. 
A trivial difference of a few seconds would not matter practically. This could be a problem for models using big data, but outside machine learning are there any (even large) DSGE models where this makes a difference?\n\nI am looking for some examples of significant differences between the two.\n\nPS: Despite the reference request tag I am happy to accept an answer that showcases some examples directly.","reasoning":"The Federal Reserve Bank of New York is using Julia. We can check their reasons, which are potentially related to advantages of Julia. ","id":"66","excluded_ids":["N\/A"],"gold_ids_long":["julia_python\/nyfed.txt"],"gold_ids":["julia_python\/nyfed_1.txt","julia_python\/nyfed_2.txt"],"gold_answer":"$\\begingroup$\n\nJulia is actually a lot faster than Python, also when running DSGE models.\n\nThe [ NY Fed moved their DSGE model to Julia\n](https:\/\/juliacomputing.com\/case-studies\/ny-fed\/) because it allows them to:\n\n> * Estimate models 10x faster\n> * Complete 'solve' test 11x faster\n> * Reduce number of lines of code in half, saving time, increasing\n> readability and reducing errors\n>\n\nThe original results can be found [ here ](http:\/\/frbny-dsge.github.io\/DSGE.jl\/latest\/MatlabToJuliaTransition\/) .\n\nThese speedups are compared to Matlab, which the Fed used before. However, with\nregards to speed, Matlab should be quicker than Python for DSGE models (any\nmatrix statistics, iteration and recursion and so forth). That is also what\nthe first suggested paper in the accepted answer finds (I only looked at the\nrevised 2018 version). By the way, the conclusion in the paper reads as\nfollows:\n\n> ... C++ is the fastest alternative, Julia offers great balance of speed and\n> ease of use, and Python is too slow.\n\nThe screenshot below is directly from the paper. 
[ ![enter image description\nhere](https:\/\/i.sstatic.net\/EANf4l.png) ](https:\/\/i.sstatic.net\/EANf4l.png)\n\nThe slowest Julia implementation takes only 1.6% of the time of the fastest Python\nversion.\n\nThe second paper (2020) essentially tests a model that runs only a few seconds\n(less than 1 second in Julia). [ ![enter image description\nhere](https:\/\/i.sstatic.net\/yCY5ml.png) ](https:\/\/i.sstatic.net\/yCY5ml.png)\n\nNonetheless, as soon as it's more than a fraction of a second (below which compile\ntime latency of Julia will almost certainly be an issue), Julia is\nsignificantly quicker. It may not be a difference you really feel or that\nmatters for you, because waiting for 10 or 15 seconds is essentially not a\ndeal breaker.\n\nUnlike Python, Julia is significantly more complex when it comes to potential\ncode implementation. Everything that runs fast in Python is not Python, but C,\nFortran and co. Julia can also turn slow when type inference fails or the JIT\ncompiler has insufficient information to optimize effectively. If you notice\nno speedup over Python, it is either because you use examples (or benchmark that\nway) where compile time latency matters more than computation, or it is\ninefficient code. I did some work a while ago on Julia's speed and its comparison\nto Python. I will copy-paste some sections in case anyone is interested in\nsome details of why Julia is actually fast.\n\nIn dynamic languages like Python, classes can be subclassed. This is also\npossible in some statically typed languages that are object oriented, such as\nJava. For this reason, a single integer in ` Python 3.x ` actually contains\nfour pieces:\n\n * ` ob_refcnt ` , a reference count that helps Python silently handle memory allocation and deallocation \n * ` ob_type ` , which encodes the type of the variable \n * ` ob_size ` , which specifies the size of the following data members \n * ` ob_digit ` , which contains the actual integer value that we expect the Python variable to represent. 
\n\nThis means that there is some overhead in storing an integer in Python as\ncompared to an integer in, say, Julia or C. Python uses 28 bytes to store an\ninteger, Julia only 8 bytes.\n\nA great feature of Julia is that types are extremely lightweight, and user-\ndefined types are exactly as performant as anything built-in: this is a big\ndifference from languages like Python, where constructing a new instance of a\nclass is very expensive. The amount of data required to store a Struct (Class)\ncalled Foo is the same as the amount of data required to store a pointer to a\nstring. After compilation, Julia (just like Fortran and C) doesn't have the\nadditional overhead with which an object (in, say, Python) is represented\nin memory (type information, reference counter, etc.).\n\nJulia:\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/rfcFqm.png)\n](https:\/\/i.sstatic.net\/rfcFqm.png)\n\nPython:\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/WK8LXm.png)\n](https:\/\/i.sstatic.net\/WK8LXm.png)\n\nYou can see the massive overhead in Python. It's probably even better to say\nthat Julia isn't slow (not that it is fast), meaning it doesn't get in the way\nof the hardware and lets the hardware do its job efficiently. Mathematically,\nthe integer 5 is exactly identical to the floating point 5.0, but it has a\ndifferent bitwise representation. This is why types are important in\ncomputers. Slow programming languages have a very hard time knowing in advance\nwhat types things are. So they test for all sorts of combinations to figure\nout what routine to use to call the correct CPU instructions, because at the\nlowest level, the microprocessor has to perform different instructions when\ndealing with, for example, integers or floats. A microprocessor does not know\ntypes. A CPU needs special instructions for all different types; e.g. 
signed\nand unsigned integers, floats with single precision or double precision, or\ncombinations of floating points and integers, for which you will need\nconversion (promotion) of types. Usually there will be different special\ninstructions for every possible type.\n\nAs long as your task is computational, Julia will always beat Python (provided\nboth are optimized). Where it starts to matter is when you really work with\nlots of data and \/ or complex models.\n\nFor example, look at the question [ Why is this task faster in Python than\nJulia? ](https:\/\/stackoverflow.com\/questions\/70987896\/why-is-this-task-faster-in-python-than-julia) on stackoverflow. If you get slow Julia code, it is almost always code written in\nnon-performant ways. Without any optimization, just getting rid of compile\ntime latency resulted in Julia taking 8s vs 30s in\nPython. Regarding the stackoverflow question, [ Quantinsti\n](https:\/\/blog.quantinsti.com\/julia-programming\/) has some good tests for\nmultiple operations on large datasets. For 100 groups of ~10,000,000 rows,\nPython (pandas package) and R (dplyr package) resulted in an internal error\nand an out of memory error, respectively, while Julia took 2.4 seconds the first\ntime and 1.8 seconds the second time.\n\nJulia is fast because of its design decisions that allow a compiler to make\nefficient code. This is achieved with type-stability through specialization\nvia multiple-dispatch as a core design decision. 
To illustrate this, you can\ntry to compute ` 2^3 `.\n\nIn Julia, you can run the ` @code_llvm ` as well as ` @code_native ` macros to\nprint the LLVM bitcodes as well as the native assembly instructions generated\nfor running the method matching the given generic function and type.\n\nIf you use ` @code_llvm exponential(2,3) ` as well as ` @code_native\nexponential(2,3) ` and compare it to the 2^3 counterpart, you will be\nsurprised how messy the \u201cmanual\u201d exponential solution is compared to Julia\u2019s\nbuilt in.\n\nManual (ignoring almost half the output):\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/5XNk7l.png)\n](https:\/\/i.sstatic.net\/5XNk7l.png)\n\nBuilt-in (entire output):\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/FR0qMl.png)\n](https:\/\/i.sstatic.net\/FR0qMl.png)\n\nThe built in solution is something you can decipher fairly quickly with some\nbasic assembly.\n\n * The lines beginning with ` ; ` are comments, and explain which section of the code the following instructions come from. Calling it with ` debuginfo=:none ` removes the comments in the assembly code \n * ` pushq ` , ` movq ` , ` subq ` and ` addq ` are to save stack frames, move content to or from [ registers ](https:\/\/en.wikipedia.org\/wiki\/64-bit_computing#Architectural_implications) (the suffix ` q ` is short for ` quad ` , which means 64-bit integer), allocate to registers and deallocate. None of these is important for our purpose. In short, a CPU operates only on data present in registers. These are small, fixed size slots (e.g. 8 bytes in size) inside the CPU itself. A register is meant to hold one single piece of data, like an integer or a floating point number. \n * ` movabsq ` moves a 64-bit constant into a register; together with the multiply instructions it implements the so called [ Exponentiation by squaring ](https:\/\/en.wikipedia.org\/wiki\/Exponentiation_by_squaring) . 
For example if you compute $x^{15}$ , it will look like this \n\n \n \n x^15 = (x^7)*(x^7)*x \n x^7 = (x^3)*(x^3)*x \n x^3 = x*x*x\n \n\nIn our simple example above, only the last line is needed, since we only compute $x^3$.\n\nThe reason this works is that Julia's built in function made some clever\ndesign decisions. For instance, ` x^-n ` works for an integer ` x ` and\nliteral ` n ` but not for expressions, because Julia developers [ defined\n](https:\/\/github.com\/JuliaLang\/julia\/issues\/20527) a special syntactic\ntreatment of literals.\n\n * The benefit is quick and reliable exponentiation for something like $x^{-3}$ . The advantage is that [ literal powers ](https:\/\/docs.julialang.org\/en\/v1\/base\/math\/#Base.:%5E-Tuple%7BNumber,%20Number%7D) automatically become type-stable, because they turn into a fixed number of * calls. This is why Julia now lowers ` x^-3 ` to ` inv(x)^3 ` . \n * The downside is that with $p = -3$ , $x^p$ does not work and throws an error, which seems potentially quite confusing. However, ` x^literal ` has a different meaning than ` x^expression ` . In essence, referential transparency was sacrificed, and type stability \"extended\": That is why ` ^ ` to a literal integer power is different from raising to a variable with the same integer value. \n\n**Note:** The crux here is that type-stability doesn\u2019t mean that the type\nreturned by a function is the same as the input. It means that the type can be\ninferred at every step of the way. If inv of an integer is always a floating\npoint number, then that\u2019s type stable.\n\nThe problem, however, is that most casual Python users will expect x^p to work\nfor p = -3. \n \n\nSimilarly, scope matters a lot. 
You can look at this simple function.\n\n \n \n b = 1.0\n function g(a)\n global b\n tmp = a\n for i in 1:1_000_000\n tmp = tmp + a + b\n end\n return tmp\n end\n \n\nb is global here, which is like poison for performance because the type of a\nglobal variable is never certain.\n\nIf you instead write\n\n \n \n function g(a, b)\n tmp = a\n for i in 1:1_000_000\n tmp = tmp + a + b\n end\n return tmp\n end\n \n\nyou are eliminating the global variable which\n\n\u2022 reduces the number of allocations to zero (from 3000001 allocations: 45.78\nMiB) \n\u2022 speeds up execution and \n\u2022 produces clean and fairly simple machine code.\n\nFor short, with some care, Julia can be on par with the fastest languages.\n\nThese allocations matter massively. On my private laptop for example, finding\na single file (including opening and closing the file) takes about 1.4\nmilliseconds. Accessing 1,000,000 integers from memory takes 75.7\nmilliseconds. So my RAM is almost 20,000 times faster than my disk. I ran this\nwith a [ reasonably fast\n](https:\/\/ssd.userbenchmark.com\/SpeedTest\/1317771\/E12S-512G-PHISON-SSD-B27B)\nSSD but even the newer Optane technology disks are usually thousands of times\nslower than RAM.\n\nI ran a speed test between C, Python and Julia, which is based on a notebook\nfrom an introductory [ MIT math course\n](https:\/\/github.com\/mitmath\/18S096\/blob\/master\/lectures\/lecture1\/Boxes-and-\nregisters.ipynb) .\n\nIt is a speed comparisons between C, Python and Julia by implementing a\n**sum** function ` sum(a) ` , which computes\n\n$$ \\mathrm{sum}(a) = \\sum_{i=1}^n a_i $$\n\nfor an array ` a ` with ` n ` elements. I compared built in ` sum ` functions\nin Julia and Python along with hand-coded implementations in C, Python, and\nJulia. 
If you are a Windows user and want to replicate this on your own, you\nwill need to install the MinGW (GCC\/G++) compiler.\n\nThe C code is:\n\n \n \n C_code = \"\"\"\n #include \n double c_sum(size_t n, double *X) {\n double s = 0.0;\n for (size_t i = 0; i < n; ++i) {\n s += X[i];\n }\n return s;\n }\n \"\"\"\n \n const Clib = tempname() # make a temporary file\n \n # compile to a shared library by piping C_code to gcc\n open(`gcc -fPIC -O3 -msse3 -xc -shared -o $(Clib * \".\" * Libdl.dlext) -`, \"w\") do f\n print(f, C_code) \n end\n \n\nJulia's\n\n \n \n function my_sum1(x)\n result = zero(eltype(x))\n for element in x\n result += element\n end\n return result\n end\n \n\nNot only is the Julia code significantly easier, but also, unlike C, it works\nin Julia for any type of array (any iterable that provides an ` eltype `\nmethod).\n\nPython (within a Julia Kernel using PyCall)\n\n \n \n py\"\"\"\n def mysum(a):\n s = 0.0\n for x in a:\n s = s + x\n return s\n \"\"\"\n mysum_py = py\"mysum\"\n \n\nThe results are\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/RBhSI.png)\n](https:\/\/i.sstatic.net\/RBhSI.png)\n\nThere are a few interesting aspects.\n\n * Starting with C, testing the function on an array of $10^7$ random numbers in [0,1] gives a result of \u2248 10ms for summing these numbers, which means that we have about 1 billion additions per second. This sounds a lot, but is well below the clock-speed of 2.5GHZ my [ processor ](https:\/\/www.cpubenchmark.net\/cpu.php?cpu=Intel+Core+i5-7300HQ+%40+2.50GHz&id=2922) has. This is because each Floating-Point addition needs to perform several additional calculations to load the next element of the array from memory, plus the time needed to access memory itself. \n * Probably most interesting is that Python's built in sum is fairly slow, even though the Python sum function is written in C (Julia's sum is built using only Julia). It takes almost 4x longer than the equivalent C code and allocates memory. 
Python code pays a price for being generic and being able to handle arbitrary iterable data structures. See above for memory allocation differences between Julia and Python. Therefore, there are not only the computations involving addition, but also the overhead from fetching each item from memory. \n * Julia equivalent of a Python list is a Vector{Any}: internally, this is an array of pointers to \"boxes\" that can hold any type (Any). This makes things much slower: each + computation on an Any value must dynamically look up the type of object, figure out what + function to call, and allocate a new \"box\" to store the result. If you run this logic in Julia, it is even slower than Python. This is the same problem as with the manual Numpy array implementation. Python is faster (better optimized for dealing) with untyped ` Any ` values. This is because Julia expects you to use \"concretely\" typed arrays in all performance-critical cases and why you should always avoid abstract types in Julia. \n * On the other hand, Numpy arrays can take advantage of the fact that all of the elements are of the same type. This is actually faster than C! The reason is that NumPy gets and extra turbo boost from exploiting [ SIMD instructions ](https:\/\/en.wikipedia.org\/wiki\/Streaming_SIMD_Extensions) . 
\n\nYou can run SIMD for C as well as Julia as well.\n\nJulia\n\n \n \n function mysum(A)\n s = zero(eltype(A)) \n @simd for a in A\n s += a\n end\n return s\n end\n \n\nC\n\n \n \n const Clib_fastmath = tempname() # make a temporary file\n \n # The same as above but with a -ffast-math flag added\n open(`gcc -fPIC -O3 -msse3 -xc -shared -ffast-math -o $(Clib_fastmath * \".\" * Libdl.dlext) -`, \"w\") do f\n print(f, C_code) \n end\n \n # define a Julia function that calls the C function:\n c_sum_fastmath(X::Array{Float64}) = ccall((\"c_sum\", Clib_fastmath), Float64, (Csize_t, Ptr{Float64}), length(X), X)\n \n\n[ ![enter image description here](https:\/\/i.sstatic.net\/ZoFTDl.png)\n](https:\/\/i.sstatic.net\/ZoFTDl.png)\n\nHowever, once you work with SIMD, you really need to know what you do when you\nimplement and use this. Incorrect use of the @simd macro may cause unexpected\nresults.\n\nWhile this may sound like esoteric computer science stuff, you can look at the\n[ following question ](https:\/\/quant.stackexchange.com\/a\/63891\/54838) to see\nwhat can happen with floating point math results in a reasonable normal\nsituation. 
The most intriguing example demonstrating the dangers of floating\npoint math I have seen so far comes from [ Stefan Karpinksi\n](https:\/\/discourse.julialang.org\/t\/array-ordering-and-naive-summation\/1929) .\nYou can use this to illustrate the difference between left-to-right summation\nand left-to-right summation with a @simd annotation.\n\nEssentially, if you use this code snippet in Julia,\n\n \n \n function sumsto(x::Float64)\n 0 <= x < exp2(970) || throw(ArgumentError(\"sum must be in [0,2^970)\"))\n n, p\u2080 = Base.decompose(x) # integers such that `n*exp2(p\u2080) == x`\n [floatmax(); [exp2(p) for p in reverse(-1074:969) if iseven(n >> (p-p\u2080))]\n \n -floatmax(); [exp2(p) for p in reverse(-1074:969) if isodd(n >> (p-p\u2080))]]\n end \n \n s = rand()*10^6\n v = sumsto(s)\n \n\nyou order the random vector (10^6) from highest to lowest value. None of the\nstandard algorithms in Julia or Python provide the true sum, although\ninterestingly, in this example SIMD comes closest in this case. The reason is\nthat in the case of summation, strict left-to-right summation is actually one\nof the least accurate algorithms, so putting @simd on the loop makes it faster\nand more accurate at computing the true sum because SIMD largely allows for\nreassociation of mathematically associative operations which are not actually\nassociative in floating point. This includes ` + ` and ` * ` . Generally, ` (a\n+ b) + c ` can be evaluated as ` a + (b + c) ` instead, but this does not\ncompute the same thing for floating-point values.\n\nThe correct answer can be obtained with the [ Kahan-Babuska-Neumaier (KBN)\n](https:\/\/en.wikipedia.org\/wiki\/Kahan_summation_algorithm) summation algorithm\nfor computing sums (and cumulative sums). This comes at a speed difference of\nabout 20x, which is why none of the conventional implementations uses it.\n\nBottom line is, in my opinion, that Julia is serving a niche market. 
While the\npaper states that Julia comes with a great mix of speed and usability, the\nmain problem is that it is still quite a task to reach C (C++ or Fortran\nspeed) in Julia. If you require real speed, firms will prefer C++ (at least in\nmy field, quant finance, C++ is the by far most frequently used language, with\nsome niche products like OCAML used by Lexifi or Bloomberg's DLIB, which focus\non complex derivs). Most users don't need that speed because the models are to\nsmall, or the data is not big enough. E.g. an everyday Bloomberg user could\nnever download as much data (BBG has a limit on daily data usage) that it\nreally makes a noticeable difference, just like ~150 seconds (compared to 2 in\nJulia) don't make a huge difference if you run a DSGE model because there is\nno need to rerun it every 3 minutes or at even shorter periods.\n\nThat said, packages are really developing in Julia, and charting\nfunctionalities and the DataFrames package are really well done. You can look\n[ here ](https:\/\/quant.stackexchange.com\/a\/69780\/54838) to see how little code\nis needed to make some kinda decent interactive charts (dynamic option PnL\ncharts).\n\nAlso, since the release of 1.0, you don't usually face issues with breaking\ncode due to deprecation anymore. If everyday datasets grow, and computational\npower becomes more important for casual programmers, Julia could very well\nbecome a big player (subjective opinion)."} +{"query":"Solve for the Walrasian demand, Utility of three variables, and Convexity of Preferences?\n\nI am given \ud835\udc48(\ud835\udc65,\ud835\udc66,\ud835\udc67)=\ud835\udc6523\ud835\udc6613+\ud835\udc67\n. I am asked to solve the following:\n\n(i) Prove the convexity of these preferences (convex, strictly convex or neither?)\n(ii) Solve for the Walrasian Demand?\n\nFor part 1, I calculated the determinant of the bordered hessian matrix and got 32081\u22171\ud835\udc6523\ud835\udc6643\n. 
Here I concluded that if x and y are greater than zero (This was not given to me I assumed this), then determinant is greater than zero so \ud835\udc48\n must be quasi-concave and hence preferences are convex. Is this correct?\n\nFor part 2, I considered three cases.\n\nCase 1: when \ud835\udc67=0\n and \ud835\udc65,\ud835\udc66>0\n. This was just the Walrasian for the standard Cobb Douglas we are left with.\n\nCase 2: when \ud835\udc65\n or \ud835\udc66=0\n and \ud835\udc67>0\n. All wealth is spent on z as well.\n\nCase 3: \ud835\udc65,\ud835\udc66,\ud835\udc67>0\n. This I was unable to compute and did not know how to proceed.\n\nNote: Budget constraint is standard \ud835\udc431\ud835\udc4b+\ud835\udc432\ud835\udc4c+\ud835\udc433\ud835\udc4d=\ud835\udc4a\n\nFor Case 3: I set up Lagrangian and used Kuhn Tucker Conditions:\n\n23\n*(\ud835\udc66\ud835\udc65)13\u2212\ud835\udf06\ud835\udc431\u22640\n13\n*(\ud835\udc65\ud835\udc66)23\u2212\ud835\udf06\ud835\udc432\u22640\n1\u2212\ud835\udf06\ud835\udc433\u22640\n\ud835\udc65[23\n*(\ud835\udc66\ud835\udc65)13\u2212\ud835\udf06\ud835\udc431]=0\n\ud835\udc66[13\n*(\ud835\udc65\ud835\udc66)23\u2212\ud835\udf06\ud835\udc432]=0\n\ud835\udc67[1\u2212\ud835\udf06\ud835\udc433]=0\n\ud835\udc4a\u2212\ud835\udc431\ud835\udc4b\u2212\ud835\udc432\ud835\udc4c\u2212\ud835\udc433\ud835\udc4d\u22650\n\ud835\udf06[\ud835\udc4a\u2212\ud835\udc431\ud835\udc4b\u2212\ud835\udc432\ud835\udc4c\u2212\ud835\udc433\ud835\udc4d]=0\nImposing \ud835\udc65,\ud835\udc66,\ud835\udc67>0\n and Walras's law, I know that I can equate 1, 2, 3, and 7 to zero. 
Essentially I end up with an equation which says that at optimal Marginal Utility to Price Ratio of each good is same.\n\nAfter simplifying by equating \ud835\udf06\n, I get this from 1, 2:\n\n\ud835\udc65\ud835\udc432=2\ud835\udc66\ud835\udc431\n.\nMy problem is that I can't solve for \ud835\udc67\n as I can't get my budget constraint all in terms of one variable.","reasoning":"The part 1 looks correct. For part 2, recall that Walrasian demand is the solution to utility maximization subject to budget constraint. One important step is to derive the Kuhn-Tucker conditions. We may check its definition and example.","id":"67","excluded_ids":["N\/A"],"gold_ids_long":["walrasian_demand\/t.txt"],"gold_ids":["walrasian_demand\/t_2.txt"],"gold_answer":"$\\begingroup$\n\nFor part (i), in complete rigor, you should also check [ the determinants of\nall the leading principal minors of the bordered Hessian and make sure that\nthey have alternating signs\n](https:\/\/mjo.osborne.economics.utoronto.ca\/index.php\/tutorial\/index\/1\/qcc\/t#d:borderedHessian)\n. Your final conclusion looks correct though.\n\nFor part (ii), recall that Walrasian demand is the solution to utility\nmaximization subject to budget constraint. So you should setup a Lagrangian,\nderive the [ Kuhn-Tucker conditions\n](https:\/\/mjo.osborne.economics.utoronto.ca\/index.php\/tutorial\/index\/1\/ktc\/t)\n, and then solve for $x,y,z$ as functions of the prices and income. These\nwill be the Walrasian demands."} +{"query":"Test which functional form that best explains data\n\nTried asking this on Math Stack Exchange. Got no answer after a week, so trying here.\n\nI had this question in an exam lately and I was not sure how to answer it. 
Now the exam is done, and I can't go back, but it's been in my head ever since and I'm really curious about the answer.\n\nSuppose you have a data set, with variables:\n\n\ud835\udc4e\ud835\udc54\ud835\udc52\n: A persons age\n\n\ud835\udc4e\ud835\udc54\ud835\udc522\n: age to the power of 2\n\nAnd dummy variables: \ud835\udc3745=(\ud835\udc4e\ud835\udc54\ud835\udc52=45)\n, \ud835\udc3746=(\ud835\udc4e\ud835\udc54\ud835\udc52=46)\n ... \ud835\udc3755=(\ud835\udc4e\ud835\udc54\ud835\udc52=55)\n etc.\n\nAnd suppose you have two models, where\n\n\ud835\udc66=\ud835\udefd1\ud835\udc651+\ud835\udefd2\ud835\udc4e\ud835\udc54\ud835\udc52+\ud835\udefd3\ud835\udc4e\ud835\udc54\ud835\udc522\nand\n\ud835\udc66=\ud835\udefd1\ud835\udc651+\ud835\udefd2\ud835\udc3745+\ud835\udefd3\ud835\udc3746...\ud835\udefd22\ud835\udc3765\n\nHow would you guys test which of the functional forms in the two models best explains the data?\n\nI suppose we are to test \ud835\udefd\ud835\udc56=0\n for both models. But I am not sure.\n\nWhat would you guys have done in this situation?\n\nKind regards","reasoning":"We can test for the fit of the models using something like a likelihood ratio test, i.e. the ratio indicates the expression\/fitness of the model.","id":"68","excluded_ids":["N\/A"],"gold_ids_long":["likelyhood\/Likelihoodratiotest.txt"],"gold_ids":["likelyhood\/Likelihoodratiotest_10.txt","likelyhood\/Likelihoodratiotest_6.txt","likelyhood\/Likelihoodratiotest_8.txt","likelyhood\/Likelihoodratiotest_4.txt","likelyhood\/Likelihoodratiotest_12.txt","likelyhood\/Likelihoodratiotest_7.txt","likelyhood\/Likelihoodratiotest_11.txt"],"gold_answer":"$\\begingroup$\n\nNotice that: $$ y = \\beta_1 x_1 + \\beta_2 age + \\beta_3 age^2 $$ is a more\nrestrictive model than: $$ y = \\delta_1 x_1 + \\delta_2 D_{45} + \\delta_3\nD_{46} + \\ldots $$ where you have a dummy for every age level.\n\nTo see this, take for example the case where age is $a$ . 
Then the first\nregression gives: $$ y = \\beta_1 x_1 + \\beta_2 \\times a + \\beta_3 \\times\n(a)^2 $$ The second regression gives: $$ y = \\delta_1 x_1 + \\delta_a. $$\nwhere $\\delta_a$ is the coefficient of the dummy $D_a$ .\n\nSo for $\\delta_1 = \\beta_1$ and $\\delta_a = \\beta_2 \\times a + \\beta_3\n\\times (a)^2$ the two are the same.\n\nThis means that the first regression is a special case of the second one where\nwe specify $\\delta_1 = \\beta_1$ and $\\delta_a = \\beta_2 a + \\beta_3 a^2$ .\n\nGiven that the first regression is a restrictive version of the second one, in\nprinciple, you could test for the fit of the first model versus the second\nmodel using something like a likelihood ratio test [ wiki\n](https:\/\/en.wikipedia.org\/wiki\/Likelihood-ratio_test) ."} +{"query":"Why did Germany not suffer from Great Inflation in the 1970s\/80s?\n\nComparing the inflation rate of some of the industrial countries around 1975 (Great Inflation) in between the two oil shocks of 1973 and 1979\/80, it seems odd that most countries' inflation rate was very high at around 22%, whereas Germany does not seem to have suffered from such a high rate (at most 7% which is quite low compared to the others)\n\nWhy is that? Are there any reasons for that, f.ex. due to Germany leaving the Bretton Woods System in 1973? 
What are some reasonable explanations?","reasoning":"This is related to the monetary policy conducted by Bundesbank.","id":"69","excluded_ids":["N\/A"],"gold_ids_long":["germany_inflation\/w14596.txt"],"gold_ids":["germany_inflation\/w14596_30.txt"],"gold_answer":"$\\begingroup$\n\nThere is actually a paper on this topic by [ Beyer et al (2009)\n](https:\/\/www.ecb.europa.eu\/pub\/pdf\/scpwps\/ecbwp1020.pdf) titled \u2018Opting out\nof the great inflation: German monetary policy after the breakdown of Bretton\nWoods\u2019 that exactly addresses your question.\n\nAs the authors explain:\n\n> During the turbulent 1970s and 1980s the Bundesbank established an\n> outstanding reputation in the world of central banking. Germany achieved a\n> high degree of domestic stability and provided safe haven for investors in\n> times of turmoil in the international financial system\u2026 We derive an\n> interest rate rule and show empirically that it approximates the way the\n> Bundesbank conducted monetary policy over the period 1975-1998. We compare\n> the Bundesbank's monetary policy rule with those of the FED and of the Bank\n> of England. We find that the Bundesbank's policy reaction function was\n> characterized by strong persistence of policy rates as well as a strong\n> response to deviations of inflation from target and to the activity growth\n> gap. In contrast, the response to the level of the output gap was not\n> significant.\n\nBasically, to sum up and bit oversimplify the paper, it was thanks to\nBundesbank\u2019s masterful use of inflation expectation anchoring as well as\npursuing relativity tight monetary policy.\n\nMoreover, Germany was viewed as a safe country for investors so it had capital\ninflows that also helped."} +{"query":"Irreversibility in the Creation of Value\n\nIn his book The Origin of Wealth, Eric D. 
Beinhocker says that\n\nA pattern of matter, energy, and\/or information has economic value if the following three conditions are jointly met:\n\nIrreversibility: All value-creating economic transformations and transactions are thermodynamically irreversible.\nEntropy: All value-creating economic transformations and transactions reduce entropy locally within the economic system, while increasing entropy globally.\nFitness: All value-creating economic transformations and transactions produce artifacts and\/or actions that are fit for human purposes.\nThese conditions seems pretty logical for me, particularly the \"fitness\" thing. However, I wonder about the \"irreversibility\" part: it sounds logical to state that, say, once a tree is transformed into a chair it cannot be turned into a tree again; so, making a chair is clearly irreversible. In the same vein, if a barber cut the hair of someone, he or she can not \"uncut\" it - irreversibility, of course.\n\nBut what if I was a doorman? My job would be to open a door for the others the whole day, which is a service that many people value to some extent, i.e., I would be creating some value in the economic sense of the term, right? But I can close the door as easily as I can open it, so it is something clearly reversible. Thus, it seems obvious I may do something \"reversible\" that is valued by someone - should I conclude Beinhocker is wrong or - I as suspect - I am missing something?","reasoning":"This is about the understanding of Irreversibility. We need to check the concept of thermodynamic irreversibility.","id":"70","excluded_ids":["N\/A"],"gold_ids_long":["Irreversibility\/Irreversibleprocess.txt"],"gold_ids":["Irreversibility\/Irreversibleprocess_1.txt","Irreversibility\/Irreversibleprocess_2.txt"],"gold_answer":"$\\begingroup$\n\nThe author talks about thermodynamic irreversibility. Thermodynamic\nirreversibility exist any time that original conditions cannot be restored\nwithout expenditure of energy. 
Opening and closing doors is thermodynamically\nirreversible (eg see [ here\n](https:\/\/en.m.wikipedia.org\/wiki\/Irreversible_process) ). Only actions where\nthere is no dissipation are thermodynamically reversible, as implied by the\nlaws of thermodynamics.\n\nHowever, this being said Beinhocker uses extremely fringe definition of value.\nIn economics we do not typically define value in terms of thermodynamics.\n\nIn economics, the dominant theory of value is the subjective theory of value\nthat posits that things have value because individuals subjectively perceive\nthem to have value."} +{"query":"Why is there such a big difference of purchasing power toward Coca-Cola between Euro in France and USD in USA, comparing to Big-Mac?\n\nI have an assignment that requires me to pick an item and use it to create my own price level index, similar to Big-Mac index.\n\nIn March 2021, the price of 0.5 litre Coca-Cola in France was 0.97 Euro and in the United States was 2.75 USD. Therefore, the \u201cCoca-Cola exchange rate\u201d was 2.84 USD per Euro. (Source: https:\/\/www.globalproductprices.com\/France\/coca_cola_price\/)\n\nHowever, in 2018 the price of Big-Mac in France was 4.2 Euro. Meanwhile, in the United States it costs only 5.65 dollars. This ratio is 1.35 USD per Euro, approximately half of the one derived from price of Coca-Cola. (Source: https:\/\/data.nasdaq.com\/data\/ECONOMIST\/BIGMAC_FRA-big-mac-index-france, https:\/\/data.nasdaq.com\/data\/ECONOMIST\/BIGMAC_USA-big-mac-index-united-states)","reasoning":"Thie is likely to be related to the different taxation in the US and France. We can check the detailed data.","id":"71","excluded_ids":["N\/A"],"gold_ids_long":["purchasing_power_us_frence\/IndexaspxDataSetCodeCTSETR.txt"],"gold_ids":["purchasing_power_us_frence\/IndexaspxDataSetCodeCTSETR_345.txt"],"gold_answer":"$\\begingroup$\n\nThis could be for several reasons:\n\n 1. this can be because coca-cola is tradable good whereas Big Mac is non-tradable. 
You would expect the law of one price to hold better for tradables than non-tradables. Even though the prices\/wages in tradable sector affect those in non-tradable (e.g. Balassa-Samuelson effect), there is no good reason to assume the law of one price actually holds for non-tradables. \n\n 2. Limitation of Big Mac or Coca-Cola etc indexes are that they also capture effect of local taxes, import duties, competition etc. \n\nAs far as data show the taxation in the US and France are not exactly same\n(e.g. see [ OECD ](https:\/\/stats.oecd.org\/Index.aspx?DataSetCode=CTS_ETR) and\n[ USCIB ](https:\/\/www.uscib.org\/value-added-tax-rates-vat-by-country\/) ).\n\nAs pointed in the comments there are likely differences between level of\ncoopetition in US vs Fr retail, and fastfood."} +{"query":"What does chi p(q) mean?\n\nIt is a beginner question but I did not find a good explanation so I am asking here. Hope that I received the help from the community.\n\nToday I run a joint null test individually like that\n\ntest (tau0=0) (tau1=0) (tau2=0)\nAnd the result is\n\n( 1) tau0 = 0\n ( 2) tau1 = 0\n ( 3) tau2 = 0\n\n chi2( 3) = 1.12\n Prob > chi2 = 0.7717\nI am wondering what does the numbers 2 and 3 in chi2(3) mean. I did a search from Wikipedia but I did not fully get it.\n\n","reasoning":"This is related to chi-squared distribution with k degrees of freedom","id":"72","excluded_ids":["N\/A"],"gold_ids_long":["chi_squared\/Chisquareddistribution.txt"],"gold_ids":["chi_squared\/Chisquareddistribution_47.txt"],"gold_answer":"$\\begingroup$\n\nThis is most likely a [ $\\chi^2$ distribution\n](https:\/\/en.wikipedia.org\/wiki\/Chi-squared_distribution) with a degree of\nfreedom of 3. 
From Wikipedia:\n\n> In probability theory and statistics, the chi-squared distribution [...]\n> with k degrees of freedom is the distribution of a sum of the squares of k\n> independent standard normal random variables."} +{"query":"How to compare investments with different risk and expected return?\n\nSupposing I can choose to invest money in several different investments, each having\n\nrisk \ud835\udf0e\ud835\udc56\n, for example, calculated as standard deviation\nand expected return \ud835\udc5f\ud835\udc56\nlet's assume they have the same duration and same initial investment to keep things simpler\nFor example if \ud835\udf0e1=\ud835\udf0e2\n, I can easily choose the one for which \ud835\udc5f\ud835\udc56\n is greater. if for example \ud835\udf0e1=1.1\ud835\udf0e2\n and \ud835\udc5f1=2\ud835\udc5f2\n, reasonably \ud835\udc5f1\n is better. From the above, I could decide to pick the investment for which \ud835\udc5f\ud835\udc56\ud835\udf0e\ud835\udc56\n is greater. But this is just empirical and arbitrary. Is this method any good? 
Is there any more rigorous way of choosing between investments?","reasoning":"One approach is to approximate expected utility by a function\n of mean and variance","id":"73","excluded_ids":["N\/A"],"gold_ids_long":["invest_risk_return\/1807.txt"],"gold_ids":["invest_risk_return\/1807_13.txt","invest_risk_return\/1807_7.txt","invest_risk_return\/1807_2.txt","invest_risk_return\/1807_10.txt","invest_risk_return\/1807_14.txt","invest_risk_return\/1807_16.txt","invest_risk_return\/1807_17.txt","invest_risk_return\/1807_5.txt","invest_risk_return\/1807_8.txt","invest_risk_return\/1807_1.txt","invest_risk_return\/1807_9.txt","invest_risk_return\/1807_3.txt","invest_risk_return\/1807_6.txt","invest_risk_return\/1807_11.txt","invest_risk_return\/1807_4.txt","invest_risk_return\/1807_15.txt","invest_risk_return\/1807_12.txt","invest_risk_return\/1807_0.txt"],"gold_answer":"$\\begingroup$\n\nThe trade-off between risk and expected returns depends on your own\npreferences.\n\nAssume that you are expected utility maximizer and let the return of the\ninvestment be given by the random variable $X$ . Your utility is given by.\n$$ \\mathbb{E}(u(X)) $$\n\nLet $\\mu$ be the mean of $X$ and let $\\sigma^2$ be the variance of $X$\nthen taking a Taylor expansion of $u(x)$ around $u(\\mu)$ gives: $$ u(x)\n\\approx u(\\mu) + u'(\\mu)(x - \\mu) + \\frac{u''(\\mu)}{2}(x - \\mu)^2 $$ Taking\nexpectations on both sides gives: $$ \\begin{align*} \\mathbb{E}(u(X)) &\\approx\nu(\\mu) + u'(\\mu)(\\mu - \\mu) + \\frac{u''(\\mu)}{2}\\sigma^2,\\\\\\ &= u(\\mu) +\n\\frac{u''(\\mu)}{2} \\sigma^2.\\\\\\ \\end{align*} $$\n\nSo the higher the curvature of $u$ (the more negative $u''(\\mu)$ ) the\nmore negative the second term will be. Intuitively, the curvature measures the\ndegree of aversion for uncertainty.\n\nRemark that this approximation will only be good when $X$ does not deviate\ntoo much from the mean, so the Taylor expansion is good.\n\nYou can also take a Taylor approximation around zero. 
This gives: $$\n\\begin{align*} u(x) \\approx u(0) + u'(0) x + \\frac{u''(0)}{2} x^2,\\\\\\\n\\end{align*} $$ So: $$ \\mathbb{E}(u(X)) \\approx u(0) + u'(0) \\mu +\n\\frac{u''(0)}{2} (\\sigma^2 + \\mu^2) $$\n\nSee also the paper of [ Levy and Markowitz (1979), \"Approximating Expected\nUtility by a Function of Mean and Variance\", American economic review, 69,\n308-317 ](https:\/\/www.jstor.org\/stable\/1807366) ."} +{"query":"IV regression: first stage in logs, second stage in levels?\n\nI have a regression in levels, derived from theory.\n\nI want to instrument one of the variables, but the best instrument I find has a weak correlation to the endogenous variable in levels, and a strong correlation in logs. Both are very heteroskedastic.\n\nIs it possible to somehow instrument with the first stage in logs, and the second stage in levels?","reasoning":"This is called forbidden regression. We need to check calculation and results there.","id":"74","excluded_ids":["N\/A"],"gold_ids_long":["forbidden_regression\/notesonforbiddenregressionspdf.txt"],"gold_ids":["forbidden_regression\/notesonforbiddenregressionspdf_0.txt","forbidden_regression\/notesonforbiddenregressionspdf_4.txt","forbidden_regression\/notesonforbiddenregressionspdf_2.txt","forbidden_regression\/notesonforbiddenregressionspdf_3.txt","forbidden_regression\/notesonforbiddenregressionspdf_1.txt"],"gold_answer":"$\\begingroup$\n\nWhat you are describing is the so called \" _forbidden regression_ \", which (in\ngeneral) does not give consistent estimates. This is a summary of the [ notes\nof Ben Williams\n](https:\/\/benjamindwilliams.weebly.com\/uploads\/6\/8\/5\/7\/68575765\/notesonforbiddenregressions.pdf)\n\nConsider a (nonlinear) first stage regression of $X$ on the instruments $Z$\ngiving fitted values (e.g. 
using a log-log specification): $$ \\hat X = \\hat\n\\mu(Z) $$ Consider the structural (causal) equation: $$ Y = X'\\beta + u $$\nWhat you propose is to use $\\hat X : \\hat \\mu(Z)$ instead of $X$ in the\nsecond stage. This gives: $$ \\begin{align*} \\hat \\beta &= (\\hat X' \\hat\nX)^{-1} \\hat X Y,\\\\\\ &= (\\hat X' \\hat X)^{-1} \\hat X (X' \\beta + u),\\\\\\ &=\n(\\hat X' \\hat X)^{-1} \\hat X (\\hat X' \\beta) + (\\hat X' \\hat X)' \\hat X'(X -\n\\hat X)'\\beta + (\\hat X' \\hat X)^{-1}\\hat X u,\\\\\\ &= \\beta + \\underbrace{(\\hat\nX' \\hat X)' \\hat X'(X - \\hat X)'\\beta}_A + \\underbrace{(\\hat X' \\hat\nX)^{-1}\\hat X u}_B, \\end{align*} $$ If $Z$ is a valid instrument, on can\nexpect that the $B$ vanishes as $\\hat X = \\hat \\mu(Z)$ and by assumption\n$\\mathbb{E}(u|Z) = 0$ .\n\nNow the $A$ terms is the real problem. Notice that we can always write: $$\nX = \\mathbb{E}(X|Z) + \\eta,\\\\\\ \\text{ with } \\mathbb{E}(\\eta|Z) = 0 $$ (here\n$\\eta$ is simply $X - \\mathbb{E}(X|Z)$ ).\n\nThen taking the middle part of the $A$ term gives: $$ \\hat X'(X - \\hat X) =\n\\hat X'(\\mathbb{E}(X|Z) - \\hat X) + \\hat X'\\eta,\\\\\\ $$ The last term should\nvanish as $\\mathbb{E}(\\eta|Z) = 0$ . The first term however, will (in\ngeneral) only vanish if $\\hat X = \\hat \\mu(Z)$ is consistent for\n$\\mathbb{E}(X|Z)$ , which will be the case if $\\mu(Z)$ is a correct\nspecification of $\\mathbb{E}(X|Z)$ .\n\nThe usual 2SLS however is consistent as in this case: $$ \\hat X =\nZ(Z'Z)^{-1}Z'X. $$ Then: $$ \\begin{align*} \\hat X'(X - \\hat X) &=\nX'Z(Z'Z)^{-1}Z'(X - Z(Z'Z)^{-1}Z'X),\\\\\\ &= X'Z(Z'Z)^{-1}Z'X -\nX'Z(Z'Z)^{-1}Z'Z(Z'Z)^{-1}Z'X,\\\\\\ &= X'Z(Z'Z)^{-1}Z'X - X'Z(Z'Z)^{-1}Z'X = 0\n\\end{align*} $$ So either you do normal 2SLS, which will be consistent if\n$Z$ is uncorrelated with $u$ or you can use what is called indirect least\nsquares.\n\n 1. Regress $X$ on $Z$ using a nonlinear regression (e.g. loglinear regression). \n\n 2. 
Use the fitted values $\\hat X = \\hat \\mu(Z)$ as instruments themselves in a 2SLS of $Y$ on $X$ . So run 2SLS with instruments $\\hat X = \\hat \\mu(Z)$ instead of $Z$ . As $\\mu(Z)$ is a function of $Z$ , we also have that $\\mathbb{E}(u|\\mu(Z)) = 0$ , so these should be valid instruments."} +{"query":"How could you partition Household Food Expenditures into Producing Industries?\n\nFrom the 2019 Consumer Expenditure Survey, the average household annual food expenditure was $8,169 for that year. There are subcategories such as \"Food at Home\" and \"Food away from Home,\" but for simplicity, I'll keep the scope to annual household food expenditures.\n\nI want to partition this $8,169 into the BEA industries (aggregation level is \"summary\") for the ultimate goal of assessing pollution impact. That is, if buying bread is related to a farm purchasing harvesting equipment, maybe some of those dollars should be allocated towards both farms and industrial machinery.\n\nHere are a few immediately relevant BEA industry summaries:\n\n111CA: Farms\n113FF: Forestry, Fishing & Related activity (the fish is sold to markets?)\n311FT: Food and Beverage & tobacco\n445: Food and Beverage Stores\n722: Food services and Drinking places\nIn my studies of the Leontief Input\/Output model, one thought is just to pile all $8,169 into 311FT and let the \"recipe\" tell me how much that 111CA contributed, but that feels contradictory since it predicts 0 final demand at the household level for 111CA, when clearly that industry has a large GDP number.\n\nSo what is the best way to \"attribute\" the household expenditures on food into those industries?","reasoning":"You should try to avoid mixing data sets unless you have a good way to bridge them. There are very precise methodologies used to construct each data set and mixing them can lead to nonobvious problems. 
You might consider looking at the BEA's IO tables at a more disaggregated level.","id":"75","excluded_ids":["N\/A"],"gold_ids_long":["food_percent\/inputoutputaccountsdata.txt"],"gold_ids":["food_percent\/inputoutputaccountsdata_4.txt","food_percent\/inputoutputaccountsdata_7.txt","food_percent\/inputoutputaccountsdata_5.txt","food_percent\/inputoutputaccountsdata_6.txt"],"gold_answer":"$\\begingroup$\n\nYou should try to avoid mixing data sets unless you have a good way to bridge\nthem. There are very precise methodologies used to construct each data set and\nmixing them can lead to nonobvious problems. You might consider looking at the\n[ BEA's IO tables at a more disaggregated level\n](https:\/\/www.bea.gov\/industry\/input-output-accounts-data) . For example, the\nUse table shows Personal Consumption Expenditures at a granularity that you\nmight be interested in. For example, in the screenshot below it shows that in\n2012, households consumed \\$32 billion worth of goods from the \"Vegetable and\nmelon farming\" product category.\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/ttNDd.png)\n](https:\/\/i.sstatic.net\/ttNDd.png)\n\nAt this level of disaggregation, you can see how much households purchased\nfrom different kinds of product categories: restaurant goods, from frozen\nfoods, from snack food manufacturing, etc. You can also see how much each of\nthese industries purchased from each other product type category, etc.\nContinuing with the example, you'll see that the Vegetable and melon farming\nindustry purchased \\$106 billion worth of goods from \"Farm machinery and\nequipment manufacturing\" product category.\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/IeMqW.png)\n](https:\/\/i.sstatic.net\/IeMqW.png)\n\nNote that I haven't stated that the Vegetable and melon farming industry\nnecessarily produces goods in the Vegetable and melon farming product\ncategory. 
To get that information, you would have to explore the [ BEA supply\ntables. ](https:\/\/www.bea.gov\/industry\/input-output-accounts-data) For those\ninterested, more information is in the answer to this other question: [\nhttps:\/\/economics.stackexchange.com\/a\/47689\/59\n](https:\/\/economics.stackexchange.com\/a\/47689\/59)\n\nHope this helps.\n\nThe link to the spreadsheet from the BEA is given here: [\nhttps:\/\/apps.bea.gov\/industry\/xls\/io-\nannual\/Use_SUT_Framework_2007_2012_DET.xlsx\n](https:\/\/apps.bea.gov\/industry\/xls\/io-\nannual\/Use_SUT_Framework_2007_2012_DET.xlsx)"} +{"query":"Why is having a big corporation keep its money in foreign countries a bad thing in the public's eye?\n\nI have been trying to dig deeper into corporate tax evasion, and how it works, to really determine if they're doing something bad or what. I don't know much about economics or politics, but I hear people all the time saying \"tax the corporations! they are finding loopholes not to pay taxes and that is bad!\" I don't understand why, and want to know what is going on and why people think it's bad.\n\nFor example, I read this:\n\nThe law installed a \"territorial\" system in which global corporations aren't taxed on foreign profit. The TCJA encourages them to reinvest it in the U.S. This benefits pharmaceutical and high-tech companies the most.3\n\nMultinationals were taxed on foreign income earned under the prior \"worldwide\" system. They didn't pay tax until they brought the profits home. As a result, many corporations reinvested profits earned overseas into those markets. It was cheaper for them to borrow at low interest rates in the U.S. than to bring earnings home. As a result, corporations became debt-heavy in the U.S. 
and cash-rich in overseas operations.\n\nBasically, if you go to the Cayman Islands and pay a low tax rate, somehow money you make there (I don't know how you \"make money\" in these foreign places if your real business is elsewhere), and you don't \"bring it home\" to the U.S.. Instead you leave it there and reinvest (into what, I don't know). This in theory means you can make money (somehow) and pay lower taxes on it (in the tax haven).\n\nCan you explain how this works and why it is considered bad by some people on the left specifically? Why would they borrow money in the US and have cash in another country? How much money is a large corporation even borrowing? Basically, how does this work, what is an example?","reasoning":"It is related to transfer pricing, where profit centers are set up to maximize firm profits.","id":"76","excluded_ids":["N\/A"],"gold_ids_long":["tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1.txt"],"gold_ids":["tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_14.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_0.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_8.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_3.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_4.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_20.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_15.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_5.txt","tax_evasion\/
2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_6.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_17.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_18.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_16.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_10.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_12.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_9.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_1.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_11.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_13.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_7.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_2.txt","tax_evasion\/2350664pdfrefreqidfastlydefault3Aa3ced6ea50c1465ac18e25d786e32b4aabsegmentsorigininitiatoracceptTC1_19.txt"],"gold_answer":"$\\begingroup$\n\nFirms can always shift their profits from one tax territory to another via\nvarious methods.\n\nConsider a simplistic example, let\u2019s say we have firm ABC that has most of its\nbusiness in Sweden. The owners of ABC can set up another parent company, let\u2019s\nsay ABC global in some tax haven like Luxembourg. 
Afterwards they can transfer\nsome intellectual property (or other difficult-to-value assets - typically\nsome intangibles) to ABC global.\n\nThen when ABC Sweden manages to have, let\u2019s say, 100,000\u20ac profit, the\nLuxembourg-based ABC global will decide to lease the intellectual property (eg\nthe logo) to ABC Sweden for 100,000\u20ac. This extra cost will cause ABC Sweden to\nhave zero profit in Sweden, while ABC global in Luxembourg now gets 100,000\u20ac\nprofit that will be taxed there at their very low corporate tax rate.\n\nThe above is just an oversimplified example of course; the methods by which this is\ndone are always evolving, as government regulators play a constant cat-and-mouse\ngame with corporations. If you want to know more details you can have a look at\ntopics such as transfer pricing (see [ Alles & Datar, 1998\n](https:\/\/www.jstor.org\/stable\/2350664) or [ Hirshleifer, 1956\n](https:\/\/www.researchgate.net\/profile\/Srikant-\nDatar\/publication\/227447111_Strategic_Transfer_Pricing\/links\/09e41510fb40bca7f8000000\/Strategic-\nTransfer-Pricing.pdf) ).\n\nAlternatively, and this is what is directly referenced in that article, the\nparent company can even be located in the US, and just let its foreign\nsubsidiaries reinvest the profits they earn in each individual foreign\nmarket (eg use that profit to buy new buildings, invest in R&D etc, or to build\na war chest). 
This still makes sense for the company, as A) doing so still\nmakes the company more valuable and thus increases shareholder value, and B) if they\nbuild a war chest they can always bring that money home at some unspecified\npoint in the future if the US decides to lower its taxes (like the [ Trump tax cuts\n](https:\/\/www.google.nl\/amp\/s\/www.cnbc.com\/amp\/2019\/03\/27\/us-companies-bring-\nhome-665-billion-in-overseas-cash-last-year.html) ).\n\nAgain, the above is only a very rough sketch of how this is done; the details\nalways evolve following current tax regulations, which change in some small or\nbigger way virtually every year around the world.\n\n> Can you explain how this works and why it is considered bad by some people\n> on the left specifically?\n\nThis is not an economics question but a moral question, so you would do better\nto ask this on Philosophy Stack Exchange. This being said, in short, the answer to\nwhether the above is bad or OK depends on your moral framework. On a\nfundamental level this is not much different from ordinary people also trying\nto optimize their tax rates by taking advantage of various loopholes (eg\npeople trying to claim as many deductions as possible, or people avoiding VAT\nby having a friend with a business buy the product for them - since\nbusinesses get VAT rebates). 
So if you adopt some deontological (rules\/action-based)\nethics, which says trying to minimize your tax rate (within legal\nbounds) is acceptable, then (legal) corporate tax optimization should also\nnot be a moral issue.\n\nHowever, the ideology of the modern (liberal) left is underpinned by the Rawlsian moral\nframework (or at least in the literature that\u2019s the basis for it; I doubt that\nmost people on either the left or the right know any moral philosophers), which is\ngoverned by the consequentialist Max-Min principle (although Rawls\u2019s full framework\nhas [ non-consequentialist elements\n](https:\/\/faculty.washington.edu\/wtalbott\/phil410\/trrawls.htm), so technically\nit is a hybrid moral framework; for economics the Max-Min rule is the most\nrelevant part). Under this ethical system actions are good if they help to\nmaximize welfare for the poorest members of society. So under the Max-Min\nprinciple it\u2019s consequences that matter, not rules. Taking advantage of tax\nloopholes is fine if you are poor, but if you are a rich corporation it is bad,\nas resources from those taxes could be redistributed to the poor (assuming\nthat the parameters of the economy are such that this would still increase the\nwelfare of the poor, eg we are not already at the point where taxes are so high\nthat extra taxes cause so many distortions to the economy that they lower the\nwelfare of the poor more than they gain from extra redistribution)."} +{"query":"Filling gap in data with correlated series\n\nI have two time series, of different length. One time series is GDP growth.\n\nThe GDP growth is the series I need, and it is also the longer series, but it has two gaps in two consecutive periods. The aggregate economic indicator is shorter, but it covers the period where I find the two gaps, and for the period in which both data are available, they are strictly correlated (r=0.94).\n\nHow could I fill these two gaps?\n\nOne possibility would be to use the autoregressive forecast. 
AR(1) describes the series quite well (R\u00b2=0.91, and no significant autocorrelation in the residuals). But I do not think it is the optimal solution, because:\n\nI have two gaps, and the second gap will be filled with the two-step-ahead forecast, which is less precise,\nI have data AFTER the gap, and neglecting part of the information does not seem to be the best solution,\nI also have some external information (the correlated series), which I could exploit as well.\nWhich method would be the most appropriate?\n\nThanks for the tips!","reasoning":"One possible approach is the Catmull\u2013Rom spline, where the original set of points also makes up the spline curve.","id":"77","excluded_ids":["N\/A"],"gold_ids_long":["fill_in_data_gap\/CubicHermitespline.txt"],"gold_ids":["fill_in_data_gap\/CubicHermitespline_18.txt"],"gold_answer":"$\\begingroup$\n\nIf I understood correctly, the gap is in the middle of the data. In such cases\nyou should not use forecasts that extrapolate the data, but some interpolation\nmethod.\n\nIf there is a relatively large amount of variation in the data you would get\nthe best results using something like the [ Catmull\u2013Rom spline\n](https:\/\/en.wikipedia.org\/wiki\/Cubic_Hermite_spline#Catmull%E2%80%93Rom_spline)\n. The Catmull\u2013Rom spline has some nice properties (see [ here\n](https:\/\/splines.readthedocs.io\/en\/latest\/euclidean\/catmull-rom-\nproperties.html) ). Its main advantage is that all\nthe real observations you have will become part of it, and it allows you to\nestimate missing points in a non-linear way.\n\nIf you work in R you can easily implement it with [ splineCR\n](https:\/\/rdrr.io\/github\/jedalong\/PathInterpolatR\/man\/splineCR.html) ; if you\nwork with Python you can have a look at [ this github\n](https:\/\/github.com\/vmichals\/python-algos\/blob\/master\/catmull_rom_spline.py)\ncode. 
Nowadays you will find it also part of programs such as EViews or Stata."} +{"query":"Why isn't economic growth defined in terms of the increase in national wealth?\n\nEconomic growth is defined as the increase in production or output per unit time. Why isn't it instead defined in terms of the increase in national wealth?","reasoning":"We need to figure out the definition of economy and wealth, and check whether they match.","id":"78","excluded_ids":["N\/A"],"gold_ids_long":["economy_wealth\/wealthasp.txt","economy_wealth\/indexphptitleGlossaryEconomicactivity.txt"],"gold_ids":["economy_wealth\/indexphptitleGlossaryEconomicactivity_3.txt","economy_wealth\/wealthasp_2.txt"],"gold_answer":"$\\begingroup$\n\nBecause the word _economic_ in economic growth refers to growth of economic\nactivity. By common definition (see [ Eurostat\n](https:\/\/ec.europa.eu\/eurostat\/statistics-\nexplained\/index.php?title=Glossary:Economic_activity) )\n\n> An economic activity takes place when resources such as capital goods,\n> labour, manufacturing techniques or intermediary products are combined to\n> produce specific goods or services. 
Thus, an economic activity is\n> characterised by an input of resources, a production process and an output\n> of products (goods or services).\n\nWealth is an _accumulated_ difference between production and consumption over\ntime, since any production not consumed will eventually become part of wealth\n(typically measured in terms of net assets).\n\nThus wealth is not in itself economic activity, as defined above, but rather\nan accumulation of the output from economic activity that society did not\ndecide to consume."} +{"query":"Are there any behavioral macro models with rigorous micro-foundations?\n\nI am looking for some paper that tries to establish rigorous micro-foundations for the behavioral New Keynesian (or any other) macro models.\n\nThis is surprisingly hard; most work on this topic (like De Grauwe 2012) simply starts by setting up IS and LM curves and adding some sort of mix of rational and behavioral agents (mostly agents with non-rational expectations) on top of it. However, I can't find any paper that would provide full rigorous micro-foundations for such a model (i.e. deriving IS and LM from the micro behavior of these rational and behavioral agents across time).\n\nThis is of course difficult, as without rational expectations it is difficult to solve macro models, but on the other hand it is hard to believe this issue would be ignored by practitioners.\n\nConsequently, my question is: are there any papers that establish rigorous micro-foundations for behavioral macro models?","reasoning":"The macro-models can be related to monetary and\nfiscal policy. 
We need to find some micro-foundations.","id":"79","excluded_ids":["N\/A"],"gold_ids_long":["micro_foundation\/behavioralnewkeynesianmodelpdf.txt"],"gold_ids":["micro_foundation\/behavioralnewkeynesianmodelpdf_72.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_84.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_67.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_58.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_38.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_7.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_25.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_62.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_12.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_22.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_35.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_73.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_45.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_37.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_11.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_75.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_69.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_59.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_66.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_55.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_70.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_46.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_56.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_40.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_81.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_15.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_30.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_53.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_10.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_28.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_78.txt","micro_foundation\/behavioralnewkeynesianmo
delpdf_80.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_68.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_49.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_57.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_51.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_8.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_16.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_5.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_76.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_61.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_4.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_27.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_19.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_47.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_60.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_17.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_39.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_83.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_44.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_82.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_64.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_20.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_54.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_65.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_52.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_50.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_34.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_74.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_31.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_14.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_36.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_63.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_6.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_0.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_26.txt","micro_foundation\
/behavioralnewkeynesianmodelpdf_48.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_24.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_33.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_77.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_21.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_1.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_79.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_13.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_29.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_41.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_43.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_3.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_23.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_18.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_9.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_2.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_42.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_32.txt","micro_foundation\/behavioralnewkeynesianmodelpdf_71.txt"],"gold_answer":"$\\begingroup$\n\n[ Gabaix, X. (2020). A behavioral New Keynesian model. American Economic\nReview, 110(8), 2271-2327.\n](https:\/\/scholar.harvard.edu\/files\/xgabaix\/files\/behavioral_new_keynesian_model.pdf)\nhas microfoundation. And Gabaix has more papers that might interest you.."} +{"query":"Is there any example for \"pluralistic ignorance\" in economics?\n\nToday I read a paper recently written by Bursztyn, 2021 saying that\n\nThe origin, persistence, and rigidity of misperceptions about others can in principle be explained by different conceptual frameworks, such as stereotyping (e.g., Bordalo et al. 
2016), motivated reasoning (e.g., Benabou and Tirole 2016), and pluralistic ignorance (e.g., Kuran 1997; Bursztyn, Egorov and Fiorin 2020; Bursztyn, Gonz\u00b4alez and Yanagizawa-Drott 2020)\n\nFrom what I understand, pluralistic ignorance is\n\nthe situation in which almost all members of a group privately reject group norms, yet believe that virtually all other group members accept them\n\nHowever, I still not yet got the idea of this behaviour, could you give me an example in economic or finance to understand it more?\n\nUpdate: Adding a great explaination of \"pluralistic ignorance\" as suggested by @Lason\n\nThe term \u201cpluralistic ignorance\u201d was coined to describe the situation in which almost all members of a group privately reject group norms, yet believe that virtually all other group members accept them (Katz and Allport, 1931). Under such situations, individuals predict that they would lose social standing if they behaved as they wished. Behaving against the group norm could result in negative reactions from other ingroup members. Therefore, people are likely to follow perceived group norms to maintain positive impressions in their groups, even when they do not support the norms (Miller and McFarland, 1987; Miller and Prentice, 1994; Prentice and Miller, 1996; Geiger and Swim, 2016). In line with this idea, in situations of pluralistic ignorance, some people even actively enforce the perceived norms (i.e., publicly criticizing a \u201cmisfit\u201d into accepting the norm), although they privately disapprove of the norms (Willer et al., 2009). Consequently, public behaviors of groups as a whole do not coincide with the majority of group members' private preferences under circumstances of pluralistic ignorance. 
Thus, the situation of pluralistic ignorance is well represented in the following sentence: \u201cNo one believes, but everyone believes that everyone else believes\u201d (Krech and Crutchfield, 1948).","reasoning":"This can be related to the perceptions\/beliefs about the public of the Beliefs of the Public.","id":"80","excluded_ids":["N\/A"],"gold_ids_long":["pluralistic\/1872200redirectedFromfulltext.txt"],"gold_ids":["pluralistic\/1872200redirectedFromfulltext_1.txt"],"gold_answer":"$\\begingroup$\n\nPluralistic ignorance is a theory explaining why social practices continue to\nbe perpetuated when almost no individual seems to support them. When\npluralistic ignorance is at play, agents act in the way they envision others\nwant them to act, so as not to lose their social standing by acting as they\nwished.\n\nSome examples in the indirect economic world include: [ Heavy drinking in\ncollege\n](https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0065260108602385?via%3Dihub)\n, [ Racial segregation perceptions ](https:\/\/academic.oup.com\/poq\/article-\nabstract\/40\/4\/427\/1872200?redirectedFrom=fulltext) , and [ many others (see\nreferences)\n](https:\/\/www.frontiersin.org\/articles\/10.3389\/fpsyg.2017.01508\/full) .\n\nI have not found or studied any example in finance, but I would argue that the\nsame logic applies and any experienced stockbroker or individual investor\nwould experience the negative effects of pluralistic ignorance; for example\neach investor might not like a particular asset, but for reasons of their own,\nthey might think that everyone else _loves_ this asset, and thus is willing to\nbuy it or hold on to it. 
The outcome of this process is for the price of an\nunlikeable asset to rise - conforming to the norm that each individual\nunwillingly propagated."} +{"query":"Economic growth in a DSGE model, despite mean-zero shocks\n\nThe DSGEs I've seen have steady-states, and mean-zero shocks.\n\nCan these predict growth in GDP \/ capital etc?\n\nIs this possible despite them being equilibrium models, or do you have to completely change your approach and switch to a Solow-Swan-type model to predict GDP growth?","reasoning":"These models typically have a particular kind of steady-state, which is, more precisely, called a balanced growth path, where key indicators grow at the same constant rate.","id":"81","excluded_ids":["N\/A"],"gold_ids_long":["economic_growth\/Balancedgrowthequilibrium.txt"],"gold_ids":["economic_growth\/Balancedgrowthequilibrium_4.txt"],"gold_answer":"$\\begingroup$\n\n**Yes, there are DSGE models that can be used for forecasting.**\n\nThese models typically have a particular kind of steady-state, which is, more\nprecisely, called a _balanced growth path_ [ (BGP)\n](https:\/\/en.wikipedia.org\/wiki\/Balanced-growth_equilibrium) . On the BGP (in\nthe absence of shocks), key indicators grow at the same constant rate. For\nexample, GDP, household consumption and investment all grow at 2% a year. This is\nconsistent with constant steady-state ratios of the indicators over GDP; for\nexample $\\frac{K}{Y}$ and $\\frac{C}{Y}$ would be constant, while the\nindicators grow at the same rate.\n\nThe rate itself is often built in as an assumption, based on economic priors\nand analysis done outside of the DSGE model (for example estimates of labor-\nproductivity growth). Because of this fact, forecasts of equilibrium growth\nare not just boring, they are not really forecasts. However, an economy is hardly\never on its balanced growth path, so the strength of these models is in\ndescribing the dynamics back towards equilibrium, for example after a shock to\nglobal oil prices. 
Immediately after the shock hits, investment will typically\nfall at a faster rate than GDP, and then grow faster after the impact has\nbottomed out.\n\nNote that these models need to be related to actual data via estimation, often\nusing the Kalman Filter to identify unobservable variables, such as the output\ngap, and shocks (such as a technology shock).\n\nTo see how data can be related to model variables, consider the measurement\nequation (assuming the use of a Kalman filter). $Y$ is actual GDP in\nquarterly frequency, $\\hat{y_t}$ is the deviation of output from its trend\nlevel at time $t$ , and 2% is the assumed annual growth rate. Then this can\nbe fed into the model as $$ \\log Y_t - \\log Y_{t-1} - 0.005 = \\hat{y}_t -\n\\hat{y}_{t-1} $$ This means in particular that the model solution can be\ntranslated back into numbers that are consistent with actual data and hence\n_can be used for forecasting_ .\n\n_References:_ For an overall good and detailed example, see the [ ECB's New\nArea Wide Model ](https:\/\/www.ecb.europa.eu\/pub\/pdf\/scpwps\/ecbwp944.pdf) . It\nincludes a section on model dynamics (4), and one on how assumptions are built\ninto the model (3.2 & 3.3). The references in that paper are also worth\nchecking out. For an even more modern version, see their new [ model\n](https:\/\/www.ecb.europa.eu\/pub\/pdf\/scpwps\/ecb.wp2200.en.pdf) , which includes\nmany more financial market dynamics."} +{"query":"Do governments have to pay interest on government debt held by the central bank?\n\nThrough open market operations, the central bank may buy government debt to increase the money supply. 
Does the government need to pay interest on the debt held by the central bank, or does the debt become interest-free?","reasoning":"We can check typical examples, e.g., whether federal reserve receives interest income from open market operations.","id":"82","excluded_ids":["N\/A"],"gold_ids_long":["government_interest\/other20210111ahtm.txt"],"gold_ids":["government_interest\/other20210111ahtm_79.txt","government_interest\/other20210111ahtm_74.txt"],"gold_answer":"$\\begingroup$\n\nYes, the government has to pay interest on debt even if it is held by the\ncentral bank. That is true at least for major modern central banks.\n\nFor example, from a press [ release\n](https:\/\/www.federalreserve.gov\/newsevents\/pressreleases\/other20210111a.htm)\nfrom the US Federal Reserve Board:\n\n> Net income for 2020 was derived primarily from $100 billion in interest\n> income on securities acquired through open market operations...\n\nHowever, although modern central banks are in their decision making\nindependent from the government, they are still public entities (or equivalent\nto such), and profits (if there are any) will ultimately be transferred back\nto the government. So, to the extent that profits are due to the holding of\ngovernment debt, the government will get a rebate on interest paid."} +{"query":"Large National Debt by borrowing money from foreign sources\n\nI have a question about the greater severity of large national debt caused by borrowing from foreign sources.\n\nI am reading a book about elementary Economics and am currently on a chapter discussing the strengths and weaknesses of demand-side policies.\n\nThe book (A-level Economics, Anderton, 2015, pp. 204) states the following:\n\nKeynesian economics states that so long as a government can print money to finance its deficit without fueling inflation or borrow money from the financial markets, then the National Debt is not a problem for the short term. 
Nearly all economists, however, would argue that, in the long term, large National Debts can be a problem particularly if they are financed mainly by borrowing money from foreigners.\n\nHowever, I am struggling to understand how they can be worse than borrowing from domestic debt markets (if I have stated the correct market, i.e. from a domestic lender).\n\nMy understanding was that if the debt was financed through printing more money, and the government had to pay off the loans to a foreign lender, it would find it harder to reduce the supply of money via reversing quantitative easing, i.e. selling bonds back into the bond market, so impending inflation in the long term would occur.\n\nHowever, the book does not qualify its statement regarding large National Debt as above, so I am not sure what the justification is for Anderton's statement above.\n\nWhy would large National Debts financed by foreign lenders be a greater problem than domestic lenders if the problem was to be solved by printing more money?","reasoning":"The central question is to ask for a comparison between domestic and foreign 
debt.","id":"83","excluded_ids":["N\/A"],"gold_ids_long":["domestic_foreign\/wp0733pdf.txt"],"gold_ids":["domestic_foreign\/wp0733pdf_9.txt","domestic_foreign\/wp0733pdf_17.txt","domestic_foreign\/wp0733pdf_20.txt","domestic_foreign\/wp0733pdf_21.txt","domestic_foreign\/wp0733pdf_6.txt","domestic_foreign\/wp0733pdf_22.txt","domestic_foreign\/wp0733pdf_8.txt","domestic_foreign\/wp0733pdf_19.txt","domestic_foreign\/wp0733pdf_7.txt","domestic_foreign\/wp0733pdf_13.txt","domestic_foreign\/wp0733pdf_11.txt","domestic_foreign\/wp0733pdf_5.txt","domestic_foreign\/wp0733pdf_16.txt","domestic_foreign\/wp0733pdf_24.txt","domestic_foreign\/wp0733pdf_23.txt","domestic_foreign\/wp0733pdf_4.txt","domestic_foreign\/wp0733pdf_15.txt","domestic_foreign\/wp0733pdf_18.txt","domestic_foreign\/wp0733pdf_12.txt","domestic_foreign\/wp0733pdf_10.txt","domestic_foreign\/wp0733pdf_3.txt","domestic_foreign\/wp0733pdf_0.txt","domestic_foreign\/wp0733pdf_2.txt","domestic_foreign\/wp0733pdf_14.txt","domestic_foreign\/wp0733pdf_25.txt","domestic_foreign\/wp0733pdf_1.txt"],"gold_answer":"$\\begingroup$\n\nThe statement should be ideally subjected to several caveats because there is\na lot of heterogeneity between how difficult it is to manage external debt\namong nations (e.g. developed vs developing nations, country in monetary union\nvs country outside monetary union etc), but generally it is argued that:\n\n 1. Foreign debt is usually (but not always) also denominated in foreign currency. In the past sometimes people even used words 'foreign debt' and 'debt denominated in foreign currency' interchangeably ( [ Vasishtha, 2007 ](https:\/\/www.bankofcanada.ca\/wp-content\/uploads\/2010\/02\/wp07-33.pdf) ) even though that of course is not accurate as you can have foreign debt denominated in domestic currency, and domestic debt denominated in foreign currency. 
Still, just the fact that some scholars sometimes slipped and used the terms interchangeably indicates that foreign-owned debt is often denominated in foreign currency. Such debt is more difficult to manage because it can't be easily monetized. \n\n 2. Foreign and domestic creditors are different: for many countries, foreign investors will be large international hedge funds, other governments, or international organizations like the IMF, whereas domestic creditors are typically domestic firms or individuals. It is easier for a large international organization to try to sue a government, or to do reputational damage to the government that will make its future borrowing costs higher ( [ Vasishtha, 2007 ](https:\/\/www.bankofcanada.ca\/wp-content\/uploads\/2010\/02\/wp07-33.pdf) ). \n\nOn the other hand, domestic creditors are at the mercy of the government.\nSince from the creditor's perspective government debt is an asset, the\ngovernment can always decide to raise taxes on the income domestic creditors\nearn from these assets (thereby recouping a portion or all of the interest\npaid on the domestic debt), or levy some wealth tax that will in effect force\ndomestic creditors to hand a portion of their assets (and debt is an asset for\nthe creditor) to the government. To be fair, in principle there are ways a\ndomestic government can tax foreign entities as well, but it is much more\ndifficult (see the discussion of that in [ Gros 2011 ](https:\/\/voxeu.org\/article\/external-versus-domestic-\ndebt-euro-crisis) )."} +{"query":"Krugman article - Government debt helps avoid a destructive scramble for cash?\n\nI was just reading this Krugman article which contains the words...\n\nI\u2019ve already mentioned that having at least some government debt outstanding helps the economy function better. How so? 
The answer, according to MIT\u2019s Ricardo Caballero and others, is that the debt of stable, reliable governments provides \u201csafe assets\u201d that help investors manage risks, make transactions easier and avoid a destructive scramble for cash.\n\nNow the \"help investors manage risks\" part I can understand, but \"make transactions easier\" and \"avoid a destructive scramble for cash\" are a mystery to me.","reasoning":"This is related to the fact that government debt has a positive impact on market liquidity. We need to find related literature to support this.","id":"84","excluded_ids":["N\/A"],"gold_ids_long":["government_debt\/S026.txt"],"gold_ids":["government_debt\/S026_8.txt","government_debt\/S026_4.txt","government_debt\/S026_10.txt","government_debt\/S026_11.txt","government_debt\/S026_17.txt","government_debt\/S026_3.txt","government_debt\/S026_5.txt","government_debt\/S026_18.txt","government_debt\/S026_9.txt","government_debt\/S026_19.txt"],"gold_answer":"$\\begingroup$\n\nThis is because there is a strand of literature that suggests government debt\nhas a positive impact on market liquidity (e.g. see [ Grobety 2018\n](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0261560618300470?casa_token=fD0WcDXUE7QAAAAA:oG-\ntMAWmua78NAZSGRn3wJmxl4HHlFPWAak1YFpptrA_BG4wqwNBY-RB-Ypvbsn0k7q_uaoWWg) ).\nWhen Krugman talks about a scramble for cash he likely refers to a lack of\nliquidity, which is clearly bad for business. Also, more liquidity makes\ntransactions easier because it\u2019s easier to find a buyer for your asset.\n\nHowever, I think Krugman is a bit prone to over-dramatization, and while there\nis some good empirical work supporting the result, it is not completely beyond\ncritique."} +{"query":"Is \"avoiding being the bearer of bad news\" an example of the \"principal-agent problem\"?\n\nFor example:\n\nA king designs a bridge, and he unknowingly does a bad job. 
He asks an engineer for her opinion, but the engineer fears that if she tells the truth, the king will be angry with her.\n\nMore generally, I'm thinking of situations in which the principal requests feedback\/information from the agent, but fear of retribution prevents the latter from being honest.\n\nIt seems like a reasonable example of the principal-agent problem, with conflicting incentives, but I wasn't sure if this situation had its own name.","reasoning":"We need to check the definition of the principal-agent problem and see whether this example satisfies it.","id":"85","excluded_ids":["N\/A"],"gold_ids_long":["principal_agent\/principalagentproble.txt"],"gold_ids":["principal_agent\/principalagentproble_1.txt"],"gold_answer":"$\\begingroup$\n\nI think it is. The [ Intelligent Economist\n](https:\/\/www.intelligenteconomist.com\/principal-agent-problem\/) site states:\n\n> The Principal Agent Problem occurs when one person (the agent) is allowed to\n> make decisions on behalf of another person (the principal). In this\n> situation, there are issues of moral hazard and conflicts of interest.\n\n> The agent usually has more information than the principal. This difference\n> in knowledge is known as asymmetric information. The consequence is that the\n> principal does not know how the agent will act. Also, the principal cannot\n> always ensure that the agent acts in the principal\u2019s best interests. This\n> departure from the principal\u2019s interest in the agent\u2019s interest is called an\n> \u201cagency cost.\u201d\n\n * In the example you mention, the agent can make a decision on behalf of the principal (approve or reject the design). \n\n * The agent has more information than the principal. \n\n * The interests of the agent and the principal are not aligned. The principal wants an honest answer; the agent wants to keep her head. So there is a conflict of interest. 
\n\nIf it has all features of principal agent problem I think it qualifies."} +{"query":"The geometry of cities (reference request)\n\nI am interested in reading up on the geometry of cities: what explains their \u2018shape\u2019 (e.g. why is the population often distributed in a rough circle)? I imagine that this is a question with some literature. Does anyone know of the classic papers? Textbook recommendations also appreciated!\n\nP.S. While I would be happy to read about empirical work on this, I am mainly interested in the topic from the theory side.","reasoning":"This is related to the economics of space and time.","id":"86","excluded_ids":["N\/A"],"gold_ids_long":["city_geometry\/1183544087fulltabArt.txt"],"gold_ids":["city_geometry\/1183544087fulltabArt_1.txt"],"gold_answer":"$\\begingroup$\n\nIf you want a highbrow mathematical economics take on the topic, you can take\na look at the 1977 book \"The Economics of Space and Time\" by Arnold M. Faden.\nYou can find a review of the book by Hal Varian [ here\n](https:\/\/projecteuclid.org\/journals\/bulletin-of-the-american-mathematical-\nsociety-new-series\/volume-1\/issue-2\/Review--Arnold-M-Faden-The-economics-of-\nspace-and\/bams\/1183544087.full) ."} +{"query":"Was there any currency backed by other metals than gold, silver or copper?\n\nWas any other metal than gold, silver or copper ever in history used to back a currency?","reasoning":"Other types of metal can include bronze, e.g., the coin once used in central Italy.","id":"87","excluded_ids":["N\/A"],"gold_ids_long":["bronze_coin\/Aesgrave.txt"],"gold_ids":["bronze_coin\/Aesgrave_36.txt"],"gold_answer":"$\\begingroup$\n\n[ Spartan iron coins\n](https:\/\/www.jstor.org\/stable\/1086107?seq=1#metadata_info_tab_contents) .\n\nRoman bronze coins: [ Aes Grave ](https:\/\/en.wikipedia.org\/wiki\/Aes_grave)\n\nChinese bronze coins: [ Ban liang\n](https:\/\/en.wikipedia.org\/wiki\/Ancient_Chinese_coinage#Ban_Liang_coins) (one\ntype was bronze, not all)\n\nA 
non-metal \"currency\" backed by various industrial metals: the cryptocurrency\n[ Tiberius coin ](http:\/\/www.cda.org.uk\/2019\/10\/tiberius-cryptocurrency-\nbacked-by-metals\/) ."} +{"query":"Non CES Production Functions\nI know CES production functions dominate economics, but I was curious, why? I've never seen a research paper or presentation utilize any form of a production function that is not CES.\n\nWhy is imposing CES so important in our models? ","reasoning":"It is helpful to assess the intrinsic links between production and factor substitution, e.g., CES functions","id":"88","excluded_ids":["N\/A"],"gold_ids_long":["ces_production\/j14676419201200730x.txt"],"gold_ids":["ces_production\/j14676419201200730x_2.txt"],"gold_answer":"$\\begingroup$\n\n> (1) why is imposing CES so important in our models?\n\nBecause although it is relatively general (relative to some other widely used\nproduction functions like Cobb-Douglas, which is a special case of CES), it is\nstill easy to estimate with parametric models, and generally CES production\nfunctions are easy to work with ( [ McFadden 1963\n](https:\/\/academic.oup.com\/restud\/article-\nabstract\/30\/2\/73\/1517685?redirectedFrom=fulltext) ).\n\nUntil very recently you needed Cobb-Douglas, with its unitary elasticity of\nsubstitution, or some other simple CES form, because the normalization problem\nprecluded people from applying a more general form. For example, as discussed\nin [ Klump et al (2011) ](https:\/\/www.ecb.europa.eu\/pub\/pdf\/scpwps\/ecbwp1294.pdf)\n[emphasis mine]:\n\n> Until recently, the application of production functions with non-unitary\n> substitution elasticities (i.e., non Cobb Douglas) was hampered by empirical\n> and theoretical uncertainties. 
As has recently been revealed,\n> \u201cnormalization\u201d of production functions and production-technology systems\n> holds out the promise of resolving many of those uncertainties and allowing\n> considerations as the role of the substitution elasticity and biased\n> technical change to play a deeper role in growth and business-cycle\n> analysis. Normalization essentially implies representing the production\n> function in consistent indexed number form. **Without normalization, it can\n> be shown that the production function parameters have no economic\n> interpretation since they are dependent on the normalization point and the\n> elasticity of substitution itself** . This feature significantly undermines\n> estimation and comparative-static exercises, among other things.\n\nThe above issue leads to bias in estimation (especially in parametric models),\nso it's quite a serious issue. This is because we can only use observable data\nfor estimation, but capital and labor are measured in completely different\nunits (aside from the problem that we actually have no way of accurately\nmeasuring capital in the first place). CES or Cobb-Douglas with unitary fixed\nelasticity gets around the issue by virtue of the elasticity of substitution\nbeing 1, and because the differences in units get absorbed into the scaling\nconstant.\n\nWhat is more, as the above-cited paper discusses, the normalization problem\nwas essentially solved for CES even with non-unitary elasticities, making it\nan even more desirable function to use. This is quite important since\nempirically the elasticity of substitution is generally below\nunity (e.g. have a look at [ Chirinko et al. 1999\n](https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0047272799000249?casa_token=2-aYd3UIB1oAAAAA:S_dVDrAU0IptuS1-HZRgyVQ2ila5vbSTiw5ZTF_erpUFvZol2xmJdY9Qvmt-\nq_mXxQzXdp74r5Q) , [ Klump et al. 
2007 ](https:\/\/direct.mit.edu\/rest\/article-\nabstract\/89\/1\/183\/57638\/Factor-Substitution-and-Factor-Augmenting) , [ Leon-\nLedesma et al. 2010\n](https:\/\/www.aeaweb.org\/articles?id=10.1257\/aer.100.4.1330) ).\n\nLastly, estimation of production functions is riddled beyond belief with\nendogeneity issues. One way how to solve the endogeneity issues is to look to\ntheory for guidance of how to set up model (especially how to properly specify\nthe error term) to avoid these issues (e.g. see the work of [ Olley Pakes 1996\n](https:\/\/web.archive.org\/web\/20211112161908\/https:\/\/www.jstor.org\/stable\/2171831?seq=1#page_scan_tab_contents)\n, [ Levinsohn-Petrin 2000 ](https:\/\/www.nber.org\/papers\/w7819) or [ Ackerberg,\nCaves and Frazer 2015\n](https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.3982\/ECTA13408?casa_token=YITxPu4O2wIAAAAA:_unIEQBPoYb8iN9nOKznktKIty_F4MYARvYO2UkwTKD93PSMR3jUOWt6PQqOreVP4MYenrWaUP4F\n--o) ). But lot of these theoretical results are derived assuming CES\nproduction function, so you can't just use theoretical result derived assuming\nCES production to model the structure of the error term and nothing else.\n\n> 2. are there any serious papers or methods that allow for a non-CES\n> production function?\n>\n\nWell yes, there are a lot of papers that apply translog production function\nwhich allows elasticity of substitution to change ( [ Heathfield, Wibe 1987\n](https:\/\/link.springer.com\/chapter\/10.1007\/978-1-349-18721-8_6) ). If you\njust put [ translog production function estimation into google scholar\n](https:\/\/scholar.google.com\/scholar?hl=en&as_sdt=0,5&q=translog%20production%20function%20estimation)\nyou will get a lot of examples.\n\nHowever, estimating translog production function parametrically can be\nproblematic because relative to CES or Cobb-Douglass you have to estimate\nquite a lot of parameters to make it work. 
Especially if you want to include a\nlot of factors of production, the number of parameters virtually explodes\n(Pavelescu, 2011), and this is a problem given the tendency of including more\nand more factors in recent years (e.g. materials, different types of capital\nand so on)."} +{"query":"What is the \"moderator for the relationship\" in interactive regression model?\n\nI have an interactive regression model as below:\n\n$Y_{i,t} = \\beta_0 + \\beta_1 D_i + \\beta_2 P_{i,t} + \\beta_3 (D_i P_{i,t})$ + other covariates + error terms\n\nIn this model, why is $D_i$ called the moderator for the relationship between $P_{i,t}$ and $Y_{i,t}$?","reasoning":"We need to check the definition of the moderating variable.","id":"89","excluded_ids":["N\/A"],"gold_ids_long":["relationship_moderator\/moderatingvariable.txt"],"gold_ids":["relationship_moderator\/moderatingvariable_1.txt"],"gold_answer":"$\\begingroup$\n\nBecause [ moderator variables ](https:\/\/www.statisticshowto.com\/moderating-\nvariable\/) are by definition variables that affect the strength of the\nrelationship between one variable and another variable, and that is exactly\nwhat the interaction effect does (it allows $P$ to have a different effect\ndepending on whether $D=1$ or $D=0$ ).\n\nThis is actually not terminology used widely in economics, but it is used\nwidely in [ psychology ](https:\/\/www.psychologyinaction.org\/psychology-in-\naction-1\/2015\/02\/06\/mediating-and-moderating-variables-explained) and some\nother fields (although I have seen it used in a few behavioral economics papers). 
In\neconomics, outside few specific subfields people usually call it interaction\nterm."} +{"query":"Did Capitalism and Adam Smith Support a Central Bank?\n\nDoes Adam Smith and Capitalism support a Central Bank\/Federal Reserve?\n\nI am trying to read through the book \"Wealth of Nations\" to understand. Having a central\/regulatory figure dictate supply and interest rates, seems less laissez faire. However curious if Adam Smith has any documentation on this subject.","reasoning":"We can check his opinions about the role state\/government in banking.","id":"90","excluded_ids":["N\/A"],"gold_ids_long":["capitalism_central_bank\/bookivchapter3.txt"],"gold_ids":["capitalism_central_bank\/bookivchapter3_19.txt"],"gold_answer":"$\\begingroup$\n\n> Does Adam Smith ... support a Central Bank\/Federal Reserve?\n\nAdam Smith was firmly in favor of central banking (although not necessarily in\nthe same way as understood today). In Chapter 3 Book 4 of The Wealth of\nNations he writes:\n\n> \u201cIn order to remedy the inconvenience to which this disadvantageous exchange\n> must have subjected their merchants, such small states, when they began to\n> attend to the interest of trade, have frequently enacted, that foreign bills\n> of exchange of a certain value should be paid not in common [coin] currency,\n> but by an order upon, or by a transfer in the books of a certain bank,\n> established upon the credit, and under the protection of the state\u201d.\n\nRegarding the Federal Reserve, it is impossible to say. Fed is not just\ncentral bank but also research institution and it does banking regulation and\nsupervision. 
Adam Smith, contrary to widely held beliefs, was not principally\nanti-regulation; in fact, in the Wealth of Nations he supports all sorts of\npragmatic regulations, such as the Act of Navigation, which he rigorously\ndefended.\n\nAs argued by [ Reisman (1998) ](https:\/\/www.jstor.org\/stable\/40752070)\n[emphasis mine]:\n\n> Adam Smith was not a single-minded advocate of a laissez-faire market in\n> which the minimal State had no more than a protective function. Rather, he\n> was a pragmatic social thinker who in each case selected the tool that was\n> the best suited to his meta-objective of rapid economic growth. .... **In\n> Wealth of Nations . . . laissez-faire becomes only a qualified presumption\n> rather than a hard-and-fast rule**\n\nGiven his pragmatic stance on regulation, and that according to our best\nmodern understanding banking regulation is essential ( [ Tarullo, 2019\n](https:\/\/pubs.aeaweb.org\/doi\/pdfplus\/10.1257\/jep.33.1.61) ), he would likely\nnot oppose it, although without inventing a time machine we can only conjecture\nwhat he would say. In addition, the central bank does not fulfill the same role\nin the present-day fiat monetary system as it did under the gold standard.\nAgain, short of having a time machine we can only conjecture how he would\nrespond to that. Smith in his writing does not directly address fiat money\nsystems; these became commonplace only during the 20th century (there are\narguably a few historical exceptions (see Peter Bernholz (2003), Monetary\nRegimes and Inflation: History, Economic and Political Relationships), but\nSmith does not discuss these, and he might not even have been aware of such\nsystems).\n\n> Does ... Capitalism support a Central Bank\/Federal Reserve?\n\n 1. Capitalism is the name of an economic\/social system. A system has no will; it cannot support anything. That is like asking if the solar system supports banning smoking in public places. \n 2. 
If you are interested in knowing whether having a central bank is consistent with capitalism, that depends on your definition of capitalism. The word capitalism is typically not used in economics as it is very poorly defined and can mean almost anything (there are scholars that consider the [ USSR a capitalist ](https:\/\/en.wikipedia.org\/wiki\/State_capitalism) country). For example, Mankiw's Principles of Economics has the word capitalism printed only three times (in a textbook of over 850 pages); of those three occurrences, two are quotes from newspapers and one is a quote from Churchill used in the introduction to one chapter. \n\nHowever, if you wish to use the dictionary definition of capitalism (from [\nMerriam-Webster ](https:\/\/www.merriam-webster.com\/dictionary\/capitalism) ):\n\n> : an economic system characterized by private or corporate ownership of\n> capital goods, by investments that are determined by private decision, and\n> by prices, production, and the distribution of goods that are determined\n> mainly by competition in a free market\n\nthen central banks or the Fed are fully consistent with a capitalist system. The\ndescription above talks about a system that is just characterized by private\nownership, and where the distribution of goods is _mainly_ determined by\ncompetition in a free market. Thus, having a central bank, or even whole\nsectors like, let's say, healthcare controlled by government, is fully\nconsistent with capitalism provided the economy _mainly_ relies on the free\nmarket.\n\nHowever, under some different specific definitions of capitalism the above may\nnot hold. Since capitalism is a poorly defined and value-laden term, it is best\nto just avoid it, as done throughout most of the modern economic literature (or\nif you have to use it, it is best to refer to some specific definition)."} +{"query":"Is it always a trade off between efficiency and equity?\n\nAre there any situations where we can achieve both equity and efficiency? 
I'm thinking of the Covid-19 vaccine program, which is run by the government. Although the cost of the program is paid with money from taxes, it still helps with economic growth, right? (In my mind, it's like if people are saved they will have a normal life and buy stuff, which will boost production.) Or is there anything wrong in my thinking?","reasoning":"One example could be the emigration from poor countries to rich countries. This enhances equity. We need to check whether this also benefits efficiency, e.g., gain of GDP.","id":"91","excluded_ids":["N\/A"],"gold_ids_long":["efficiency_equity\/jep25383.txt"],"gold_ids":["efficiency_equity\/jep25383_16.txt"],"gold_answer":"$\\begingroup$\n\n> Is it always a trade off between efficiency and equity?\n\nNo, there is not always a trade-off between efficiency and equity, but equity\ndoes not simply mean the government spends some money on something in the\npresence of a progressive tax system (that is an extremely na\u00efve view and\nthere are myriad examples where programs that do that exacerbate inequality).\n\nEquity is one of the words that laymen throw around so much that it is\npractically meaningless in non-technical debate, but _in economics_ equity is\ndefined as having more equal or fair outcomes (see Feldman 1987). In\neconomics, when it comes to equity, the most focus is placed on income, wealth\nor consumption inequality (see the discussion in Atkinson (2015) Inequality).\n\nThere are some examples where equity and efficiency go hand in hand. For\nexample, if we consider equity in a global sense (not just national), we have\nsolid evidence that free migration is good for both efficiency and equity. 
It could by\nreasonable estimates double world's GDP (see [ Clemens, 2011\n](https:\/\/www.aeaweb.org\/articles?id=10.1257\/jep.25.3.83) ) which would be\ngood for efficiency, and from global perspective it would help to reduce\nworld's income inequality, (albeit it may worsen national-level ones) as it\nwould significantly raise incomes and wealth of poor migrants and thus likely\nreduce global income inequality (see some layman friendly discussion of that\nin [ Peterson 2013 ](https:\/\/www.e-ir.info\/2013\/05\/01\/international-migration-\nand-global-economic-inequality\/) ).\n\nAlternative simple textbook example of situation where there is no equity-\nefficiency tradeoff, would be Pigouvian tax on something consumed\npredominantly by rich, like flying. Flying creates an awful lot of pollution,\nand Pigouvian tax would thus be efficient as it would correct for the\nexternality (that airlines without tax can pollute for free), and to an extent\nthat richer people fly more often, it would probably reduce consumption\ninequality somewhat.\n\nOf course, the ones above are just example, there are more cases. 
Atkinson\n(2015) source mentioned above has a nice overview of some cases where there is\nlittle or none efficiency equity tradeoff in chapter 9 (although note that in\nearlier chapters he warns that not all examples are generally accepted by all\neconomists since some are controversial).\n\nHowever, covid-19 vaccine program is not necessarily good example of that.\nCovid-19 is particularly dangerous to people who are older ( [ Piroth et al\n2020 ](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2213260020305270) ),\nfor example wealth inequality increases with life expectancy as older people\nare expected to have more wealth due to growth dynamics ( [ Vandenbroucke,\n2016) ](https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=2754973) , and\nwhen old people pass to an extent they have more than one child they spread\nout their wealth around making society more equitable.\n\nWhat even more, as [ Deaton (2021) ](https:\/\/www.nber.org\/papers\/w28392)\nargues:\n\n> There is a widespread belief that the COVID-19 pandemic has increased global\n> income inequality, reducing per capita incomes by more in poor countries\n> than in rich. This supposition is reasonable but false. Rich countries have\n> experienced more deaths per head than have poor countries; their better\n> health systems, higher incomes, more capable governments and better\n> preparedness notwithstanding. The US did worse than some rich countries, but\n> better than several others. Countries with more deaths saw larger declines\n> in income. There was thus not only no trade-off between lives and income;\n> fewer deaths meant more income. As a result, per capita incomes fell by more\n> in higher-income countries.\n\nIf equity would be goal, then from global perspective the best course of\naction would be to ship as much vaccines to poor countries and let covid-19\nravage richer countries. 
What even more, in general research shows that\npandemics like that of bubonic plague in the past or other large catastrophes\n(like large scale wars) were great equalizers (e.g. see [ Alfani 2017\n](https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/ehr.12652) ; [ Milanovic 2016\n](https:\/\/www.nature.com\/articles\/537479a) ; [ Piketty and Saez 2014\n](https:\/\/science.sciencemag.org\/content\/344\/6186\/838.abstract) ), so these\nactually help equity (at least along income and wealth lines). There are some\nthat argue that covid-19 might be somewhat different (e.g. see [ Sayed & Peng\n2021 ](https:\/\/link.springer.com\/article\/10.1007\/s43546-021-00059-4) ) but it\nis more likely than not that it will reduce inequality globally, at least\nalong wealth and income. However, it is precisely in the rich countries where\npandemic and lockdowns that were imposed as a response to it create the\nlargest efficiency loses so from pure efficiency perspective vaccines are more\nimportant."} +{"query":"What is social science and discussion about Type II error preference?\n\nToday, my senior lecturer in economics and finance class teaching about laws impact companies' operation told us one sentence \"Also keep in mind that in (at least social) science, we care (try to avoid) about Type II (making a claim on some new finding while it is actually not true), more than Type I error (rejecting some new findings while it is actually true). 
\"\n\nFrom this link, the definition of social science is: \"Social science is, in its broadest sense, the study of society and the manner in which people behave and influence the world around us.\"\n\nBased on this definition, why economics relating to social science, and what is other than social science","reasoning":"We need to check more discussions\/research on the linkage between economics, science and social science.","id":"92","excluded_ids":["N\/A"],"gold_ids_long":["econ_social\/PMC5640760.txt"],"gold_ids":["econ_social\/PMC5640760_15.txt","econ_social\/PMC5640760_11.txt","econ_social\/PMC5640760_27.txt","econ_social\/PMC5640760_23.txt","econ_social\/PMC5640760_10.txt","econ_social\/PMC5640760_2.txt","econ_social\/PMC5640760_12.txt","econ_social\/PMC5640760_28.txt","econ_social\/PMC5640760_13.txt","econ_social\/PMC5640760_4.txt","econ_social\/PMC5640760_26.txt","econ_social\/PMC5640760_21.txt","econ_social\/PMC5640760_24.txt","econ_social\/PMC5640760_6.txt","econ_social\/PMC5640760_25.txt","econ_social\/PMC5640760_8.txt","econ_social\/PMC5640760_18.txt","econ_social\/PMC5640760_29.txt","econ_social\/PMC5640760_16.txt","econ_social\/PMC5640760_9.txt","econ_social\/PMC5640760_14.txt","econ_social\/PMC5640760_1.txt","econ_social\/PMC5640760_22.txt","econ_social\/PMC5640760_3.txt","econ_social\/PMC5640760_17.txt","econ_social\/PMC5640760_20.txt","econ_social\/PMC5640760_5.txt"],"gold_answer":"$\\begingroup$\n\n> Based on this definition, why economics relating to social science, and what\n> is other than social science?\n\nThere is actually no clear cut consensus on where Economics belongs (although\nit is fair to say most would likely put it into category of social science).\nSome authors consider it to be science, some social science, some even moral\nscience, and some even argue it should have its own category. A good paper\nthat tries to answer this question for economics is [ Hudson (2017)\n](https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC5640760\/) . 
As Hudson writes:\n\n> Is economics a science or a social science? Arguments have been made for\n> both orientations, i.e. that economics is a social science (Frey 1999), a\n> science (Frey 1999) and even a moral science (Schabas 2009). In terms of\n> sciences it has been linked with physics, with many physicists doubling as,\n> or transforming into, economists and a subpart of physics, econophysics,\n> specifically evolving which deals with economics (Stanley et al. 1999).\n> There are also links with biology (Marshall 1920; Daly 1968). Frey (1999)\n> argues economics is a social science as it is part of those sciences which\n> deal with actual problems of society, i.e. it is the subject matter which\n> makes it a social science. However, he also points out that in practice\n> economists tend to fill the journals with axioms, lemmas and proofs, i.e.\n> they adopt what they perceive as a scientific, and particularly a\n> mathematical, methodology. Mayer (1980) argues that economists act as though\n> economics is a hard science and this is reflected by their sophisticated use\n> of mathematics. However, he then goes on to argue that econometrics is not\n> sufficiently advanced to enable us to test theories as a hard scientist\n> would.\n\nBut this being said, as mentioned in my first paragraph most would call it\nsocial science as its main focus are societal issues (although you can have 1\nperson Robinson Crusoe economy, typically economies involve large groups of\npeople).\n\n> Why he can make the claim above regarding Type I and Type II in social\n> science?\n\nI do not think there is any rigorous defense of such claim. In any science if\nyou care about the science itself both type I and type II errors are equally\nproblematic. You should ask your professor for why he thinks that, but I do\nnot think that you can rigorously defend such position.\n\nI guess in some _specific_ circumstances you could defend the claim. 
If we\ntest whether some food is poisonous, then we should do everything we can to\navoid a type II error (e.g. saying the food is not poisonous when it actually\nis), since if we commit a type II error we get someone poisoned, whereas if\nwe commit a type I error we just think some safe food is poisonous, but\nassuming we have plenty of other food options that we know are safe, we won't\nstarve and we won't kill anyone.\n\nYou could probably set up similar policy scenarios in the subfield of policy\neconomics, but extending that claim about type I and type II errors to all\nsocial science or even all economics is hardly defensible (if anything, I can\nthink of far more examples from medicine, which is not considered a social\nscience)."} +{"query":"What is the difference between Impression Management and Signaling Theory?\n\nI'm interested in theories on how organisations shape their stakeholders' (especially consumers' and investors') perceptions and decisions. I read about Impression Management and Signaling Theory. They seem to be quite similar and I'm unsure when to apply which of them and where they differ. I could not find an article comparing both theories. 
Signaling Theory is especially for cases with information asymmetry, but it seems like Impression Management is also more common in cases with scarce information on one side.","reasoning":"We need to check the definitions of impression management and signaling theory.\n","id":"93","excluded_ids":["N\/A"],"gold_ids_long":["impression_management\/Signallingeconomics.txt","impression_management\/Impressionmanagement.txt"],"gold_ids":["impression_management\/Signallingeconomics_7.txt","impression_management\/Impressionmanagement_4.txt"],"gold_answer":"$\\begingroup$\n\nI don't believe those two terms are used in the same spheres.\n\nTo me, an economic theorist, [ signaling\n](https:\/\/en.wikipedia.org\/wiki\/Signalling_\\(economics\\)) plays a role in\nmodels with asymmetric information when the informed party moves first and the\nuninformed player reacts, treating the first action as a signal about the\nprivate information. This idea goes back to [ Spence\n](https:\/\/en.wikipedia.org\/wiki\/Michael_Spence) , and also plays a role in\nbiology with non-human interaction.\n\nIn contrast, [ impression management\n](https:\/\/en.wikipedia.org\/wiki\/Impression_management) is (maybe just to me)\nan umbrella term for all kinds of behavior aiming at improving the perception\n(\"image\") of something. I would use it loosely like \"public relations\" and not\nwith a formal definition in the context of a mathematical model in mind (as\nopposed to signaling). Because the history of thought is so different, both\nterms are difficult to compare. The idea goes back to the sociological book \"\n[ The Presentation of Self in Everyday Life\n](https:\/\/en.wikipedia.org\/wiki\/The_Presentation_of_Self_in_Everyday_Life) \"\nand there is no mathematical model. 
Impression management is essentially Goffman's\n\"self-presentation theory, which suggests that people have the desire to\ncontrol the impressions that other people form about them.\" (from the\nWikipedia entry)"} +{"query":"Have the deregulation measures that caused the 2007-2008 crisis been rolled back?\n\nThe 2007-2008 financial crisis was largely attributed to several deregulation measures, especially the Gramm-Leach-Bliley act. Have any of these measures been rolled back after the crisis, or new ones introduced to prevent a similar crisis?","reasoning":"We first need to check whether deregulation is the root cause of the recession, i.e., it may only be the symptom but not the disease. Then we can find more discussions about the inability of existing institutions.","id":"94","excluded_ids":["N\/A"],"gold_ids_long":["deregulation\/S1042957312000277cas.txt"],"gold_ids":["deregulation\/S1042957312000277cas_2.txt","deregulation\/S1042957312000277cas_8.txt","deregulation\/S1042957312000277cas_4.txt","deregulation\/S1042957312000277cas_10.txt","deregulation\/S1042957312000277cas_11.txt","deregulation\/S1042957312000277cas_9.txt","deregulation\/S1042957312000277cas_6.txt","deregulation\/S1042957312000277cas_3.txt","deregulation\/S1042957312000277cas_1.txt","deregulation\/S1042957312000277cas_7.txt","deregulation\/S1042957312000277cas_5.txt"],"gold_answer":"$\\begingroup$\n\nActually, in the literature the Great Recession itself is not attributed\nprimarily to financial deregulation. Financial deregulation played a\nsignificant and extremely important role in the Great Recession, but saying\nit's the primary cause of the Great Recession would be an exaggeration. It is\ngenerally agreed that the root causes of the Great Recession were not just\nfinancial deregulation but also macroeconomic imbalances, underlying moral\nhazard issues inherent in the finance sector, past bailouts, changes in the\ntax code that incentivize more leverage and also some macro policy mistakes. 
The deregulation was an\nadditional catalyst that exacerbated the above, but it is generally agreed\nthat it is very difficult to pin the Great Recession on a single primary cause\n(see Eichengreen's Hall of Mirrors, [ Stiglitz 2009\n](https:\/\/www8.gsb.columbia.edu\/faculty\/jstiglitz\/sites\/jstiglitz\/files\/2009_Interpreting_Causes.pdf)\n, [ Verick et al 2010\n](https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=1631069) or [\nJagannathan et al 2010\n](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1042957312000277?casa_token=BxyUO4ePGIoAAAAA:pBiPcZeUL-2Xtuz9RLZSjPNaafvtetrEtDjFOVp77hbYp1VbmAPg9U7cl8VzFWHQCWzAa3qr45AW)\n). Even when it comes to the deregulation itself, the role of the Gramm-Leach-\nBliley Act is not settled and remains disputed (see [ Wallison 2010\n](https:\/\/link.springer.com\/chapter\/10.1007\/978-1-4419-6637-7_2) ). It can be\nargued it contributed to the Too Big To Fail problem that further exacerbated\nmoral hazard, but far more important was deregulation that allowed banks to\nbecome extremely overleveraged, such as the SEC's 2004 decision to allow more\nleverage, or simply letting shadow banks fly under the radar. In fact,\nthe literature shows universal banks (which were prohibited by Glass-\nSteagall) are not important contributors to financial instability, e.g. 
see [\nDietrich et al 2012\n](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S106297691200004X?casa_token=fEMlx7Y85_QAAAAA:yvLSCG6mEqCJWoCIxdUT3v8AYk9m1nOoDjgdbsm4Ee9JwhLsni5dOMApHSCFZOgLIG0MKUnmbi8r)\n).\n\nNow getting to your main question, measuring financial regulation across time\nis a tricky business since you cannot simply count the number of laws.\nHowever, there is consensus that financial regulation has increased\nsignificantly since 2009 ( [ Rashid, 2019\n](https:\/\/mpra.ub.uni-muenchen.de\/93447\/1\/MPRA_paper_93447.pdf) ; [ Tarullo 2019\n](https:\/\/pubs.aeaweb.org\/doi\/pdfplus\/10.1257\/jep.33.1.61) ).\n\nSome of the new regulations are similar to the old ones. For example, the 2010\nDodd-Frank act can be said to be somewhat inspired by Glass-Steagall in some\nprovisions, even though, as explained by Tarullo (2019), Dodd-Frank largely\neschewed the old regulatory solutions and did not reintroduce the separation\nof commercial and investment banking (as discussed above, that is likely not\nthe main problem when it comes to financial stability anyway).\n\nConsequently, modern bank regulation is fundamentally different from the old\none (at least in the US), reflecting the progress of the science on this\nissue. Modern banking regulation places emphasis not just on microprudential\npolicies, but on macroprudential policies, systemic stability, stress testing,\netc., and the idea of clamping down on universal banking was more or less\neschewed.\n\nFor example, Dodd-Frank focuses quite a lot on macroprudential policies and\nsystemic stability, and later amendments (e.g. 
the Collins amendment) also\npushed for more stringent capital requirements for large banks (Tarullo 2019).\nBasel III (which is an international regulation, but it affects the US as well\n- at least some provisions do) also significantly improves the capital\nstandards of banks.\n\nMoreover, the stress tests introduced by Dodd-Frank and administered by the\nFed got more stringent and risk sensitive over time (Hirtle and Lehnert 2014).\n\nShadow banking is one area where the modern regulation is seriously lacking\n(Tarullo 2019), but that is not to say that Glass-Steagall could solve that\nissue. The problem with shadow banks is that they are technically not banks\neven though they behave like banks, and due to various issues such as\nregulatory arbitrage they are extremely difficult to regulate.\n\nConsequently, to sum up: yes, financial regulation has increased since the\nGreat Recession; some of the new rules were based on the old, but for the most\npart the old regulatory framework was eschewed because in retrospect it was\nnot so great anyway (Glass-Steagall virtually completely ignores\nmacroprudential issues, which are now agreed to be _extremely_ important). 
Rather, the new\nregulation is based on our current science and understanding of the market\nfailures and issues in the financial sector, which is actually to a certain\ndegree informed by the lessons learned from the Great Recession."} +{"query":"Bootstrap always valid under asymptotic Normality?\n\nIf an estimator is known to have an asymptotically normal distribution, is that sufficient for the bootstrap to be valid?\n\nIt seems that it must be, but in 20 minutes of Googling I have come up empty on a proof.","reasoning":"With asymptotic normality, we will need to define nonparametric bootstrap samples.","id":"95","excluded_ids":["N\/A"],"gold_ids_long":["bootstrap\/180904016pdf.txt"],"gold_ids":["bootstrap\/180904016pdf_3.txt"],"gold_answer":"$\\begingroup$\n\nTheorem 2.1 in [ Horowitz (2019)\n](https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1809\/1809.04016.pdf) is what you are\nlooking for."} +{"query":"Has monopoly theory incorporated network effects as a source of monopoly?\n\nI studied industrial organization for my Econ. Ph.D. four decades ago. At that time industrial organization had no way to incorporate 'network effects' into monopoly theory. With the advent of social media (especially Facebook) since then, accusations of monopoly power against social media are heard daily. 
Has someone articulated a theory of network effects as the source of monopoly?\n\nMany thanks.","reasoning":"We first need to figure out the definition of network effects, and then understand how\/why they link to monopoly.","id":"96","excluded_ids":["N\/A"],"gold_ids_long":["network_effects\/S1573448X06030317.txt"],"gold_ids":["network_effects\/S1573448X06030317_25.txt","network_effects\/S1573448X06030317_18.txt","network_effects\/S1573448X06030317_33.txt","network_effects\/S1573448X06030317_6.txt","network_effects\/S1573448X06030317_23.txt","network_effects\/S1573448X06030317_26.txt","network_effects\/S1573448X06030317_8.txt","network_effects\/S1573448X06030317_3.txt","network_effects\/S1573448X06030317_5.txt","network_effects\/S1573448X06030317_19.txt","network_effects\/S1573448X06030317_2.txt","network_effects\/S1573448X06030317_32.txt","network_effects\/S1573448X06030317_27.txt","network_effects\/S1573448X06030317_35.txt","network_effects\/S1573448X06030317_20.txt","network_effects\/S1573448X06030317_16.txt","network_effects\/S1573448X06030317_7.txt","network_effects\/S1573448X06030317_31.txt","network_effects\/S1573448X06030317_34.txt","network_effects\/S1573448X06030317_13.txt","network_effects\/S1573448X06030317_14.txt","network_effects\/S1573448X06030317_15.txt","network_effects\/S1573448X06030317_22.txt","network_effects\/S1573448X06030317_4.txt","network_effects\/S1573448X06030317_12.txt","network_effects\/S1573448X06030317_24.txt","network_effects\/S1573448X06030317_29.txt","network_effects\/S1573448X06030317_17.txt","network_effects\/S1573448X06030317_21.txt","network_effects\/S1573448X06030317_37.txt","network_effects\/S1573448X06030317_28.txt","network_effects\/S1573448X06030317_10.txt","network_effects\/S1573448X06030317_36.txt","network_effects\/S1573448X06030317_1.txt","network_effects\/S1573448X06030317_9.txt","network_effects\/S1573448X06030317_30.txt"],"gold_answer":"$\\begingroup$\n\nHere's a solid example of it in formal literature, with 
about 1k citations: \n[ Competition with Switching Costs and Network Effects\n](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1573448X06030317) by\nJoseph Farrell and Paul Klemperer\n\nThe general thought of the article is that customers who are \"locked in\" to a\nparticular product can lead to competitors preferring to separate markets\nrather than competing with one another. To be pedantic, they are not full\nmonopolies (but they're certainly not classically competing either)."} +{"query":"Where can I find a calculator for constrained optimization of a general-form algebraic equation?\n\nI'm working with a fairly complex equation and I need to carry out constrained optimization on it. The first-order conditions are very messy to solve by hand and hence I thought to use online calculators. The problem with calculators like Wolfram is that I cannot choose the variables with respect to which the optimization needs to be done. If I use numbers in place of the other variables it would defy the purpose. Are there any calculators available online for the problem I have? I am open to using coding-based software like Matlab.\n\nFor example:\nMaximize $ax^b - x$\n\nw.r.t. $x$\n\ns.t. $x>0$, $a>1$, $0<b<1$\n\nThe original problem is multivariable and much more complex. I don't want to optimize with specific values for a,b.","reasoning":"We need to check the reference sheet of the Maximize function of calculators to understand the usage.","id":"97","excluded_ids":["N\/A"],"gold_ids_long":["constrained_optimization\/Maximizehtml.txt"],"gold_ids":["constrained_optimization\/Maximizehtml_10.txt"],"gold_answer":"$\\begingroup$\n\nYou can set the decision variables in Wolfram Alpha. E.g.,\n\n> maximize [{x*q-q^2+a} , {q}]\n\nmaximizes the function $x*q-q^2+a$ w.r.t. the variable $q$ . 
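Since the asker is open to code-based tools, the same first-order-condition work can be sanity-checked in plain Python. A minimal sketch, assuming the asker's example is $f(x) = ax^b - x$ with $a>1$ and $0<b<1$ (in that case the FOC $abx^{b-1}=1$ has the closed form $x^* = (ab)^{1\/(1-b)}$, which is what a symbolic tool would return with $a$, $b$ kept as parameters):

```python
# Pure-Python check of the (assumed) example f(x) = a*x**b - x with a > 1,
# 0 < b < 1. The first-order condition a*b*x**(b-1) = 1 gives the interior
# maximizer x* = (a*b)**(1/(1-b)), which satisfies the constraint x > 0.

def analytic_maximizer(a, b):
    assert a > 1 and 0 < b < 1
    return (a * b) ** (1.0 / (1.0 - b))

def f(x, a, b):
    return a * x ** b - x

# Numerically confirm that x* beats a grid of alternatives for one
# illustrative parameter choice.
a, b = 2.0, 0.5
x_star = analytic_maximizer(a, b)  # (2 * 0.5) ** (1 / 0.5) = 1.0
grid = [0.01 * k for k in range(1, 500)]
assert all(f(x_star, a, b) >= f(x, a, b) for x in grid)
```

The point is the same as with Wolfram Alpha's Maximize: keep the parameters symbolic (here, as function arguments) and only plug in numbers to verify.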
\nFor further details read the [ reference sheet of the Maximize function\n](https:\/\/reference.wolfram.com\/language\/ref\/Maximize.html) ."} +{"query":"Can Difference-in-Differences be used when the treatment effects get smaller with time since treatment?\n\nRecently, an emerging line of study has said that the traditional two-way fixed effects (TWFE) specification fails in a lot of cases because of the heterogeneous effects of laws over time, following papers such as Goodman-Bacon (2019) and de Chaisemartin (2020).\n\nIn particular, Goodman-Bacon (2019) did a great job decomposing the single post-treatment dummy. In his note, he answered the question \"Is DD wrong?\":\n\nNot in general. The DD research design\u2014comparing outcomes for groups whose treatment status changes to groups whose treatment status does not change\u2014still can be a good idea. The DD specification\u2014estimating the coefficient on a single post-treatment dummy\u2014is a bad idea when your treatment effects vary over time (get bigger with time since treatment). In this case, just summarize your findings in a different way\u2014event-study or a linear trend-break, for instance.\n\nMy question here is: if I expect and argue that the treatment effects get smaller with time since treatment, is the DD specification still a bad idea?","reasoning":"The central question is to check inference procedures for treatment effect parameters using Difference-in-Differences with multiple time periods.","id":"98","excluded_ids":["N\/A"],"gold_ids_long":["treatment_difference\/180309015.txt"],"gold_ids":["treatment_difference\/180309015_21.txt"],"gold_answer":"$\\begingroup$\n\nIt depends on a few things. First, if you expect treatment effects to change\nover time, then you want to estimate an event-study style DD specification.\n\nIf you have a single treatment timing (all treatment starts at the same time),\nthen an event-study will unbiasedly estimate the treatment path. 
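To make "event-study style" concrete, here is a minimal sketch (the adoption year, window, and names are invented for illustration) of building the relative-time indicators that replace the single post-treatment dummy under single treatment timing:

```python
# Hypothetical event-study setup: every treated unit adopts in 2005 (single
# timing), and we replace the lone post dummy with event-time indicators
# D_k = 1{treated and year - 2005 == k}, omitting k = -1 as the baseline.
TREAT_YEAR = 2005

def event_time_dummies(year, treated, window=(-3, 3)):
    lo, hi = window
    k = min(hi, max(lo, year - TREAT_YEAR))  # bin endpoints into the window
    return {j: int(treated and j == k) for j in range(lo, hi + 1) if j != -1}
```

Regressing the outcome on these dummies (plus unit and time fixed effects) traces out the treatment path period by period instead of forcing one constant effect, so effects that shrink with time since treatment show up directly.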
If you have\nvariation in treatment timing, you want to use a modern method, e.g. [\nCallaway and Sant'Anna (2020)\n](https:\/\/pedrohcgs.github.io\/files\/Callaway_SantAnna_2020.pdf) , [ Gardner\n(2021) ](https:\/\/jrgcmu.github.io\/2sdd_current.pdf) , or [ Sun and Abraham\n(2020) ](http:\/\/economics.mit.edu\/files\/14964)"} +{"query":"Including an endogenous covariate in a regression model as a control to estimate the effect of another variable of interest\n\nI am interested in the effect of an independent variable $x$ on a dependent variable $y$, like so\n\n$$y = \\beta_0 + \\beta_1 x + e$$\n\nwhere $e$ is the error term. Now $x$ includes two effects $z_1$ and $z_2$. For simplicity, let's say $x = z_1 + z_2$. If I include $z_1$ in the model, like this\n\n$$y = \\beta_0 + \\beta_1 x + \\beta_2 z_1 + e$$\n\nDoes that mean that $\\beta_1$ is predominantly capturing the effect of $z_2$?","reasoning":"This is related to the Frisch\u2013Waugh\u2013Lovell theorem, which is applied when the regression we are concerned with is expressed in terms of two separate sets of predictor variables.","id":"99","excluded_ids":["N\/A"],"gold_ids_long":["frisch\/FrischE28093WaughE28.txt"],"gold_ids":["frisch\/FrischE28093WaughE28_4.txt","frisch\/FrischE28093WaughE28_5.txt"],"gold_answer":"$\\begingroup$\n\n> If I include $z_1$ in the model, like this: $$ y = \\beta_0 + \\beta_1 x\n> + \\beta_2 z_1 + e, $$ Does that mean that $\\beta_1$ is predominantly\n> capturing the effect of $z_2$ ?\n\nYes. 
This can be seen using the [ Frisch-Waugh-Lovell theorem\n](https:\/\/en.wikipedia.org\/wiki\/Frisch%E2%80%93Waugh%E2%80%93Lovell_theorem) :\n\nIf you regress: $$ y = \\beta_0 + \\beta_1 x + \\beta_2 z_1 + e, $$ then\n$\\beta_1$ will be the same as the corresponding coefficient of a modified\nregression: $$ \\hat y = \\gamma_0 + \\beta_1 \\hat x + \\hat e \\tag{1} $$ where\n$\\hat x$ is the residual from regressing $x$ on $z_1$ and the same for\n$\\hat y$ . Now if we regress $x$ on $z_1$ then the residual is equal to:\n$$ M_{z_1} x, $$ where $M_{z_1} = 1 - z_1(z_1'z_1)^{-1}z_1'$ is the\nannihilator matrix. If $x = z_1 + z_2$ then: $$ M_{z_1} x = (1 -\nz_1(z_1'z_1)^{-1}z_1')(z_1 + z_2) = M_{z_1}z_2 $$ As such, substituting into\n$(1)$ , we have: $$ \\hat y = \\gamma_0 + \\beta_1 \\hat z_2 + \\hat e, $$ where\n$\\hat z_2$ is now the residual from regressing $z_2$ on $z_1$ . Using the\nFrisch-Waugh-Lovell theorem in the reverse direction, this gives that\n$\\beta_1$ is also equal to the coefficient in the following regression: $$ y\n= \\delta_0 + \\beta_1 z_2 + \\delta_2 z_1 + \\varepsilon. $$ In other words,\n$\\beta_1$ will also be equal to the coefficient for $z_2$ for a regression\nof $y$ on both $z_2$ and $z_1$ . Notice, however, that in general\n$\\beta_2 \\ne \\delta_2$ (so the coefficients for $z_1$ will not be equal in\nthe two regressions).\n\nAnother way to see this is by immediately substituting $x = z_1 + z_2$ into\nthe original regression, then: $$ \\begin{align*} y &= \\beta_0 + \\beta_1(z_1 +\nz_2) + \\beta_2 z_1 + e,\\\\\\ &= \\beta_0 + \\beta_1 z_2 + (\\beta_1 + \\beta_2) z_1\n+ e. 
\\end{align*} $$ So the coefficient on $z_2$ in the new regression is\nidentical to the coefficient on $x$ in the original one $(\\beta_1)$ ,\nwhile the coefficient on $z_1$ in the new regression is the sum of the\ncoefficients on $x$ and $z_1$ in the original regression $(\\beta_1 +\n\\beta_2)$ .\n\n> And a follow-up question is: if $z_1$ is correlated with the error term,\n> will including it in the model bias the estimate of $\\beta_1$ ?\n\nThe opposite is true. Including $z_1$ in the regression will make the\nestimate of $\\beta_1$ unbiased. Consider the following data generating\nprocess: $$ y = \\beta_0 + \\beta_1 x + e, \\tag{2} $$ and assume that $e$ is\ncorrelated with $z_1$ . Then we can write: $$ e = \\gamma z_1 + \\varepsilon,\n$$ where $\\varepsilon$ is now uncorrelated with $z_1$ (and where\n$\\gamma = \\mathbb{E}(e z_1)\/\\mathbb{E}((z_1)^2) \\ne 0$ ). Assume for\nsimplicity that $e$ is uncorrelated with $z_2$ .\n\nThen the estimate of $\\beta_1$ will be biased as the orthogonality condition\n$\\mathbb{E}(e x) = 0$ is not satisfied. Indeed: $$ \\begin{align*}\n\\mathbb{E}(ex) &= \\mathbb{E}(e z_2) + \\mathbb{E}(\\gamma z_1 z_1) +\n\\mathbb{E}(\\varepsilon z_1),\\\\\\ &= \\mathbb{E}(\\gamma (z_1)^2) \\ne 0\n\\end{align*} $$ If we include $z_1$ in the regression, then substituting\n$e = \\gamma z_1 + \\varepsilon$ into $(2)$ we can write: $$ y = \\beta_0 +\n\\beta_1 x + \\gamma z_1 + \\varepsilon. $$ And $\\mathbb{E}(\\varepsilon) =\n\\mathbb{E}(\\varepsilon x) = \\mathbb{E}(\\varepsilon z_1) = 0$ . So by\nincluding $z_1$ in the regression, we can guarantee the residual\n$\\varepsilon$ to be uncorrelated with all covariates. This means that\n$\\beta_1$ is identified and its estimate will be unbiased."} +{"query":"What is the purpose of taxes if central banks can fund deficit spending?\n\nSomewhat straightforward. If the Federal Reserve can print money to buy treasuries to fund deficit spending, what is the purpose of taxes? 
Sure, taxes reduce the amount of deficit that needs to be picked up by the Fed, but if, as I've seen argued, money \u201cprinting\u201d doesn\u2019t necessarily lead to inflation, what's the point of levying taxes? Why doesn\u2019t the Fed just procure all of the money itself if it could theoretically do so without adverse effects?","reasoning":"It is related to the monetary theory that predicts a strong long-run correlation between money growth and inflation.","id":"100","excluded_ids":["N\/A"],"gold_ids_long":["federal_reserve_tax\/vol2035.txt"],"gold_ids":["federal_reserve_tax\/vol2035_10.txt","federal_reserve_tax\/vol2035_5.txt","federal_reserve_tax\/vol2035_6.txt","federal_reserve_tax\/vol2035_1.txt","federal_reserve_tax\/vol2035_3.txt","federal_reserve_tax\/vol2035_0.txt","federal_reserve_tax\/vol2035_8.txt","federal_reserve_tax\/vol2035_7.txt","federal_reserve_tax\/vol2035_9.txt","federal_reserve_tax\/vol2035_2.txt","federal_reserve_tax\/vol2035_4.txt"],"gold_answer":"$\\begingroup$\n\nBecause the Fed, or any central bank, cannot fund 100% of a budget without any\nadverse effects. I do not know where you heard such an argument but it is\nblatantly false.\n\nFirst, it is virtually unanimously agreed by top policy economists that a\ngovernment cannot fund an arbitrary amount of real spending (i.e. spending on\nreal goods and services). This question was actually put forward in [ a poll\n](https:\/\/www.igmchicago.org\/surveys\/modern-monetary-theory\/) among top Ivy\nLeague US economists by IGM and there was almost unanimous agreement that\ngovernment cannot do that (in fact, no one even agreed that it could, just a\nfew responses with no opinion and low confidence, since the poll scores that\nas well).\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/lnWbn.png)\n](https:\/\/i.sstatic.net\/lnWbn.png)\n\nSecond, a money supply increase does not _necessarily_ cause inflation because\nother variables might push inflation in a different direction. 
For example, an\nincrease in the money supply can be offset by the velocity of money or a\nchange in real output, etc. However, a money supply increase, _ceteris\nparibus_ , _does_ cause inflation.\n\nNumerous studies including [ Frain (2004)\n](https:\/\/www.esr.ie\/ESR_papers\/vol35_3\/Vol%2035_3Frain.pdf) or [ De Grauwe &\nPolan (2005)\n](https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1111\/j.1467-9442.2005.00406.x)\nshow that there is a causal relationship between inflation and money supply.\n\nYou can also see that just by plotting money supply growth against inflation.\nIn the figure below, on the left you can see the relationship between money\nsupply growth and inflation in the US over several years; on the right you can\nsee a cross-sectional plot of money supply growth and inflation across various\ncountries. The images of course only show correlation, but causality is\nfurther corroborated by empirical studies such as those cited above (the\ncharts are taken from Mankiw Macroeconomics 8ed pp 107-108).\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/rR4aUm.png)\n](https:\/\/i.sstatic.net\/rR4aUm.png) [ ![enter image description\nhere](https:\/\/i.sstatic.net\/rJPhpm.png) ](https:\/\/i.sstatic.net\/rJPhpm.png)\n\nThere is very little doubt in empirical research that there is a relationship\nbetween money growth and inflation. There are still questions on how strong\nthe relationship is, and how stable the relationship is over time, which is\ndiscussed in the studies cited above, but either way the evidence points\ntoward a causal relationship.\n\nHowever, this does not mean government cannot monetarily finance _some_ of its\nreal spending, as despite the above the government can still earn seigniorage\non issuing money, and it can take some time before inflation kicks in as the\nrelationship described above is not necessarily immediate but can work with\nlags. 
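The offsetting-variables logic above can be put in quantity-theory terms: taking growth rates of $MV = PY$ gives inflation $\pi \approx g_M + g_V - g_Y$. A toy sketch with made-up numbers:

```python
# Quantity equation MV = PY in growth rates: inflation is roughly money growth
# plus velocity growth minus real output growth. All figures below are invented
# purely for illustration.

def implied_inflation(money_growth, velocity_growth, output_growth):
    return money_growth + velocity_growth - output_growth

# Ceteris paribus, money growth passes through to inflation one-for-one...
assert abs(implied_inflation(0.10, 0.00, 0.00) - 0.10) < 1e-9
# ...but falling velocity and rising real output can offset it entirely.
assert abs(implied_inflation(0.10, -0.05, 0.05)) < 1e-9
```

This is exactly why money growth need not show up as inflation immediately, even though, holding the other terms fixed, it does.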
As a consequence, taxes are necessary to fund all other real spending\nthat cannot be funded monetarily."} +{"query":"What are some negative\/positive aspects of globalization in regard to developed countries?\n\nI have read many articles on how globalization is helping developing countries, as more and more companies from developed countries have been moving their production overseas. However, as hard as I tried, I couldn't find a lot of articles talking about the opposite side of the story - how globalization affects developed countries.\n\nMost of the articles I found were focused on the fact that moving production overseas will increase unemployment and decrease GDP. However, are there any other aspects (positive or negative)? I would love to learn more about this topic and if you have a good article that dives into this I would be really grateful if you could provide me with a link to it.","reasoning":"Another aspect of globalization is related to international tourism, CO2 emissions and climate change.","id":"101","excluded_ids":["N\/A"],"gold_ids_long":["globalization_developing\/s11356019073724.txt"],"gold_ids":["globalization_developing\/s11356019073724_20.txt"],"gold_answer":"$\\begingroup$\n\nI did a search in Google Scholar and it seems that most papers mainly dig\ninto the benefits of less-developed countries regarding globalization.\n\nBut there are a couple of dimensions here that you can have a look at:\n\n**Globalization helps to reduce carbon emissions from international tourism**\nin developed countries, following [ Daniel, 2020\n](https:\/\/link.springer.com\/article\/10.1007\/s11356-019-07372-4)\n\n**Impact of Globalization on some macro indicators** of developed countries,\nfrom [ Lenka, 2013 ](http:\/\/ojs.spiruharet.ro\/index.php\/jedep\/article\/view\/34)\n\n**Developed-developing country partnerships** : Benefits to developed\ncountries? 
from [ Syed, 2012\n](https:\/\/link.springer.com\/article\/10.1186\/1744-8603-8-17)\n\nThe **Globalization of the Software Industry** : **Perspectives and\nOpportunities for Developed and Developing Countries** from [ Arora, 2005\n](https:\/\/www.journals.uchicago.edu\/doi\/abs\/10.1086\/ipe.5.25056169)\n\nAnd last but not least, a great publication last year from [ Gozgor, 2020\n](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S030142152030121X) that\nproves that a higher level of **economic globalization promotes renewable\nenergy** in developed countries\n\nI went through 6 pages of Google Scholar, and my keyword was \" **[\nglobalization developed countries\n](https:\/\/scholar.google.com.vn\/scholar?start=0&q=globalization%20developed%20countries&hl=vi&as_sdt=0,5)\n** \" in case you want to replicate my search."} +{"query":"Why are standard errors in country-level variables higher than those in firm-level variables?\n\nFrom this discussion, the commenter said\n\nLastly, firm fixed effects may absorb more variation and likely reduced the size of their standard errors.\n\nIn practice, I also mainly see that the standard errors for country-level variables are higher than those for firm-level variables. I am wondering if there is any mathematical or intuitive way to explain this phenomenon.","reasoning":"We can check the equations\/calculations linking firm-level and country-level GDP.","id":"102","excluded_ids":["N\/A"],"gold_ids_long":["country_firm\/granular.txt"],"gold_ids":["country_firm\/granular_6.txt"],"gold_answer":"$\\begingroup$\n\nConsider a global panel model where there are many firms, M >> N, the number\nof countries. If we are expansive about what we mean by firms to include small\nbusinesses, sole-proprietorships, and freelancers, then all domestic economic\naggregates are going to be the sum of the firm level aggregates (employment is\nthe sum of domestic firm employment, national output is the sum of domestic\nfirm output, and so on). 
Aggregate variability of these statistics is\ngoing to be less than the firm level variability. In [ THE GRANULAR ORIGINS OF\nAGGREGATE FLUCTUATIONS BY XAVIER GABAIX\n](http:\/\/pages.stern.nyu.edu\/%7Exgabaix\/papers\/granular.pdf) , he shows that\nif firms are identical and have the output variance $\\sigma^2$ (variance as\npercent deviations from the mean), then the GDP standard deviation (of\npercentage deviations from the mean) is\n$$\\sigma_{gdp}=\\frac{\\sigma}{\\sqrt{n}} $$\n\nThis is going to make GDP much less variable than the average firm and\napproximately zero. An insight of his wonderful paper is that variability in\nfirm size matters a great deal, and in practice:\n$$\\sigma_{gdp}=\\frac{\\sigma}{\\log{n}} $$ This is much, much larger, but still\n$\\sigma_{gdp} << \\sigma$ .\n\nOf course, you asked about the standard errors and not the standard\ndeviations. Recall that the basic calculation of the standard error of the\nmean from independent and identically distributed random variables is\n$$ SE = \\frac{\\sigma}{\\sqrt{T}}, $$ where $T$ is the number of observations.\nWhile things get much more complicated as we allow for heteroskedasticity,\nserial correlation, clustering, and the like, the basic idea remains that you\nneed more observations to shrink the standard errors. Since we established\nthat $\\sigma_{gdp} << \\sigma$ , if we observe firms and nations the same\nnumber of times ($T$) and the observations are IID:\n\n$$SE_{gdp}=\\frac{\\sigma_{gdp}}{\\sqrt{T}} << \\frac{\\sigma}{\\sqrt{T}} = SE_{i}$$\n\nThat's my argument for why (generally) the standard errors at the national\nlevel should not be larger than at the firm level. I'm 99% sure you can cook\nup examples that flip this result with the right covariance structure.\nNevertheless, it does not have to be the case that standard errors of country\nlevel variables are larger than firm level ones, and in the simplest case, the\nopposite is true."}