text | source |
|---|---|
Jevons paradox green taxes, cap and trade, or higher emissions standards). The ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation." By mitigating the economic effects of government interventions designed to promote ecologically sustainable activities, efficiency-improving technological progress may make the imposition of these interventions more palatable, and more likely to be implemented. | https://en.wikipedia.org/wiki?curid=988796 |
Population pyramid A population pyramid, also called an "age-gender-pyramid", is a graphical illustration that shows the distribution of various age groups in a population (typically that of a country or region of the world), which forms the shape of a pyramid when the population is growing. Males are conventionally shown on the left and females on the right, and they may be measured by raw number or as a percentage of the total population. This tool can be used to visualize the age and sex distribution of a particular population. It is also used in ecology to determine the overall age distribution of a population; an indication of the reproductive capabilities and likelihood of the continuation of a species. Population pyramids often contain continuous stacked-histogram bars, making it a horizontal bar diagram. The population size is depicted on the x-axis (horizontal) while the age-groups are represented on the y-axis (vertical). The size of the population can either be measured as a percentage of the total population or by raw number. Males are conventionally shown on the left and females on the right. Population pyramids are often viewed as the most effective way to graphically depict the age and sex distribution of a population, partly because of the very clear image these pyramids represent. A great deal of information about the population broken down by age and sex can be read from a population pyramid, and this can shed light on the extent of development and other aspects of the population | https://en.wikipedia.org/wiki?curid=989011 |
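The horizontal-bar construction described above is easy to reproduce. Below is a minimal sketch in Python using matplotlib; the age groups and counts are invented for illustration, and plotting male counts as negative numbers is simply one common way to place them on the left of the axis.

```python
import matplotlib.pyplot as plt

# Hypothetical counts (in thousands) per five-year age group, youngest first.
age_groups = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29"]
males = [520, 490, 460, 430, 400, 370]
females = [500, 475, 450, 425, 405, 380]

fig, ax = plt.subplots()
# Males are conventionally drawn on the left, so negate their counts.
ax.barh(age_groups, [-m for m in males], color="steelblue", label="Males")
ax.barh(age_groups, females, color="salmon", label="Females")

ax.set_xlabel("Population (thousands)")
ax.set_ylabel("Age group")
# Relabel the x-axis so both sides read as positive counts.
ax.set_xticks([-600, -300, 0, 300, 600])
ax.set_xticklabels([600, 300, 0, 300, 600])
ax.legend()
plt.show()
```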
Population pyramid The measures of central tendency (mean, median, and mode) should be considered when assessing a population pyramid. For example, the average age could be used to determine the type of population in a particular region. A population with an average age of 15 would have a young population compared to a population that has an average age of 55, which would be considered an older population. It is also important to consider these measures because the collected data is not completely accurate. The mid-year population is often used in calculations to account for the number of births and deaths that occur. A population pyramid gives a clear picture of how a country transitions from a high fertility rate to a low fertility rate. The broad base of the pyramid means a relative majority of the population lies between ages 0–14, which tells us that the fertility rate of the country is high and above the sub-replacement fertility level. The population declines in the older age groups, reflecting a life expectancy of around sixty years. However, there are still more females than males in these ranges since women have a longer life expectancy. As reported by the "Proceedings of the National Academy of Sciences," women tend to live longer than men because women do not partake in risky behaviors | https://en.wikipedia.org/wiki?curid=989011 |
Population pyramid Also, Weeks, in "Population: an Introduction to Concepts and Issues," considered that the sex ratio gap for the older ages will shrink as women's health declines due to the effects of smoking, as suggested by the United Nations and U.S. Census Bureau. Moreover, a population pyramid can also reveal the age-dependency ratio of a population. Populations with a big base (a young population) or a big top (an older population) show a higher dependency ratio. The dependency ratio refers to how many people are dependent on the working-age population (ages 15–64). According to Weeks' "Population: an Introduction to Concepts and Issues," population pyramids can be used to predict the future, known as a population forecast. Population momentum, the tendency of a population to keep growing even after replacement-level fertility has been reached, can even be predicted if a population has a low mortality rate, since the population will continue to grow. This then brings up the term doubling time, which is used to predict when the population will double in size. Lastly, a population pyramid can even give insight into the economic status of a country from its age stratification, since supplies are not evenly distributed through a population. In the demographic transition model, the size and shape of population pyramids vary. In stage one of the demographic transition model, the pyramids have the most defined shape. They have the ideal big base and skinny top | https://en.wikipedia.org/wiki?curid=989011 |
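As a small illustration of the dependency ratio and doubling time mentioned above, the following sketch uses invented population figures; the formulas themselves (dependents per 100 working-age people, and the logarithmic form of doubling time, often approximated by the "rule of 70") are standard.

```python
import math

# Hypothetical population counts by broad age group.
young = 30_000_000     # ages 0-14
working = 55_000_000   # ages 15-64
old = 15_000_000       # ages 65+

# Dependency ratio: dependents per 100 people of working age.
dependency_ratio = (young + old) / working * 100
print(f"Dependency ratio: {dependency_ratio:.1f} per 100 working-age people")

# Doubling time for a population growing at a constant annual rate r.
r = 0.02                              # 2% growth per year (assumed)
doubling_time = math.log(2) / r       # exact continuous-growth form
rule_of_70 = 70 / (r * 100)           # common approximation
print(f"Doubling time: {doubling_time:.1f} years (rule of 70: {rule_of_70:.1f})")
```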
Population pyramid In stage two, the pyramid looks similar, but starts to widen in the middle age groups. In stage three, the pyramids start to round out and look similar in shape to a tombstone. In stage four, there is a decrease in the younger age groups. This causes the base of the widened pyramid to narrow. Lastly, in stage five, the pyramid starts to take on the shape of a kite as the base continues to decrease. The shape of the population pyramid depends on the state of the country's economy. More developed countries can be found in stages three, four, and five, while the least developed countries have a population represented by the pyramids in stages one and two. Each country will have a different or unique population pyramid. However, population pyramids are generally classified as stationary, expansive, or constrictive. These types are identified by the fertility and mortality rates of a country. Gary Fuller (1995) described the youth bulge as a type of expansive pyramid. Gunnar Heinsohn (2003) argues that an excess in especially young adult male population predictably leads to social unrest, war and terrorism, as the "third and fourth sons" that find no prestigious positions in their existing societies rationalize their impetus to compete by religion or political ideology | https://en.wikipedia.org/wiki?curid=989011 |
Population pyramid Heinsohn claims that most historical periods of social unrest lacking external triggers (such as rapid climatic changes or other catastrophic changes of the environment) and most genocides can be readily explained as a result of a built-up youth bulge. This factor has also been used to account for the Arab Spring events. Economic recessions, such as the Great Depression of the 1930s and the late 2000s recession, have also been attributed in part to a large youth population that cannot find jobs. The youth bulge can be seen as one factor among many in explaining social unrest and uprisings in society. A 2016 study finds that youth bulges increase the chances of non-ethnic civil wars, but not ethnic civil wars. A large population of adolescents entering the labor force and electorate strains at the seams of the economy and polity, which were designed for smaller populations. This creates unemployment and alienation unless new opportunities are created quickly enough – in which case a 'demographic dividend' accrues because productive workers outweigh young and elderly dependents. Yet the 16–29 age range is associated with risk-taking, especially among males. In general, youth bulges in developing countries are associated with higher unemployment and, as a result, a heightened risk of violence and political instability. For Cincotta and Doces (2011), the transition to more mature age structures is almost a sine qua non for democratization | https://en.wikipedia.org/wiki?curid=989011 |
Population pyramid To reverse the effects of youth bulges, specific policies such as creating more jobs, improving family planning programs, and reducing overall infant mortality rates should be a priority. The Middle East and North Africa are currently experiencing a prominent youth bulge. "Across the Middle East, countries have experienced a pronounced increase in the size of their youth populations over recent decades, both in total numbers and as a percentage of the total population. Today, the nearly 111 million individuals aging between 15 to 29 living across the region make up nearly 27 percent of the region’s population." Structural changes in service provision, especially health care, beginning in the 1960s created the conditions for a demographic explosion, which has resulted in a population consisting primarily of younger people. It is estimated that around 65% of the regional population is under the age of 25. The Middle East has invested more in education, including religious education, than most other regions, such that education is available to most children. However, that education has not led to higher levels of employment, and youth unemployment is currently at 25%, the highest of any single region. Of this 25%, over half are first-time entrants into the job market. The youth bulge in the Middle East and North Africa has been favorably compared to that of East Asia, which harnessed this human capital and saw huge economic growth in recent decades | https://en.wikipedia.org/wiki?curid=989011 |
Population pyramid The youth bulge has been referred to by the Middle East Youth Initiative as a demographic gift, which, if engaged, could fuel regional economic growth and development. "While the growth of the youth population imposes supply pressures on education systems and labor markets, it also means that a growing share of the overall population is made up of those considered to be of working age; and thus not dependent on the economic activity of others. In turn, this declining dependency ratio can have a positive impact on overall economic growth, creating a demographic dividend. The ability of a particular economy to harness this dividend, however, is dependent on its ability to ensure the deployment of this growing working-age population towards productive economic activity, and to create the jobs necessary for the growing labor force." | https://en.wikipedia.org/wiki?curid=989011 |
Moving average In statistics, a moving average (rolling average or running average) is a calculation to analyze data points by creating a series of averages of different subsets of the full data set. It is also called a moving mean (MM) or rolling mean and is a type of finite impulse response filter. Variations include simple, cumulative, and weighted forms (described below). Given a series of numbers and a fixed subset size, the first element of the moving average is obtained by taking the average of the initial fixed subset of the number series. Then the subset is modified by "shifting forward"; that is, excluding the first number of the series and including the next value in the subset. A moving average is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. The threshold between short-term and long-term depends on the application, and the parameters of the moving average will be set accordingly. For example, it is often used in technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series. Mathematically, a moving average is a type of convolution and so it can be viewed as an example of a low-pass filter used in signal processing. When used with non-time series data, a moving average filters higher frequency components without any specific connection to time, although typically some kind of ordering is implied | https://en.wikipedia.org/wiki?curid=990809 |
Moving average Viewed simplistically it can be regarded as smoothing the data. In financial applications a simple moving average (SMA) is the unweighted mean of the previous "n" data. However, in science and engineering, the mean is normally taken from an equal number of data on either side of a central value. This ensures that variations in the mean are aligned with the variations in the data rather than being shifted in time. An example of a simple equally weighted running mean for an "n"-day sample of closing prices is the mean of the previous "n" days' closing prices. If those prices are formula_1 then the formula is When calculating successive values, a new value comes into the sum, and the oldest value drops out, meaning that a full summation each time is unnecessary for this simple case: The period selected depends on the type of movement of interest, such as short, intermediate, or long-term. In financial terms, moving-average levels can be interpreted as support in a falling market or resistance in a rising market. If the data used are not centered around the mean, a simple moving average lags behind the latest datum point by half the sample width. An SMA can also be disproportionately influenced by old datum points dropping out or new data coming in. One characteristic of the SMA is that if the data have a periodic fluctuation, then applying an SMA of that period will eliminate that variation (the average always containing one complete cycle). But a perfectly regular cycle is rarely encountered | https://en.wikipedia.org/wiki?curid=990809 |
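The incremental update described above (the newest value enters the sum, the oldest drops out, and the total is divided by the window length) can be written directly. This is a minimal sketch with made-up prices, not tied to any particular charting library.

```python
from collections import deque

def simple_moving_average(prices, n):
    """Yield the n-period SMA, maintaining a running sum instead of
    re-summing the whole window at every step."""
    window = deque()
    running_sum = 0.0
    for p in prices:
        window.append(p)
        running_sum += p
        if len(window) > n:
            running_sum -= window.popleft()   # the oldest value drops out
        if len(window) == n:
            yield running_sum / n

closing_prices = [10, 11, 12, 13, 12, 11, 10]  # hypothetical data
print(list(simple_moving_average(closing_prices, 3)))
# [11.0, 12.0, 12.33..., 12.0, 11.0]
```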
Moving average For a number of applications, it is advantageous to avoid the shifting induced by using only "past" data. Hence a central moving average can be computed, using data equally spaced on either side of the point in the series where the mean is calculated. This requires using an odd number of datum points in the sample window. A major drawback of the SMA is that it lets through a significant amount of the signal shorter than the window length. Worse, it "actually inverts it". This can lead to unexpected artifacts, such as peaks in the smoothed result appearing where there were troughs in the data. It also leads to the result being less smooth than expected, since some of the higher frequencies are not properly removed. In a cumulative moving average (CMA), the data arrive in an ordered datum stream, and the user would like to get the average of all of the data up until the current datum point. For example, an investor may want the average price of all of the stock transactions for a particular stock up until the current time. As each new transaction occurs, the average price at the time of the transaction can be calculated for all of the transactions up to that point using the cumulative average, typically an equally weighted average of the sequence of "n" values formula_4 up to the current time: A weighted average is an average that has multiplying factors to give different weights to data at different positions in the sample window | https://en.wikipedia.org/wiki?curid=990809 |
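The cumulative average can likewise be updated in constant time as each new datum arrives, since CMA(n+1) = CMA(n) + (x(n+1) − CMA(n))/(n+1). The sketch below assumes a simple stream of transaction prices.

```python
def cumulative_moving_average(stream):
    """Yield the average of all data seen so far, updated incrementally."""
    cma = 0.0
    for n, x in enumerate(stream, start=1):
        cma += (x - cma) / n      # fold in the new datum
        yield cma

prices = [100, 102, 101, 105]     # hypothetical transaction prices
print(list(cumulative_moving_average(prices)))
# [100.0, 101.0, 101.0, 102.0]
```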
Moving average Mathematically, the weighted moving average is the convolution of the datum points with a fixed weighting function. One application is removing pixelisation from a digital graphical image. In technical analysis of financial data, a weighted moving average (WMA) has the specific meaning of weights that decrease in arithmetical progression. In an "n"-day WMA the latest day has weight "n", the second latest "n" − 1, etc., down to one. The denominator is a triangle number equal to "n"("n" + 1)/2. In the more general case the denominator will always be the sum of the individual weights. When calculating the WMA across successive values, the difference between the numerators of WMA_(M+1) and WMA_M is "n"·"p"_(M+1) − "p"_M − ⋅⋅⋅ − "p"_(M−n+1). If we denote the sum "p"_M + ⋅⋅⋅ + "p"_(M−n+1) by Total_M, then The graph at the right shows how the weights decrease, from highest weight for the most recent datum points, down to zero. It can be compared to the weights in the exponential moving average which follows. An exponential moving average (EMA), also known as an exponentially weighted moving average (EWMA), is a first-order infinite impulse response filter that applies weighting factors which decrease exponentially. The weighting for each older datum decreases exponentially, never reaching zero. The graph at right shows an example of the weight decrease | https://en.wikipedia.org/wiki?curid=990809 |
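A sketch of the "n"-day WMA described above, with weights n, n − 1, ..., 1 and the triangular-number denominator n(n + 1)/2; the prices are illustrative only.

```python
def weighted_moving_average(prices, n):
    """n-day WMA: the latest day has weight n, the day before n - 1, ..., down to 1."""
    denominator = n * (n + 1) / 2                # triangle number
    result = []
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1 : i + 1]       # oldest ... newest
        numerator = sum(w * p for w, p in zip(range(1, n + 1), window))
        result.append(numerator / denominator)
    return result

prices = [10, 11, 12, 13]                        # hypothetical closing prices
print(weighted_moving_average(prices, 3))
# [(1*10 + 2*11 + 3*12)/6, (1*11 + 2*12 + 3*13)/6] = [11.33..., 12.33...]
```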
Moving average The EMA for a series "Y" may be calculated recursively: Where: "S" may be initialized in a number of different ways, most commonly by setting "S" to "Y" as shown above, though other techniques exist, such as setting "S" to an average of the first 4 or 5 observations. The importance of the "S" initialisation's effect on the resultant moving average depends on "α"; smaller "α" values make the choice of "S" relatively more important than larger "α" values, since a higher "α" discounts older observations faster. Whatever is done for "S", it assumes something about values prior to the available data and is necessarily in error. In view of this, the early results should be regarded as unreliable until the iterations have had time to converge. This is sometimes called a 'spin-up' interval. One way to assess when it can be regarded as reliable is to consider the required accuracy of the result. For example, if 3% accuracy is required, initialising with "Y" and taking data after five time constants (defined above) will ensure that the calculation has converged to within 3% (only <3% of "Y" will remain in the result). Sometimes with very small alpha, this can mean little of the result is useful. This is analogous to the problem of using a convolution filter (such as a weighted average) with a very long window. This formulation is according to Hunter (1986) | https://en.wikipedia.org/wiki?curid=990809 |
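A minimal recursive EMA along the lines described above, initialising "S" with the first observation; the smoothing factor and the price series are assumptions chosen only for illustration.

```python
def exponential_moving_average(prices, alpha):
    """EMA computed recursively: S_t = alpha * Y_t + (1 - alpha) * S_{t-1}."""
    s = prices[0]                  # initialise with the first observation
    ema = [s]
    for y in prices[1:]:
        s = alpha * y + (1 - alpha) * s
        ema.append(s)
    return ema

prices = [22.0, 22.5, 22.3, 22.8, 23.1]          # hypothetical closing prices
print(exponential_moving_average(prices, alpha=0.5))
# The early values depend heavily on the initialisation (the 'spin-up' interval)
# and should be treated as unreliable until enough observations are absorbed.
```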
Moving average By repeated application of this formula for different times, we can eventually write "S" as a weighted sum of the datum points "Y", as: for any suitable "k" ∈ {0, 1, 2, ...} The weight of the general datum point "Y"_(t−i) is "α"(1 − "α")^i. This formula can also be expressed in technical analysis terms as follows, showing how the EMA steps towards the latest datum point, but only by a proportion of the difference (each time): Expanding out "S"_(t−1) each time results in the following power series, showing how the weighting factor on each datum point "p"_t, "p"_(t−1), etc., decreases exponentially: where since formula_21. It can also be calculated recursively without introducing the error when initializing the first estimate (n starts from 1): This is an infinite sum with decreasing terms. The question of how far back to go for an initial value depends, in the worst case, on the data. Large price values in old data will affect the total even if their weighting is very small. If prices have small variations then just the weighting can be considered. The power formula above gives a starting value for a particular day, after which the successive days formula shown first can be applied. The weight omitted by stopping after "k" terms is "α"[(1 − "α")^"k" + (1 − "α")^("k"+1) + ⋅⋅⋅], which is "α"(1 − "α")^"k"[1 + (1 − "α") + ⋅⋅⋅] = (1 − "α")^"k", i.e. a fraction (1 − "α")^"k" out of the total weight. For example, to have 99.9% of the weight, set the above ratio equal to 0.1% and solve for "k": "k" = ln(0.001)/ln(1 − "α") determines how many terms should be used. Since ln(1 − "α") approaches −"α" as "α" tends to zero, we know "k" approaches −ln(0.001)/"α" ≈ 6.908/"α" as N increases | https://en.wikipedia.org/wiki?curid=990809 |
Moving average This gives "k" ≈ 6.908/"α". When "α" is related to "N" via "α" = 2/("N" + 1), this simplifies to approximately "k" ≈ 3.45("N" + 1) for this example (99.9% weight). Note that there is no "accepted" value that should be chosen for "α", although there are some recommended values based on the application. A commonly used value for "α" is "α" = 2/("N" + 1). This is because the weights of an SMA and EMA have the same "center of mass" when "α" = 2/("N" + 1). The weights of an "N"-day SMA have a "center of mass" on the ("N" + 1)/2-th day. For the remainder of this proof we will use one-based indexing. Meanwhile, the weights of an EMA have center of mass Σ "k"·"α"(1 − "α")^("k"−1), summed over "k" ≥ 1. We also know the Maclaurin series 1/(1 − "x") = Σ "x"^"k". Taking derivatives of both sides with respect to "x" gives 1/(1 − "x")² = Σ "k"·"x"^("k"−1). Substituting "x" = 1 − "α", we get Σ "k"·(1 − "α")^("k"−1) = 1/"α"², so the center of mass of the EMA weights is "α"·(1/"α"²) = 1/"α". So the value of "α" that sets 1/"α" = ("N" + 1)/2 is, in fact, "α" = 2/("N" + 1). And so "α" = 2/("N" + 1) is the value of "α" that creates an EMA whose weights have the same center of gravity as would the equivalent N-day SMA. This is also why sometimes an EMA is referred to as an "N"-day EMA. Despite the name suggesting there are "N" periods, the terminology only specifies the "α" factor. "N" is not a stopping point for the calculation in the way it is in an SMA or WMA. For sufficiently large "N", the first "N" datum points in an EMA represent about 86% of the total weight in the calculation when "α" = 2/("N" + 1): The sum of the weights of all the terms (i.e., infinite number of terms) in an exponential moving average is 1 | https://en.wikipedia.org/wiki?curid=990809 |
Moving average The sum of the weights of the first "N" terms is 1 − (1 − "α")^"N". Both of these sums can be derived by using the formula for the sum of a geometric series. The weight omitted after "N" terms is given by subtracting this from 1, and you get (1 − "α")^"N" (this is essentially the formula given previously for the weight omitted). We now substitute the commonly used value "α" = 2/("N" + 1) in the formula for the weight of the first "N" terms. If you make this substitution, and you make use of the limit (1 − 2/("N" + 1))^"N" → e^−2 for large "N", then you get the 0.8647 approximation. Intuitively, what this is telling us is that the weight of the first "N" terms of an "N"-period exponential moving average converges to 0.8647. The designation of "α" = 2/("N" + 1) is not a requirement. (For example, a similar proof could be used to just as easily determine the "α" of an EMA with the same "half-life" as an "N"-day SMA.) In fact, 2/(N+1) is merely a common convention to form an intuitive understanding of the relationship between EMAs and SMAs, for industries where both are commonly used together on the same datasets. In reality, an EMA with any value of "α" can be used, and can be named either by stating the value of "α", or with the more familiar "N"-day EMA terminology letting "N" = 2/"α" − 1. In addition to the mean, we may also be interested in the variance and in the standard deviation to evaluate the statistical significance of a deviation from the mean. EWMVar can be computed easily along with the moving average | https://en.wikipedia.org/wiki?curid=990809 |
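Both claims above (that α = 2/(N + 1) matches the center of mass of the N-day SMA weights, and that the first N weights then sum to roughly 0.8647 for large N) are easy to check numerically; the sketch below is only a verification of the stated results, not part of the original derivation.

```python
N = 50
alpha = 2 / (N + 1)

# Center of mass of the EMA weights: sum of k * alpha * (1 - alpha)**(k - 1).
# The infinite sum is truncated once the remaining weight is negligible.
center_of_mass = sum(k * alpha * (1 - alpha) ** (k - 1) for k in range(1, 10_000))
print(center_of_mass, (N + 1) / 2)       # both are approximately 25.5

# Total weight carried by the first N terms.
first_n_weight = 1 - (1 - alpha) ** N
print(first_n_weight)                    # approaches 1 - exp(-2), about 0.8647, as N grows
```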
Moving average The starting values are formula_79 and formula_80, and we then compute the subsequent values using: formula_81 From this, the exponentially weighted moving standard deviation can be computed as formula_82. We can then use the standard score to normalize data with respect to the moving average and variance. This algorithm is based on Welford's algorithm for computing the variance. A modified moving average (MMA), running moving average (RMA), or smoothed moving average (SMMA) is defined as: In short, this is an exponential moving average, with "α" = 1/"N". Some computer performance metrics, e.g. the average process queue length, or the average CPU utilization, use a form of exponential moving average. Here "α" is defined as a function of the time between two readings. An example of a coefficient giving bigger weight to the current reading, and smaller weight to the older readings is where exp() is the exponential function, time for readings "t" is expressed in seconds, and "W" is the period of time in minutes over which the reading is said to be averaged (the mean lifetime of each reading in the average). Given the above definition of "α", the moving average can be expressed as For example, a 15-minute average "L" of a process queue length "Q", measured every 5 seconds (time difference is 5 seconds), is computed as Other weighting systems are used occasionally – for example, in share trading a volume weighting will weight each time period in proportion to its trading volume | https://en.wikipedia.org/wiki?curid=990809 |
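The exponentially weighted moving mean and variance can be tracked together in a single pass, in the spirit of the Welford-style updates mentioned above. This is a sketch of the usual update rules, not any particular library's API; the data and smoothing factor are invented.

```python
import math

def ewm_mean_var(stream, alpha):
    """Track the exponentially weighted moving mean and variance in one pass.

    The variance is damped by (1 - alpha) and corrected with the product of the
    pre-update deviation and the mean increment, as in the usual EWMVar update.
    """
    mean, var = None, 0.0
    for x in stream:
        if mean is None:
            mean = float(x)                # initialise with the first observation
            continue
        diff = x - mean
        incr = alpha * diff
        mean += incr
        var = (1 - alpha) * (var + diff * incr)
        yield mean, var, math.sqrt(var)    # mean, EWMVar, EWM standard deviation

for m, v, s in ewm_mean_var([10, 12, 11, 13, 12], alpha=0.2):
    print(round(m, 3), round(v, 3), round(s, 3))
```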
Moving average A further weighting, used by actuaries, is Spencer's 15-Point Moving Average (a central moving average). The symmetric weight coefficients are −3, −6, −5, 3, 21, 46, 67, 74, 67, 46, 21, 3, −5, −6, −3. Outside the world of finance, weighted running means have many forms and applications. Each weighting function or "kernel" has its own characteristics. In engineering and science the frequency and phase response of the filter is often of primary importance in understanding the desired and undesired distortions that a particular filter will apply to the data. A mean does not just "smooth" the data. A mean is a form of low-pass filter. The effects of the particular filter used should be understood in order to make an appropriate choice. On this point, the French version of this article discusses the spectral effects of 3 kinds of means (cumulative, exponential, Gaussian). From a statistical point of view, the moving average, when used to estimate the underlying trend in a time series, is susceptible to rare events such as rapid shocks or other anomalies. A more robust estimate of the trend is the simple moving median over "n" time points: where the median is found by, for example, sorting the values inside the brackets and finding the value in the middle. For larger values of "n", the median can be efficiently computed by updating an indexable skiplist. Statistically, the moving average is optimal for recovering the underlying trend of the time series when the fluctuations about the trend are normally distributed | https://en.wikipedia.org/wiki?curid=990809 |
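A simple moving median over "n" points, found by sorting each window as described above; for large "n" an indexable skiplist (or a pair of heaps) would be more efficient, but this sketch keeps to the sorting definition and uses invented data.

```python
import statistics

def moving_median(values, n):
    """Median of each length-n window; robust to occasional large shocks."""
    return [statistics.median(values[i:i + n]) for i in range(len(values) - n + 1)]

data = [1, 2, 3, 100, 4, 5, 6]          # a single spike at position 3
print(moving_median(data, 3))
# [2, 3, 4, 5, 5] -- the spike barely moves the median, whereas a moving mean
# over the same windows would jump to roughly 35 when the spike is included.
```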
Moving average However, the normal distribution does not place high probability on very large deviations from the trend which explains why such deviations will have a disproportionately large effect on the trend estimate. It can be shown that if the fluctuations are instead assumed to be Laplace distributed, then the moving median is statistically optimal. For a given variance, the Laplace distribution places higher probability on rare events than does the normal, which explains why the moving median tolerates shocks better than the moving mean. When the simple moving median above is central, the smoothing is identical to the median filter which has applications in, for example, image signal processing. In a moving average regression model, a variable of interest is assumed to be a weighted moving average of unobserved independent error terms; the weights in the moving average are parameters to be estimated. Those two concepts are often confused due to their name, but while they share many similarities, they represent distinct methods and are used in very different contexts. | https://en.wikipedia.org/wiki?curid=990809 |
John R. Commons John Rogers Commons (October 13, 1862 – May 11, 1945) was an American institutional economist, Georgist, progressive and labor historian at the University of Wisconsin–Madison. He was born in Hollansburg, Ohio, on October 13, 1862. Commons had a religious upbringing which led him to be an advocate for social justice early in life. Commons was considered a poor student and suffered from a mental illness while studying. He was allowed to graduate without finishing because of the potential seen in his intense determination and curiosity. At this time, Commons became a follower of Henry George's 'single tax' economics. He carried this 'Georgist' or 'Ricardian' approach to economics, with a focus on land and monopoly rents, throughout the rest of his life, including a proposal for income taxes with higher rates on land rents. After graduating from Oberlin College, Commons did two years of graduate studies at Johns Hopkins University, where he studied under Richard T. Ely, but left without a degree. After appointments at Oberlin and Indiana University, Commons began teaching at Syracuse University in 1895. In spring 1899, Syracuse dismissed him as a radical. Eventually Commons re-entered academia at the University of Wisconsin in 1904. Commons' early work exemplified his desire to unite Christian ideals with the emerging social sciences of sociology and economics | https://en.wikipedia.org/wiki?curid=991546 |
John R. Commons He was a frequent contributor to "Kingdom" magazine, was a founder of the American Institute for Christian Sociology, and authored a book in 1894 called "Social Reform and the Church." He was an advocate of temperance legislation and was active in the national Prohibition Party. By his Wisconsin years, Commons' scholarship had become less moralistic and more empirical, and he moved away from a religious viewpoint in his ethics and sociology. Commons is best known for developing an analysis of collective action by the state and other institutions, which he saw as essential to understanding economics. Commons believed that carefully crafted legislation could create social change; that view led him to be known as a socialist radical and incrementalist. Contrary to some published accounts, Commons did consider African Americans capable of voting. When he advocated proportional representation, he suggested a "negro party". He even suggested applying the Thirteenth Amendment to the Constitution to force Southern States to allow African Americans to vote. He continued the strong American tradition in institutional economics of such figures as the economist and social theorist Thorstein Veblen. His notion of transaction is one of the most important contributions to institutional economics. The institutional theory was closely related to his remarkable successes in fact-finding and drafting legislation on a wide range of social issues for the state of Wisconsin | https://en.wikipedia.org/wiki?curid=991546 |
John R. Commons He drafted legislation establishing Wisconsin's workers' compensation program, the first of its kind in the United States. In 1934, Commons published "Institutional Economics", which laid out his view that institutions were made up of collective actions that, along with conflict of interests, defined the economy. He believed that institutional economics added collective control of individual transactions to existing economic theory. Commons considered the Scottish economist Henry Dunning Macleod to be the "originator" of institutional economics. Commons was a contributor to The Pittsburgh Survey, a 1907 sociological investigation of a single American city. His graduate student, John A. Fitch, wrote "The Steel Workers", a classic depiction of a key industry in early 20th-century America. It was one of six key texts to come out of the survey. Edwin E. Witte, later known as the "father of social security", also did his PhD at the University of Wisconsin–Madison under Commons. Commons was a leading advocate of proportional representation in the United States, writing a book on the subject in 1907 and serving as vice-president of the Proportional Representation League. Commons undertook two major studies of the history of labor unions in the United States. Beginning in 1910, he edited "A Documentary History of American Industrial Society," a large work that preserved many original-source documents of the American labor movement | https://en.wikipedia.org/wiki?curid=991546 |
John R. Commons Almost as soon as that work was complete, Commons began editing "History of Labor in the United States", a narrative work which built on the previous 10-volume documentary history. He died on May 11, 1945. Today, Commons's contribution to labor history is considered equal to his contributions to the theory of institutional economics. He also made valuable contributions to the history of economic thought, especially with regard to collective action. He is honored at the University of Wisconsin in Madison with rooms and clubs named for him. His former home, now known as the House, is listed on the National Register of Historic Places. Commons, John R. 1900. "Representative Democracy". New York: American Bureau of Economic Research. Available at https://babel.hathitrust.org/cgi/pt?id=coo.31924032462842&view=1up&seq=18 | https://en.wikipedia.org/wiki?curid=991546 |
Liquid capital Liquid capital, or fluid capital, is a readily convertible asset, such as money or other bearer economic instruments, as opposed to a long-term asset like real estate. Liquid capital may be held by individuals, companies, or governments. Globalization means that developing countries have easier access to liquid capital from around the world, but if a country becomes too dependent on foreign liquid capital, any political or economic difficulties can be exacerbated by capital flight. George Soros is a well-known holder of international liquid capital, who made substantial profits by speculating on currencies and monetary policy in countries such as the United Kingdom in 1992, Thailand, and Hong Kong. | https://en.wikipedia.org/wiki?curid=992568 |
Poverty trap A poverty trap is a self-reinforcing mechanism which causes poverty to persist. If it persists from generation to generation, the effect can reinforce itself as a "cycle of poverty", if steps are not taken to break the trap. In the developing world, many factors can contribute to a poverty trap, including: limited access to credit and capital markets, extreme environmental degradation (which depletes agricultural production potential), corrupt governance, capital flight, poor education systems, disease ecology, lack of public health care, war and poor infrastructure. Jeffrey Sachs, in his book "The End of Poverty", discusses the poverty trap and prescribes a set of policy initiatives intended to end the trap. He recommends that aid agencies behave as venture capitalists funding start-up companies. Venture capitalists, once they choose to invest in a venture, do not give only half or a third of the amount they feel the venture needs in order to become profitable; if they did, their money would be wasted. If all goes as planned, the venture will eventually become profitable and the venture capitalist will experience an adequate rate of return on investment. Likewise, Sachs proposes, developed countries cannot give only a fraction of what is needed in aid and expect to reverse the poverty trap in Africa. Just like any other start-up, developing nations absolutely must receive the amount of aid necessary (and promised at the G-8 Summit in 2005) for them to begin to reverse the poverty trap | https://en.wikipedia.org/wiki?curid=996078 |
Poverty trap The problem is that unlike start-ups, which simply go bankrupt if they fail to receive funding, in Africa people continue to die at a high rate due in large part to lack of sufficient aid. Sachs points out that the extreme poor lack six major kinds of capital: human capital, business capital, infrastructure, natural capital, public institutional capital, and knowledge capital. He then details the poverty trap: The poor start with a very low level of capital per person, and then find themselves trapped in poverty because the ratio of capital per person actually falls from generation to generation. The amount of capital per person declines when the population is growing faster than capital is being accumulated ... The question for growth in per capita income is whether the net capital accumulation is large enough to keep up with population growth. Sachs argues that sufficient foreign aid can make up for the lack of capital in poor countries, maintaining that, "If the foreign assistance is substantial enough, and lasts long enough, the capital stock rises sufficiently to lift households above subsistence | https://en.wikipedia.org/wiki?curid=996078 |
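Sachs' point that capital per person falls whenever population growth outruns capital accumulation can be illustrated with a toy calculation; the growth rates below are invented and are not taken from the book.

```python
# Toy illustration: capital per person shrinks when population growth
# outpaces capital accumulation, and rises when the relationship is reversed.
capital, population = 1_000_000.0, 10_000.0
capital_growth, population_growth = 0.01, 0.03   # hypothetical annual rates

for year in range(5):
    print(f"year {year}: capital per person = {capital / population:,.1f}")
    capital *= 1 + capital_growth
    population *= 1 + population_growth
# Capital per person falls each year; swapping the two rates reverses the trend.
```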
Poverty trap " Sachs believes the public sector should focus mainly on investments in human capital (health, education, nutrition), infrastructure (roads, power, water and sanitation, environmental conservation), natural capital (conservation of biodiversity and ecosystems), public institutional capital (a well-run public administration, judicial system, police force), and parts of knowledge capital (scientific research for health, energy, agriculture, climate, ecology). Sachs leaves business capital investments to the private sector, which he claims would more efficiently use funding to develop the profitable enterprises necessary to sustain growth. In this sense, Sachs views public institutions as useful in providing the public goods necessary to begin the Rostovian take-off model, but maintains that private goods are more efficiently produced and distributed by private enterprise. This is a widespread view in neoclassical economics | https://en.wikipedia.org/wiki?curid=996078 |
Poverty trap Several other forms of poverty traps are discussed in the literature, including nations being landlocked with bad neighbors; a vicious cycle of violent conflict; subsistence traps in which farmers wait for middlemen before they specialize but middlemen wait for a region to specialize first; working capital traps in which petty sellers have inventories too sparse to earn enough money to get a bigger inventory; low skill traps in which workers wait for jobs using special skill but firms wait for workers to get such skills; nutritional traps in which individuals are too malnourished to work, yet too poor to afford sustainable food; and behavioral traps in which individuals cannot differentiate between temptation and non-temptation goods, and therefore cannot invest in the non-temptation goods which could help them begin to escape poverty. | https://en.wikipedia.org/wiki?curid=996078 |
Green economy The green economy is defined as an economy that aims at reducing environmental risks and ecological scarcities, and that aims for sustainable development without degrading the environment. It is closely related to ecological economics, but has a more politically applied focus. The 2011 UNEP Green Economy Report argues "that to be green, an economy must not only be efficient, but also fair. Fairness implies recognizing global and country level equity dimensions, particularly in assuring a just transition to an economy that is low-carbon, resource efficient, and socially inclusive." A feature distinguishing it from prior economic regimes is the direct valuation of natural capital and ecological services as having economic value ("see The Economics of Ecosystems and Biodiversity and Bank of Natural Capital") and a full cost accounting regime in which costs externalized onto society via ecosystems are reliably traced back to, and accounted for as liabilities of, the entity that does the harm or neglects an asset. Green Sticker and ecolabel practices have emerged as consumer-facing measurements of friendliness to the environment and sustainable development. Many industries are starting to adopt these standards as a viable way to promote their greening practices in a globalizing economy. "Green economics" is loosely defined as any theory of economics by which an economy is considered to be a component of the ecosystem in which it resides (after Lynn Margulis) | https://en.wikipedia.org/wiki?curid=996699 |
Green economy A holistic approach to the subject is typical, such that economic ideas are commingled with any number of other subjects, depending on the particular theorist. Proponents of feminism, postmodernism, the environmental movement, the peace movement, Green politics, green anarchism and the anti-globalization movement have used the term to describe very different ideas, all external to mainstream economics. The use of the term is further blurred by the political distinction of Green parties, which are formally organized and claim the capital-G "Green" term as a unique and distinguishing mark. It is thus preferable to refer to a loose school of "green economists" who generally advocate shifts towards a green economy, biomimicry and a fuller accounting for biodiversity. ("see The Economics of Ecosystems and Biodiversity especially for current authoritative international work towards these goals and Bank of Natural Capital for a layperson's presentation of these.") Some economists view green economics as a branch or subfield of more established schools. For instance, it is regarded as classical economics in which the traditional factor of land is generalized to natural capital and has some attributes in common with labor and physical capital (since natural capital assets like rivers directly substitute for man-made ones such as canals) | https://en.wikipedia.org/wiki?curid=996699 |
Green economy Or, it is viewed as Marxist economics with nature represented as a form of Lumpenproletariat, an exploited base of non-human workers providing surplus value to the human economy, or as a branch of neoclassical economics in which the price of life for developing vs. developed nations is held steady at a ratio reflecting a balance of power and that of non-human life is very low. An increasing commitment by the UNEP (and national governments such as the UK) to the ideas of natural capital and full cost accounting under the banner 'green economy' could blur distinctions between the schools and redefine them all as variations of "green economics". As of 2010, the Bretton Woods institutions responsible for global monetary policy (notably the World Bank and the International Monetary Fund, via its "Green Fund" initiative) have stated a clear intention to move towards biodiversity valuation and a more official and universal biodiversity finance. Taking these into account, the Zero Emissions Research and Initiatives promotes targeting not merely reduced but zero emissions and waste. The UNEP 2011 Green Economy Report informs that "based on existing studies, the annual financing demand to green the global economy was estimated to be in the range US$ 1.05 to US$ 2.59 trillion. To place this demand in perspective, it is about one-tenth of total global investment per year, as measured by global Gross Capital Formation | https://en.wikipedia.org/wiki?curid=996699 |
Green economy " Karl Burkart defines a green economy as based on six main sectors: The International Chamber of Commerce (ICC) representing global business defines green economy as “an economy in which economic growth and environmental responsibility work together in a mutually reinforcing fashion while supporting progress on social development”. In 2012, the ICC published the Green Economy Roadmap, containing contributions from experts from around the globe brought together in a two-year consultation process. The Roadmap represents a comprehensive and multidisciplinary effort to clarify and frame the concept of “green economy”. It highlights the essential role of business in bringing solutions to common global challenges. It sets out the following 10 conditions which relate to business/intra-industry and collaborative action for a transition towards a green economy: Green Finance is: "“1. The financing of public and private green investment through blockchain. Green investment include but is not limited to environmental goods and services (such as water management or protection of biodiversity and landscapes) prevention, minimization and compensation of damages to the environment and to the climate, components of the financial system that deal specifically with green investments, such as Green Climate Fund or financial instruments for green investments approved by a recognised international green blockchain supervisory body (e.g Fintech Corporation of London, Green Finance International Committee…)." "2 | https://en.wikipedia.org/wiki?curid=996699 |
Green economy It also comprises any project, policies, framework or system participating in the protection and application of inherent moral values (e.g abolition of crime against humanity, slavery, children labour….)."" Measuring economic output and progress is done through the use of economic index indicators. Green indices emerged from the need to measure human ecological impact, efficiency in sectors like transport, energy, buildings and tourism, as well as the investment flows targeted to areas like renewable energy and cleantech innovation. Ecological footprint measurements are a way to gauge anthropogenic impact and are another standard used by municipal governments. Green economies require a transition to green energy generation based on renewable energy to replace fossil fuels as well as energy conservation and efficient energy use. The failure of markets to respond to environmental protection and climate protection needs is attributed to the high external costs and the high initial costs of research, development, and marketing of green energy sources and green products, which prevent firms from voluntarily reducing their ecological footprints. The green economy may need government subsidies as market incentives to motivate firms to invest and produce green products and services. The German Renewable Energy Act, legislation in many other member states of the European Union and the American Recovery and Reinvestment Act of 2009, all provide such market incentives | https://en.wikipedia.org/wiki?curid=996699 |
Green economy However, other experts argue that green strategies can be highly profitable for corporations that understand the business case for sustainability and can market green products and services beyond the traditional green consumer. A number of organisations and individuals have criticised aspects of the 'Green Economy', particularly the mainstream conceptions of it based on using price mechanisms to protect nature, arguing that this will extend corporate control into new areas from forestry to water. The research organisation ETC Group argues that the corporate emphasis on bio-economy "will spur even greater convergence of corporate power and unleash the most massive resource grab in more than 500 years." Venezuelan professor Edgardo Lander says that the UNEP's report, "Towards a Green Economy", while well-intentioned "ignores the fact that the capacity of existing political systems to establish regulations and restrictions to the free operation of the markets – even when a large majority of the population call for them – is seriously limited by the political and financial power of the corporations." Ulrich Hoffmann, in a paper for UNCTAD also says that the focus on Green Economy and "green growth" in particular, "based on an evolutionary (and often reductionist) approach will not be sufficient to cope with the complexities of climate change" and "may rather give much false hope and excuses to do nothing really fundamental that can bring about a U-turn of global greenhouse gas emissions | https://en.wikipedia.org/wiki?curid=996699 |
Green economy Clive Spash, an ecological economist, has criticised the use of economic growth to address environmental losses, and argued that the Green Economy, as advocated by the UN, is not a new approach at all and is actually a diversion from the real drivers of environmental crisis. He has also criticised the UN's project on the economics of ecosystems and biodiversity (TEEB), and the basis for valuing ecosystems services in monetary terms. | https://en.wikipedia.org/wiki?curid=996699 |
United Nations Economic Commission for Latin America and the Caribbean The United Nations Economic Commission for Latin America and the Caribbean, known as ECLAC, UNECLAC or in Spanish and Portuguese CEPAL, is a United Nations regional commission to encourage economic cooperation. ECLAC includes 46 member States (20 in Latin America, 13 in the Caribbean and 13 from outside the region), and 13 associate members which are various non-independent territories, associated island countries and a commonwealth in the Caribbean. ECLAC publishes statistics covering the countries of the region and makes cooperative agreements with nonprofit institutions. ECLAC's headquarters is in Santiago, Chile. ECLAC was established in 1948 as the UN Economic Commission for Latin America, or UNECLA. In 1984, a resolution was passed to include the countries of the Caribbean in the name. It reports to the UN Economic and Social Council (ECOSOC). The formation of the United Nations Economic Commission for Latin America was crucial to the beginning of "Big D development". Many economic scholars attribute the subsequent debates on structuralism and dependency theory to the founding of ECLA and its policy implementation in Latin America. Although it was formed in the post-war period, ECLA has historic roots that trace back to political movements that began long before the war. Before World War II, the perception of economic development in Latin America was formulated primarily from colonial ideology | https://en.wikipedia.org/wiki?curid=998073 |
United Nations Economic Commission for Latin America and the Caribbean This perception, combined with the Monroe Doctrine that asserted the United States as the only foreign power that could intervene in Latin American affairs, led to substantial resentment in Latin America. In the eyes of those living on the continent, Latin America was economically strong; most people had livable wages and industry was relatively dynamic. This concern about the need for economic restructuring was taken up by the League of Nations and manifested in a document drawn up by Stanley Bruce and presented to the League in 1939. This in turn strongly influenced the creation of the United Nations Economic and Social Committee in 1944. Although it was a largely ineffective policy development initially, the formation of the ECLA proved to have profound effects in Latin America in following decades. For example, by 1955, Peru was receiving $28.5 million in loans per ECLA request. Most of these loans were used to finance foreign exchange costs, creating more jobs and boosting export trade. To investigate the extent to which this aid was supporting industrial development plans in Peru, ECLA was sent in to study its economic structure. In order to maintain its hold over future developmental initiatives, ECLA and its branches continued providing financial support to Peru to assist in the country’s general development | https://en.wikipedia.org/wiki?curid=998073 |
United Nations Economic Commission for Latin America and the Caribbean The terms of trade at this time, set by the United States, introduced the concept of "unequal exchange" in that the so-called "North" mandated prices that allowed it a greater return on its own resources than the "South" obtained on theirs. Thus, although the export sector had grown during this time, certain significant economic and social issues continued to threaten this period of so-called stability. Although real income was on the rise, its distribution was still very uneven. Social problems were still overwhelmingly prevalent; large portions of the population were undernourished and without homes, and the education and health systems were inadequate. | https://en.wikipedia.org/wiki?curid=998073 |
Corporate capitalism In social science and economics, corporate capitalism is a capitalist marketplace characterized by the dominance of hierarchical and bureaucratic corporations. A large proportion of the economy of the United States and its labour market falls within corporate control. In the developed world, corporations dominate the marketplace, comprising 50% or more of all businesses. Those businesses which are not corporations contain the same bureaucratic structure as corporations, but there is usually a sole owner or group of owners who are liable to bankruptcy and criminal charges relating to their business. Corporations have limited liability and remain less regulated and accountable than sole proprietorships. Corporations are usually called public entities or publicly traded entities when parts of their business can be bought in the form of shares on the stock market. This is done as a way of raising capital to finance the investments of the corporation. The shareholders appoint the executives of the corporation, who are the ones running the corporation via a hierarchical chain of power, where the bulk of investor decisions are made at the top and have effects on those beneath them. Corporate capitalism has been criticized for the amount of power and influence corporations and large business interest groups have over government policy, including the policies of regulatory agencies and influencing political campaigns (see corporate welfare) | https://en.wikipedia.org/wiki?curid=998577 |
Corporate capitalism Many social scientists have criticized corporations for failing to act in the interests of the people, and their existence seems to circumvent the principles of democracy, which assumes equal power relations between individuals in a society. In an April 29, 1938 message to the Congress, Franklin D. Roosevelt warned that the growth of private power could lead to fascism: Dwight D. Eisenhower criticized the notion of the confluence of corporate power and "de facto" fascism, but nevertheless brought attention to the "conjunction of an immense military establishment and a large arms industry" in his 1961 Farewell Address to the Nation, and stressed "the need to maintain balance in and among national programs – balance between the private and the public economy, balance between cost and hoped for advantage". | https://en.wikipedia.org/wiki?curid=998577 |
Economic appraisal is a type of decision method applied to a project, programme or policy that takes into account a wide range of costs and benefits, denominated in monetary terms or for which a monetary equivalent can be estimated. It is a key tool for achieving value for money and satisfying requirements for decision accountability. It is a systematic process for examining alternative uses of resources, focusing on assessment of needs, objectives, options, costs, benefits, risks, funding, affordability and other factors relevant to decisions. There are several main types of economic appraisal. Economic appraisal is a methodology designed to assist in defining problems and finding solutions that offer the best value for money (VFM). This is especially important in relation to public expenditure and is often used as a vehicle for planning and approval of public investment relating to policies, programmes and projects. The principles of appraisal are applicable to all decisions, even those concerned with small expenditures. However, the scope of appraisal can also be very wide. Good economic appraisal leads to better decisions and VFM. It facilitates good project management and project evaluation. Appraisal is an essential part of good financial management, and it is vital to decision-making and accountability. | https://en.wikipedia.org/wiki?curid=1003699 |
Value investing is an investment paradigm that involves buying securities that appear underpriced by some form of fundamental analysis. The various forms of value investing derive from the investment philosophy first taught by Benjamin Graham and David Dodd at Columbia Business School in 1928, and subsequently developed in their 1934 text "Security Analysis". The early value opportunities identified by Graham and Dodd included stock in public companies trading at discounts to book value or tangible book value, those with high dividend yields, and those having low price-to-earnings multiples, or low price-to-book ratios. High-profile proponents of value investing, including Berkshire Hathaway chairman Warren Buffett, have argued that the essence of value investing is buying stocks at less than their intrinsic value. The discount of the market price to the intrinsic value is what Benjamin Graham called the "margin of safety". For the last 25 years, under the influence of Charlie Munger, Buffett expanded the value investing concept with a focus on "finding an outstanding company at a sensible price" rather than generic companies at a bargain price. Hedge fund manager Seth Klarman has described value investing as rooted in a rejection of the efficient market hypothesis (EMH). While the EMH proposes that securities are accurately priced based on all available data, value investing proposes that some equities are not accurately priced | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing Graham never used the phrase "value investing" — the term was coined later to help describe his ideas and has resulted in significant misinterpretation of his principles, the foremost being that Graham simply recommended cheap stocks. Today, the Heilbrunn Center is the home of the Value Investing Program at Columbia Business School. Value investing was established by Benjamin Graham and David Dodd, both professors at Columbia Business School and teachers of many famous investors. In Graham's book "The Intelligent Investor", he advocated the important concept of margin of safety — first introduced in "Security Analysis", a 1934 book he co-authored with David Dodd — which calls for an approach to investing that is focused on purchasing equities at prices less than their intrinsic values. In terms of picking or screening stocks, he recommended purchasing firms which have steady profits, are trading at low prices to book value, have low price-to-earnings (P/E) ratios, and which have relatively low debt. However, the concept of value (as well as "book value") has evolved significantly since the 1970s. Book value is most useful in industries where most assets are tangible. Intangible assets such as patents, brands, or goodwill are difficult to quantify, and may not survive the break-up of a company. When an industry is going through fast technological advancements, the value of its assets is not easily estimated | https://en.wikipedia.org/wiki?curid=1011242 |
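Graham's screening criteria mentioned above (steady profits, low price to book value, low P/E, relatively low debt) translate naturally into a stock filter. The thresholds and sample records below are purely illustrative and are not Graham's own numbers.

```python
# Hypothetical screening data: one record per company.
stocks = [
    {"ticker": "AAA", "pe": 9.5, "price_to_book": 0.8, "debt_to_equity": 0.3, "years_profitable": 10},
    {"ticker": "BBB", "pe": 35.0, "price_to_book": 6.0, "debt_to_equity": 1.5, "years_profitable": 4},
    {"ticker": "CCC", "pe": 12.0, "price_to_book": 1.1, "debt_to_equity": 0.4, "years_profitable": 8},
]

def passes_value_screen(s, max_pe=15, max_pb=1.5, max_debt=0.5, min_years=7):
    """Illustrative thresholds only; actual screening criteria vary by practitioner."""
    return (s["pe"] <= max_pe and s["price_to_book"] <= max_pb
            and s["debt_to_equity"] <= max_debt and s["years_profitable"] >= min_years)

print([s["ticker"] for s in stocks if passes_value_screen(s)])   # ['AAA', 'CCC']
```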
Value investing Sometimes, the production power of an asset can be significantly reduced due to competitive disruptive innovation and therefore its value can suffer permanent impairment. One good example of decreasing asset value is a personal computer. An example of where book value does not mean much is the service and retail sectors. One modern model of calculating value is the discounted cash flow model (DCF), in which the value of an asset is the sum of its future cash flows, discounted back to the present (a minimal sketch of this calculation appears after this passage). Value investing has proven to be a successful investment strategy. There are several ways to evaluate its success. One way is to examine the performance of simple value strategies, such as buying low PE ratio stocks, low price-to-cash-flow ratio stocks, or low price-to-book ratio stocks. Numerous academics have published studies investigating the effects of buying value stocks. These studies have consistently found that value stocks outperform growth stocks and the market as a whole. A review of 26 years of data (1990 to 2015) from US markets found that the over-performance of value investing was more pronounced in stocks of smaller and mid-size companies than in those of larger companies and recommended a "value tilt" with greater emphasis on value than growth investing in personal portfolios. Simply examining the performance of the best known value investors would not be instructive, because investors do not become well known unless they are successful. This introduces a selection bias | https://en.wikipedia.org/wiki?curid=1011242 |
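The following is a minimal sketch of the discounted cash flow calculation referred to above. The cash-flow figures, discount rate, and function name are illustrative assumptions rather than a prescribed implementation.

```python
def discounted_cash_flow(cash_flows, discount_rate, terminal_value=0.0):
    """Present value of projected annual cash flows; cash_flows[0] is
    assumed to arrive one year from now."""
    present_value = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        present_value += cf / (1 + discount_rate) ** year
    # Any terminal value is discounted from the final projected year.
    present_value += terminal_value / (1 + discount_rate) ** len(cash_flows)
    return present_value

# Five years of hypothetical cash flows, discounted at 10%:
print(round(discounted_cash_flow([100, 110, 120, 130, 140], 0.10), 2))
```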
Value investing A better way to investigate the performance of a group of value investors was suggested by Warren Buffett, in his May 17, 1984 speech that was published as The Superinvestors of Graham-and-Doddsville. In this speech, Buffett examined the performance of those investors who worked at Graham-Newman Corporation and were thus most influenced by Benjamin Graham. Buffett's conclusion is identical to that of the academic research on simple value investing strategies—value investing is, on average, successful in the long run. During the roughly 25-year period from 1965 to 1990, published research and articles in leading journals on value investing were few. Warren Buffett once commented, "You couldn't advance in a finance department in this country unless you thought that the world was flat." Benjamin Graham is regarded by many to be the father of value investing. Along with David Dodd, he wrote "Security Analysis", first published in 1934. The most lasting contribution of this book to the field of security analysis was to emphasize the quantifiable aspects of security analysis (such as the evaluations of earnings and book value) while minimizing the importance of more qualitative factors such as the quality of a company's management. Graham later wrote "The Intelligent Investor", a book that brought value investing to individual investors. Aside from Buffett, many of Graham's other students, such as William J. Ruane, Irving Kahn, Walter Schloss, and Charles Brandes went on to become successful investors in their own right | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing Irving Kahn was one of Graham's teaching assistants at Columbia University in the 1930s. He was a close friend and confidant of Graham's for decades and made research contributions to Graham's texts "Security Analysis", "Storage and Stability", "World Commodities and World Currencies" and "The Intelligent Investor". Kahn was a partner at various finance firms until 1978 when he and his sons, Thomas Graham Kahn and Alan Kahn, started the value investing firm, Kahn Brothers & Company. Irving Kahn remained chairman of the firm until his death at age 109. Walter Schloss was another Graham-and-Dodd disciple. Schloss never had a formal education. When he was 18, he started working as a runner on Wall Street. He then attended investment courses taught by Ben Graham at the New York Stock Exchange Institute, and eventually worked for Graham in the Graham-Newman Partnership. In 1955, he left Graham's company and set up his own investment firm, which he ran for nearly 50 years. Walter Schloss was one of the investors Warren Buffett profiled in his famous Superinvestors of Graham-and-Doddsville article. Christopher H. Browne of Tweedy, Browne was well known for value investing. According to the "Wall Street Journal", Tweedy, Browne was the favorite brokerage firm of Benjamin Graham during his lifetime; also, the Tweedy, Browne Value Fund and Global Value Fund have both beaten market averages since their inception in 1993. In 2006, Christopher H | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing Browne wrote "The Little Book of Value Investing" in order to teach ordinary investors how to value invest. Peter Cundill was a well-known Canadian value investor who followed the Graham teachings. His flagship Cundill Value Fund allowed Canadian investors access to fund management according to the strict principles of Graham and Dodd. Warren Buffett indicated that Cundill had the credentials he was looking for in a chief investment officer. Graham's most famous student, however, is Warren Buffett, who ran successful investing partnerships before closing them in 1969 to focus on running Berkshire Hathaway. Buffett was a strong advocate of Graham's approach and credited much of his success to Graham's teachings. Another disciple, Charlie Munger, who joined Buffett at Berkshire Hathaway in the 1970s and has since worked as Vice Chairman of the company, followed Graham's basic approach of buying assets below intrinsic value, but focused on companies with robust qualitative strengths, even if they weren't statistically cheap. This approach by Munger gradually influenced Buffett by reducing his emphasis on quantitatively cheap assets, and instead encouraged him to look for long-term sustainable competitive advantages in companies, even if they weren't quantitatively cheap relative to intrinsic value. Buffett is often quoted saying, "It's better to buy a great company at a fair price, than a fair company at a great price | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing " Columbia Business School has played a significant role in shaping the principles of the "Value Investor", with professors and students making their mark on history and on each other. Ben Graham’s book, "The Intelligent Investor", was Warren Buffett’s bible and he referred to it as "the greatest book on investing ever written.” A young Warren Buffett studied under Ben Graham, took his course and worked for his small investment firm, Graham Newman, from 1954 to 1956. Twenty years after Ben Graham, Roger Murray arrived and taught value investing to a young student named Mario Gabelli. About a decade or so later, Bruce Greenwald arrived and produced his own protégés, including Paul Sonkin—just as Ben Graham had Buffett as a protégé, and Roger Murray had Gabelli. Mutual Series has a well-known reputation of producing top value managers and analysts in this modern era. This tradition stems from two individuals: Max Heine, founder of the well regarded value investment firm Mutual Shares fund in 1949 and his protégé legendary value investor Michael F. Price. Mutual Series was sold to Franklin Templeton Investments in 1996. The disciples of Heine and Price quietly practice value investing at some of the most successful investment firms in the country. Franklin Templeton Investments takes its name from Sir John Templeton, another contrarian value oriented investor | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing Seth Klarman, a Mutual Series alum, is the founder and president of The Baupost Group, a Boston-based private investment partnership, and author of "Margin of Safety, Risk Averse Investing Strategies for the Thoughtful Investor", which has since become a value investing classic. Now out of print, "Margin of Safety" has sold on Amazon for $1,200 and on eBay for $2,000. Laurence Tisch, who led Loews Corporation with his brother, Robert Tisch, for more than half a century, also embraced value investing. Shortly after his death in 2003 at age 80, Fortune wrote, "Larry Tisch was the ultimate value investor. He was a brilliant contrarian: He saw value where other investors didn't -- and he was usually right." By 2012, Loews Corporation, which continues to follow the principles of value investing, had revenues of $14.6 billion and assets of more than $75 billion. Michael Larson is the Chief Investment Officer of Cascade Investment, which is the investment vehicle for the Bill & Melinda Gates Foundation and the Gates personal fortune. Cascade is a diversified investment shop established in 1994 by Gates and Larson. Larson graduated from Claremont McKenna College in 1980 and the Booth School of Business at the University of Chicago in 1981. Larson is a well-known value investor but his specific investment and diversification strategies are not known | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing Larson has consistently outperformed the market since the establishment of Cascade and has rivaled or outperformed Berkshire Hathaway's returns as well as other funds based on the value investing strategy. Martin J. Whitman is another well-regarded value investor. His approach is called safe-and-cheap, which was previously referred to as the financial-integrity approach. Martin Whitman focuses on acquiring common shares of companies with extremely strong financial positions at a price reflecting a meaningful discount to the estimated NAV of the company concerned. Whitman believes it is ill-advised for investors to pay much attention to the trend of macro-factors (like employment, movement of interest rates, GDP, etc.) because they are not as important and attempts to predict their movement are almost always futile. Whitman's letters to shareholders of his Third Avenue Value Fund (TAVF) are considered valuable resources "for investors to pirate good ideas" by Joel Greenblatt in his book on special-situation investment "You Can Be a Stock Market Genius". Joel Greenblatt achieved annual returns at the hedge fund Gotham Capital of over 50% per year for 10 years from 1985 to 1995 before closing the fund and returning his investors' money. He is known for investing in special situations such as spin-offs, mergers, and divestitures. Charles de Vaulx and Jean-Marie Eveillard are well known global value managers | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing For a time, these two were paired up at the First Eagle Funds, compiling an enviable track record of risk-adjusted outperformance. For example, Morningstar designated them the 2001 "International Stock Manager of the Year" and de Vaulx earned second place from Morningstar for 2006. Eveillard is known for his Bloomberg appearances where he insists that securities investors never use margin or leverage. The point made is that margin should be considered anathema to value investing, since a negative price move could prematurely force a sale. In contrast, a value investor must be able and willing to be patient for the rest of the market to recognize and correct whatever pricing issue created the momentary undervaluation. Eveillard correctly labels the use of margin or leverage as speculation, the opposite of value investing. Other notable value investors include: Mason Hawkins, Thomas Forester, Whitney Tilson, Mohnish Pabrai, Li Lu, Guy Spier and Tom Gayner, who manages the investment portfolio of Markel Insurance. San Francisco investing firm Dodge & Cox, founded in 1931 and with one of the oldest US mutual funds still in existence as of 2019, emphasizes value investing. Value stocks do not always beat growth stocks, as demonstrated in the late 1990s. Moreover, when value stocks perform well, it may not mean that the market is inefficient, though it may imply that value stocks are simply riskier and thus require greater returns. | https://en.wikipedia.org/wiki?curid=1011242 |
Value investing Furthermore, Foye and Mramor (2016) find that country-specific factors have a strong influence on measures of value (such as the book-to-market ratio); this leads them to conclude that the reasons why value stocks outperform are country-specific. An issue with buying shares in a bear market is that despite appearing undervalued at one time, prices can still drop along with the market. Conversely, an issue with not buying shares in a bull market is that despite appearing overvalued at one time, prices can still rise along with the market. Also, one of the biggest criticisms of price-centric value investing is that an emphasis on low prices (and recently depressed prices) regularly misleads retail investors, because fundamentally low (and recently depressed) prices often reflect a genuine difference (or change) in a company's underlying financial health. To that end, Warren Buffett has regularly emphasized that "it's far better to buy a wonderful company at a fair price, than to buy a fair company at a wonderful price." In 2000, Stanford accounting professor Joseph Piotroski developed the "F-Score", which discriminates higher-potential members within a class of value candidates. The F-Score aims to discover additional value from signals in a firm's series of annual financial statements, after initial screening on static measures like book-to-market value. The F-Score formula takes financial-statement items as inputs and awards points for meeting predetermined criteria | https://en.wikipedia.org/wiki?curid=1011242 |
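As a rough illustration of the F-Score idea, the sketch below scores one point per binary signal. The nine signals follow the commonly cited description of Piotroski's criteria, but the field names and data layout are assumptions made for this example.

```python
# A sketch of a Piotroski-style F-Score, awarding one point per signal met.
# The dict keys are illustrative assumptions, not a standard data schema.

def f_score(cur, prev):
    """cur and prev are dicts of financial-statement items for the current
    and prior fiscal year."""
    roa = cur["net_income"] / cur["total_assets"]
    roa_prev = prev["net_income"] / prev["total_assets"]
    cfo = cur["cash_from_operations"] / cur["total_assets"]
    signals = [
        roa > 0,                                                  # profitable
        cfo > 0,                                                  # positive operating cash flow
        roa > roa_prev,                                           # improving ROA
        cfo > roa,                                                # earnings backed by cash
        cur["lt_debt"] / cur["total_assets"]
            < prev["lt_debt"] / prev["total_assets"],             # falling leverage
        cur["current_ratio"] > prev["current_ratio"],             # improving liquidity
        cur["shares_outstanding"] <= prev["shares_outstanding"],  # no dilution
        cur["gross_margin"] > prev["gross_margin"],               # improving margin
        cur["asset_turnover"] > prev["asset_turnover"],           # improving efficiency
    ]
    return sum(signals)  # 0 (weakest) to 9 (strongest)
```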
Value investing Piotroski retrospectively analyzed a class of high book-to-market stocks over the period 1976–1996, and demonstrated that high F-Score selections increased returns by 7.5% annually versus the class as a whole. The American Association of Individual Investors examined 56 screening methods in a retrospective analysis of the financial crisis of 2008, and found that only the F-Score produced positive results. Another issue is the method of calculating "intrinsic value". Some analysts believe that two investors can analyze the same information and reach different conclusions regarding the intrinsic value of the company, and that there is no systematic or standard way to value a stock. Moreover, a value investing strategy can only be considered successful if it delivers excess returns after allowing for the risk involved, where risk may be defined in many different ways, including market risk, multi-factor models or idiosyncratic risk. | https://en.wikipedia.org/wiki?curid=1011242 |
Workerism is a political theory that emphasizes the importance of, or glorifies, the working class. Workerism, or operaismo, was of particular significance in Italian left politics. Workerism ("operaismo") is a political analysis, whose main elements were later to merge into autonomism, that starts out from the power of the working class. Michael Hardt and Antonio Negri, known as operaist and autonomist writers, offer a definition of operaismo, quoting from Marx as they do so. The workerists followed Marx in seeking to base their politics on an investigation of working class life and struggle. Through translations made available by Danilo Montaldi and others, they drew upon previous activist research in the United States by the Johnson-Forest Tendency and in France by the group Socialisme ou Barbarie. The Johnson-Forest Tendency had studied working class life and struggles within the Detroit auto industry, publishing pamphlets such as "The American Worker" (1947), "Punching Out" (1952) and "Union Committeemen and Wildcat Strikes" (1955). That work was translated into French by Socialisme ou Barbarie and published, serially, in their journal. They too began investigating and writing about what was going on inside workplaces, in their case inside both auto factories and insurance offices. The journal "Quaderni Rossi" ("Red Notebooks", 1961–5), along with its successor "Classe Operaia" ("Working Class", 1963–6), both founded by Negri and Tronti, developed workerist theory, focusing on the struggles of proletarians | https://en.wikipedia.org/wiki?curid=1011693 |
Workerism Associated with this theoretical development was a praxis based on workplace organising, most notably by Lotta Continua. This reached its peak in the Italian "Hot Autumn" of 1969. By the mid-1970s, however, the emphasis shifted from the factory to "the social factory"—the everyday lives of working people in their communities. The "operaist" movement was increasingly known as autonomist. More broadly, workerism can imply the idealization of workers, especially manual workers, of working class culture (or an idealized conception of it), and of manual labour in general. Socialist realism is an example of a form of expression that would be likely to be accused of workerism in this sense, but the charge also applies to fascist movements, such as Franco's Falangists, which often used propaganda showing workers living and working in equitable conditions. The charge of workerism is often levelled at syndicalists. Traditional communist parties are also thought to be workerist, because of their supposed glorification of manual workers to the exclusion of white-collar workers. This use of the term was the most common English-language use during the twentieth century. | https://en.wikipedia.org/wiki?curid=1011693 |
Engel's law is an observation in economics stating that, as income rises, the "proportion" of income spent on food falls―even if "absolute" expenditure on food rises. In other words, the income elasticity of demand for food is between 0 and 1. The law was named after the statistician Ernst Engel (1821–1896). Engel's law does not imply that food spending remains unchanged as income increases; instead, it suggests that consumers increase their expenditures on food products "in percentage terms" less than their increases in income. One application of the statistic is treating it as a reflection of the living standard of a country; the higher that proportion―or "Engel coefficient"―is, the poorer the country is taken to be, while a "low" Engel coefficient indicates a higher standard of living. The interaction between Engel's law, technological progress, and the process of structural change is crucial for explaining long-term economic growth, as suggested by Leon and Pasinetti. | https://en.wikipedia.org/wiki?curid=1013999 |
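A small worked example of the relationship described above, with invented household figures: food spending rises with income, but less than proportionately, so the income elasticity lies between 0 and 1 and the Engel coefficient falls.

```python
# Illustrative calculation of the Engel coefficient and the income
# elasticity of food demand. All figures are made-up example numbers.

def engel_coefficient(food_spending, total_spending):
    return food_spending / total_spending

def income_elasticity(food_before, food_after, income_before, income_after):
    pct_change_food = (food_after - food_before) / food_before
    pct_change_income = (income_after - income_before) / income_before
    return pct_change_food / pct_change_income

# Income rises 20% while food spending rises only 8%:
print(income_elasticity(5_000, 5_400, 20_000, 24_000))  # 0.4, between 0 and 1

# The food share (Engel coefficient) therefore falls:
print(engel_coefficient(5_000, 20_000))  # 0.25 before the income rise
print(engel_coefficient(5_400, 24_000))  # 0.225 after the income rise
```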
Revenu Québec (formerly the Ministère du Revenu du Québec, or Quebec Ministry of Revenue) is the department of the government of the Province of Quebec, Canada that administers the province's tax laws and collects taxes. Effective 2005, the Ministère du Revenu du Québec was renamed Revenu Québec. Effective 2010, it was reconstituted as the Agence du Revenu du Québec. | https://en.wikipedia.org/wiki?curid=1019907 |
Hodrick–Prescott filter The Hodrick–Prescott filter (also known as the Hodrick–Prescott decomposition) is a mathematical tool used in macroeconomics, especially in real business cycle theory, to remove the cyclical component of a time series from raw data. It is used to obtain a smoothed-curve representation of a time series, one that is more sensitive to long-term than to short-term fluctuations. The adjustment of the sensitivity of the trend to short-term fluctuations is achieved by modifying a multiplier $\lambda$. The filter was popularized in the field of economics in the 1990s by economists Robert J. Hodrick and Nobel Memorial Prize winner Edward C. Prescott. However, it was first proposed much earlier by E. T. Whittaker in 1923. The reasoning for the methodology uses ideas related to the decomposition of time series. Let $y_t$ for $t = 1, 2, \ldots, T$ denote the logarithms of a time series variable. The series $y_t$ is made up of a trend component $\tau_t$, a cyclical component $c_t$, and an error component $\epsilon_t$ such that $y_t = \tau_t + c_t + \epsilon_t$. Given an adequately chosen, positive value of $\lambda$, there is a trend component that will solve $\min_{\tau}\left(\sum_{t=1}^{T}(y_t - \tau_t)^2 + \lambda \sum_{t=2}^{T-1}\left[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\right]^2\right)$. The first term of the equation is the sum of the squared deviations $(y_t - \tau_t)$, which penalizes the cyclical component. The second term is a multiple $\lambda$ of the sum of the squares of the trend component's second differences. This second term penalizes variations in the growth rate of the trend component. The larger the value of $\lambda$, the higher is the penalty | https://en.wikipedia.org/wiki?curid=1021099 |
Hodrick–Prescott filter Hodrick and Prescott suggest 1600 as a value for $\lambda$ for quarterly data. Ravn and Uhlig (2002) state that $\lambda$ should vary by the fourth power of the frequency observation ratio; thus, $\lambda$ should equal 6.25 (1600/4^4) for annual data and 129,600 (1600*3^4) for monthly data. The filter is only optimal under certain conditions on the data-generating process. The standard two-sided filter is non-causal as it is not purely backward looking. Hence, it should not be used when estimating DSGE models based on recursive state-space representations (e.g., likelihood-based methods that make use of the Kalman filter). The reason is that the filter uses observations from periods after $t$ to construct the trend estimate at the current time point $t$, while the recursive setting assumes that only current and past states influence the current observation. One way around this is to use the one-sided Hodrick–Prescott filter. Exact algebraic formulas are available for the two-sided filter in terms of its signal-to-noise ratio $\lambda$. A working paper by James D. Hamilton at UC San Diego titled "Why You Should Never Use the Hodrick-Prescott Filter" presents evidence against using the HP filter. Hamilton writes that: "(1) The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process. (2) A one-sided version of the filter reduces but does not eliminate spurious predictability and moreover produces series that do not have the properties sought by most potential users of the HP filter | https://en.wikipedia.org/wiki?curid=1021099 |
Hodrick–Prescott filter (3) A statistical formalization of the problem typically produces values for the smoothing parameter vastly at odds with common practice, e.g., a value for λ far below 1600 for quarterly data. (4) There's a better alternative. A regression of the variable at date t+h on the four most recent values as of date t offers a robust approach to detrending that achieves all the objectives sought by users of the HP filter with none of its drawbacks." A working paper by Robert J. Hodrick titled "An Exploration of Trend-Cycle Decomposition Methodologies in Simulated Data" examines whether the proposed alternative approach of James D. Hamilton is actually better than the HP filter at extracting the cyclical component of several simulated time series calibrated to approximate U.S. real GDP. Hodrick finds that for time series in which there are distinct growth and cyclical components, the HP filter comes closer to isolating the cyclical component than the Hamilton alternative. | https://en.wikipedia.org/wiki?curid=1021099 |
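A minimal sketch of applying the filter to quarterly data follows, using the hpfilter function from the Python statsmodels package with the conventional λ = 1600; the simulated series is purely illustrative.

```python
# Applying the Hodrick-Prescott filter to a simulated quarterly series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(200)
# A slow trend plus cyclical noise, standing in for log real GDP.
y = 0.005 * t + 0.02 * np.sin(t / 8) + rng.normal(scale=0.01, size=t.size)

# lambda = 1600 is the conventional choice for quarterly observations;
# Ravn and Uhlig suggest 6.25 for annual and 129,600 for monthly data.
cycle, trend = sm.tsa.filters.hpfilter(y, lamb=1600)

print(trend[:5])   # smoothed trend component
print(cycle[:5])   # cyclical deviations from trend
```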
Equity premium puzzle The equity premium puzzle refers to the inability of an important class of economic models to explain the average premium of the returns on a well-diversified U.S. equity portfolio over U.S. Treasury Bills observed for more than 100 years. The term was coined by Rajnish Mehra and Edward C. Prescott in a study published in 1985 titled "The Equity Premium: A Puzzle". An earlier version of the paper was published in 1982 under the title "A test of the intertemporal asset pricing model". The authors found that a standard general equilibrium model, calibrated to display key U.S. business cycle fluctuations, generated an equity premium of less than 1% for reasonable risk aversion levels. This result stood in sharp contrast with the average equity premium of 6% observed during the historical period. In simple terms, the investor returns on equities have been on average so much higher than returns on U.S. Treasury Bonds that it is hard to explain why investors buy bonds, even after allowing for a reasonable amount of risk aversion. In 1982, Robert J. Shiller published the first calculation that showed that either a large risk aversion coefficient or counterfactually large consumption variability was required to explain the means and variances of asset returns. Azeredo (2014) shows, however, that increasing the risk aversion level may produce a negative equity premium in an Arrow-Debreu economy constructed to mimic the persistence in U.S. consumption growth observed in the data since 1929 | https://en.wikipedia.org/wiki?curid=1021521 |
Equity premium puzzle The intuitive notion that stocks are much riskier than bonds is not a sufficient explanation of the observation that the magnitude of the disparity between the two returns, the equity risk premium (ERP), is so great that it implies an implausibly high level of investor risk aversion that is fundamentally incompatible with other branches of economics, particularly macroeconomics and financial economics. The process of calculating the equity risk premium, and the selection of the data used, varies from study to study, but the premium is generally accepted to be in the range of 3–7% in the long run. Dimson et al. (2006) calculated a premium of "around 3–3.5% on a geometric mean basis" for global equity markets during 1900–2005. However, over any one decade, the premium shows great variability—from over 19% in the 1950s to 0.3% in the 1970s. To quantify the level of risk aversion implied if these figures represented the "expected" outperformance of equities over bonds: an investor with that degree of risk aversion would prefer a certain payoff of $51,300 to a 50/50 bet paying either $50,000 or $100,000. The puzzle has led to an extensive research effort in both macroeconomics and finance. So far a range of useful theoretical tools and numerically plausible explanations have been presented, but no one solution is generally accepted by economists | https://en.wikipedia.org/wiki?curid=1021521 |
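A brief sketch of the certainty-equivalent comparison quoted above, assuming constant relative risk aversion (CRRA) utility. The risk-aversion coefficient of 30 is an assumed value chosen so the numbers roughly reproduce the $51,300 figure; it is not taken from the text.

```python
# Certainty equivalent of a 50/50 gamble under CRRA utility.
# The risk-aversion coefficient (30) is an illustrative assumption.

def certainty_equivalent(outcomes, probs, gamma):
    """Certain amount giving the same CRRA utility as the gamble."""
    expected_utility = sum(p * x ** (1 - gamma) / (1 - gamma)
                           for p, x in zip(probs, outcomes))
    return (expected_utility * (1 - gamma)) ** (1 / (1 - gamma))

# 50/50 bet paying $50,000 or $100,000, evaluated at risk aversion 30:
print(round(certainty_equivalent([50_000, 100_000], [0.5, 0.5], 30)))
# ~51,200 -- roughly the certain payoff quoted above, illustrating how
# extreme risk aversion must be to rationalize the observed premium.
```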
Equity premium puzzle The economy has a single representative household whose preferences over stochastic consumption paths are given by $E_0\left[\sum_{t=0}^{\infty}\beta^t U(c_t)\right]$, where $0 < \beta < 1$ is the subjective discount factor, $c_t$ is the per capita consumption at time $t$, and $U(\cdot)$ is an increasing and concave utility function. In the Mehra and Prescott (1985) economy, the utility function belongs to the constant relative risk aversion class: $U(c,\alpha) = \frac{c^{1-\alpha}}{1-\alpha}$, where $\alpha$ is the constant relative risk aversion parameter. When $\alpha = 1$, the utility function is the natural logarithmic function. Weil (1989) replaced the constant relative risk aversion utility function with the Kreps-Porteus nonexpected utility preferences. The Kreps-Porteus utility function has a constant intertemporal elasticity of substitution and a constant coefficient of relative risk aversion which are not required to be inversely related - a restriction imposed by the constant relative risk aversion utility function. The Mehra and Prescott (1985) and Weil (1989) economies are variations of the Lucas (1978) pure exchange economy. In their economies the growth rate of the endowment process, $x_t$, follows an ergodic Markov process: $y_{t+1} = x_{t+1} y_t$, where the growth rate $x_{t+1}$ takes one of finitely many values $\{\lambda_1, \ldots, \lambda_n\}$. This assumption is the key difference between Mehra and Prescott's economy and Lucas' economy, where the level of the endowment process follows a Markov process. There is a single firm producing the perishable consumption good. At any given time $t$, the firm's output must be less than or equal to $y_t$, which is stochastic and follows $y_{t+1} = x_{t+1} y_t$ | https://en.wikipedia.org/wiki?curid=1021521 |
Equity premium puzzle There is only one equity share held by the representative household. Working out the household's intertemporal choice problem leads to the fundamental pricing equation $p_t U'(c_t) = \beta E_t\left[(p_{t+1} + y_{t+1})U'(c_{t+1})\right]$, where $p_t$ is the price of the equity share; the period stock return is then computed as $R_{e,t+1} = (p_{t+1} + y_{t+1})/p_t - 1$. Equivalently, one can compute the derivative of expected utility with respect to the percentage of wealth held in stocks, and this derivative must be zero at the optimum. Much data exists that says that stocks have higher returns. For example, Jeremy Siegel says that stocks in the United States have returned 6.8% per year over a 130-year period. Proponents of the capital asset pricing model say that this is due to the higher beta of stocks, and that higher-beta stocks should return even more. Others have criticized that the period used in Siegel's data is not typical, or that the country is not typical. A large number of explanations for the puzzle have been proposed, several of which are discussed below. Kocherlakota (1996) and Mehra and Prescott (2003) present a detailed analysis of these explanations in financial markets and conclude that the puzzle is real and remains unexplained. Subsequent reviews of the literature have similarly found no agreed resolution. Azeredo (2014) showed that traditional pre-1930 consumption measures understate the extent of serial correlation in the U.S. annual real growth rate of per capita consumption of non-durables and services ("consumption growth"). Under alternative measures proposed in the study, the serial correlation of consumption growth is found to be positive | https://en.wikipedia.org/wiki?curid=1021521 |
Equity premium puzzle This new evidence implies that an important subclass of dynamic general equilibrium models studied by Mehra and Prescott (1985) generates a negative equity premium for reasonable risk-aversion levels, thus further exacerbating the equity premium puzzle. Some explanations rely on assumptions about individual behavior and preferences different from those made by Mehra and Prescott. Examples include the prospect theory model of Benartzi and Thaler (1995) based on loss aversion. A problem for this model is the lack of a general model of portfolio choice and asset valuation for prospect theory. A second class of explanations is based on relaxation of the optimization assumptions of the standard model. The standard model represents consumers as continuously-optimizing, dynamically-consistent expected-utility maximizers. These assumptions provide a tight link between attitudes to risk and attitudes to variations in intertemporal consumption, which is crucial in deriving the equity premium puzzle. Solutions of this kind work by weakening the assumption of continuous optimization, for example by supposing that consumers adopt satisficing rules rather than optimizing. An example is info-gap decision theory, based on a non-probabilistic treatment of uncertainty, which leads to the adoption of a robust satisficing approach to asset allocation | https://en.wikipedia.org/wiki?curid=1021521 |
Equity premium puzzle A further class of explanations focuses on characteristics of equity not captured by standard capital market models, but nonetheless consistent with rational optimization by investors in smoothly functioning markets. Writers including Bansal and Coleman (1996), Palomino (1996) and Holmstrom and Tirole (1998) focus on the demand for liquidity. McGrattan and Prescott (2001) argue that the observed equity premium in the United States since 1945 may be explained by changes in the tax treatment of interest and dividend income. As Mehra (2003) notes, there are some difficulties in the calibration used in this analysis, and the existence of a substantial equity premium before 1945 is left unexplained. Graham and Harvey have estimated that, for the United States, the expected average premium during the period June 2000 to November 2006 ranged between 2.50 and 4.65 percent. They found a modest correlation of 0.62 between the 10-year equity premium and a measure of implied volatility (in this case VIX, the Chicago Board Options Exchange Volatility Index). Anwar Shaikh argues that in the classical framework the equity premium is a consequence of fractional-reserve banking and competition. In the most abstract model of a fractional-reserve bank in classical economics, a bank's capital consists only of its reserves "R". The bank attracts deposits "D" such that the reserves cover a fraction "ρ = R/D" of the deposits, then creates loans "L" such that the deposits cover a fraction "d = D/L" of the loans | https://en.wikipedia.org/wiki?curid=1021521 |
Equity premium puzzle The bank then obtains a profit rate of "r = iL/R = i/(ρ·d)", where "i" = "r·ρ·d" is the interest rate on loans. Since "ρ·d" = "R/L" < 1, the profit rate "r" is higher than the interest rate "i". In a competitive market, the interest rates will be equalized across banks. Since bond holders compete with banks in the credit market, their returns are equalized with the bank interest rate. Stock returns, on the other hand, are equalized with the profit rate "r" and there is no mechanism that equalizes equity and bond rates of return. In a more realistic classical model, the bank interest rate is the sum of "r·ρ·d" and a positive term that depends on banks' operating costs and the price level, so that the equity premium is smaller than in the abstract model. The premium "r−i" must still be greater than zero for there to be an incentive for firms to borrow. The difference between interest rate and profit rate is, however, not a risk premium, but a structural factor. Two broad classes of market failure have been considered as explanations of the equity premium. First, problems of adverse selection and moral hazard may result in the absence of markets in which individuals can insure themselves against systematic risk in labor income and noncorporate profits. Second, transaction costs or liquidity constraints may prevent individuals from smoothing consumption over time. A final possible explanation is that there is no puzzle to explain: that there is no equity premium | https://en.wikipedia.org/wiki?curid=1021521 |
Equity premium puzzle This can be argued in a number of ways, all of them being different forms of the argument that we don't have enough statistical power to distinguish the equity premium from zero: A related criticism is that the apparent equity premium is an artifact of observing stock market bubbles in progress. Note, however, that most mainstream economists agree that the evidence shows substantial statistical power. The magnitude of the equity premium has implications for resource allocation, social welfare, and economic policy. Grant and Quiggin (2005) derive the following implications of the existence of a large equity premium: | https://en.wikipedia.org/wiki?curid=1021521 |
Balassa–Samuelson effect The Balassa–Samuelson effect, also known as the Harrod–Balassa–Samuelson effect (Kravis and Lipsey 1983), the Ricardo–Viner–Harrod–Balassa–Samuelson–Penn–Bhagwati effect (Samuelson 1994, p. 201), or productivity biased purchasing power parity (PPP) (Officer 1976), is the tendency for consumer prices to be systematically higher in more developed countries than in less developed countries. This observation about the systematic differences in consumer prices is called the "Penn effect". The Balassa–Samuelson hypothesis is the proposition that this can be explained by the greater variation in productivity, between developed and less developed countries, in the traded goods' sectors than in the non-tradable sectors. Béla Balassa and Paul Samuelson independently proposed the causal mechanism for the Penn effect in the early 1960s. The effect depends on inter-country differences in the relative productivity of the tradable and non-tradable sectors. By the law of one price, entirely tradable goods cannot vary greatly in price by location (because buyers can source from the lowest cost location). However most services must be delivered locally (e.g. hairdressing), and many manufactured goods such as furniture have high transportation costs (or, conversely, low value-to-weight or low value-to-bulk ratios), which makes deviations from one price (known as purchasing power parity or PPP-deviations) persistent. The Penn effect is that PPP-deviations usually occur in the same direction: where incomes are high, average price levels are typically high | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect The simplest model which generates a Balassa–Samuelson effect has two countries, two goods (one tradable, and a country-specific nontradable) and one factor of production, labor. For simplicity, assume that productivity, as measured by the marginal product of labor (in terms of goods produced), in the nontradable sector is equal between countries and normalized to one: $MPL_{nt,1} = MPL_{nt,2} = 1$, where "nt" denotes the nontradable sector and 1 and 2 index the two countries. In each country, under the assumption of competition in the labor market, the wage ends up being equal to the value of the marginal product, or the sector's price times MPL. (Note that this is not necessary, just sufficient, to produce the Penn effect. What is needed is that wages are at least related to productivity.) Thus $w_1 = p_t \cdot MPL_{t,1}$ and $w_2 = p_t \cdot MPL_{t,2}$, where the subscript "t" denotes the tradables sector. Note that the lack of a country-specific subscript on the price of tradables means that tradable goods prices are equalized between the two countries. Suppose that country 2 is the more productive, and hence, the wealthier one. This means that $MPL_{t,2} > MPL_{t,1}$, which implies that $w_2 > w_1$. So with the same (world) price for tradable goods, the price of nontradable goods will be lower in the less productive country, resulting in an overall lower price level (illustrated numerically in the sketch below). A typical discussion of this argument would include the following features: The average asking price for a house in a prosperous city can be ten times that of an identical house in a depressed area of the "same country" | https://en.wikipedia.org/wiki?curid=1025655 |
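The sketch below puts illustrative numbers on the two-country model just described; the productivity levels, world tradables price, and consumption weights are all invented.

```python
# A minimal numerical sketch of the two-country Balassa-Samuelson setup.
# All figures are invented for illustration.

p_tradable = 1.0          # world price of the tradable good (law of one price)
mpl_tradable = {"country_1": 1.0, "country_2": 2.0}   # country 2 is more productive
share_nontradable = 0.5   # weight of nontradables in the price index

for country, mpl_t in mpl_tradable.items():
    wage = p_tradable * mpl_t           # competitive labor market: w = p_t * MPL_t
    p_nontradable = wage                # MPL in nontradables is normalized to 1
    price_level = (share_nontradable * p_nontradable
                   + (1 - share_nontradable) * p_tradable)
    print(country, "wage:", wage, "price level:", price_level)

# Country 2's higher tradables productivity raises its wages and hence its
# nontradables prices, giving it the higher overall price level (Penn effect).
```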
Balassa–Samuelson effect Therefore, the RER-deviation exists independent of what happens to the "nominal exchange rate" (which is always 1 for areas sharing the same currency). Looking at the price level distribution within a country gives a clearer picture of the effect, because this removes three complicating factors: A pint of pub beer is famously more expensive in the south of England than the north, but supermarket beer prices are very similar. This may be treated as anecdotal evidence in favour of the Balassa–Samuelson hypothesis, since supermarket beer is an easily transportable, traded good. (Although pub beer is transportable, the pub itself is not.) The BS-hypothesis explanation for the price differentials is that the 'productivity' of pub employees (in pints served per hour) is more uniform than the 'productivity' (in foreign currency earned per year) of people working in the dominant tradable sector in each region of the country (financial services in the south of England, manufacturing in the north). Although the employees of southern pubs are not significantly more productive than their counterparts in the north, southern pubs must pay wages comparable to those offered by other southern firms in order to keep their staff. This results in southern pubs incurring a higher labour cost per pint served. Evidence for the Penn effect is well established in today's world (and is readily observable when traveling internationally) | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect However, the Balassa–Samuelson (BS) hypothesis implies that countries with rapidly expanding economies should tend to have more rapidly appreciating exchange rates (for instance the Four Asian Tigers); conventional econometric tests have produced mixed findings for the predictions of the BS effect. In total, since it was (re)discovered in 1964, according to Tica and Druzic (2006) the HBS theory "has been tested 60 times in 98 countries in time series or panel analyses and in 142 countries in cross-country analyses. In these analyzed estimates, country specific HBS coefficients have been estimated 166 times in total, and at least once for 65 different countries". One should also bear in mind that many papers have been published since then. Bahmani-Oskooee and Abm (2005) and Egert, Halpern and McDonald (2006) also provide quite interesting surveys of the empirical evidence on the BS effect. Over time, the testing of the HBS model has evolved quite dramatically. Panel data and time series techniques have crowded out old cross-section tests, demand-side and terms-of-trade variables have emerged as explanatory variables, new econometric methodologies have replaced old ones, and recent improvements with endogenous tradability have provided direction for future researchers. The sector approach combined with panel data analysis and/or cointegration has become a benchmark for empirical tests | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect Consensus has been reached on the testing of internal and external HBS effects (vis-à-vis a numeraire country) with a strong reservation against the purchasing power parity assumption in the tradable sector. Analysis of empirical data shows that the vast majority of the evidence supports the HBS model. A deeper analysis of the empirical evidence shows that the strength of the results is strongly influenced by the nature of the tests and the set of countries analyzed. Almost all cross-section tests confirm the model, while panel data results confirm the model for the majority of countries included in the tests. Although some negative results have been returned, there has been strong support for the predictions of a cointegration between relative productivity and relative prices within a country and between countries, while the interpretation of evidence for cointegration between the real exchange rate and relative productivity has been much more controversial. Therefore, most contemporary authors (see for example Egert, Halpern and McDonald (2006) or Drine & Rault (2002)) analyze the main BS assumptions separately. Refinements to the econometric techniques and debate about alternative models are continuing in the international economics community | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect For instance: The next section lists some of the alternative proposals to an explanation of the Penn effect, but there are significant econometric problems with testing the BS-hypothesis, and the lack of strong evidence for it between modern economies may not refute it, or may imply that it produces a small effect. For instance, other effects of exchange rate movements might mask the long-term BS-hypothesis mechanism (making it harder to detect if it exists). Exchange rate movements are believed by some to affect productivity; if this is true then regressing RER movements on differential productivity growth will be 'polluted' by a totally different relationship between the variables. Most professional economists accept that the model has some merit. However, other sources of the Penn effect RER/GDP relationship have been proposed: In a 2001 International Monetary Fund working paper, Macdonald & Ricci accept that relative productivity changes produce PPP-deviations, but argue that this is not confined to tradables versus non-tradable sectors. Quoting the abstract: "an increase in the productivity and competitiveness of the distribution sector with respect to foreign countries leads to an appreciation of the real exchange rate, similarly to what a relative increase in the domestic productivity of tradables does". Capital inflows (say to the Netherlands) may stimulate currency appreciation through demand for money | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect As the RER appreciates, the competitiveness of the traded-goods sectors falls (in terms of the international price of traded goods). In this model, there has been no change in real economy productivities, but money price productivity in traded goods has been exogenously lowered through currency appreciation. Since capital inflow is associated with high-income states (e.g. Monaco), this could explain part of the RER/Income correlation. Yves Bourdet and Hans Falck have studied the effect of Cape Verde remittances on the traded-goods sector. They find that, as local incomes have risen with a doubling of remittances from abroad, the Cape Verde RER has appreciated 14% (during the 1990s). The export sector of the Cape Verde economy suffered a similar fall in productivity during the same period, which was caused entirely by capital flows and not by the BS-effect. Rudi Dornbusch (1998) and others say that income rises can change the ratio of demand for goods and services (tradable and non-tradable sectors). This is because services tend to be superior goods, which are consumed proportionately more heavily at higher incomes. A shift in preferences at the microeconomic level, caused by an income effect, can change the make-up of the consumer price index to include proportionately more expenditure on services | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect This alone may shift the consumer price index, and might make the non-trade sector look relatively less productive than it had been when demand was lower; if service quality (rather than quantity) follows diminishing returns to labour input, a general demand for a higher service quality automatically produces a reduction in per-capita productivity. A typical labour market pattern is that high-GDP countries have a higher ratio of service-sector to traded-goods-sector employment than low-GDP countries. If the traded/non-traded consumption ratio is also correlated with the price level, the Penn effect would still be observed with labour productivity rising equally fast (in identical technologies) between countries. Lipsey and Swedenborg (1996) show a strong correlation between barriers to free trade and the domestic price level. If wealthy countries feel more able to protect their native producers than developing nations (e.g. with tariffs on agricultural imports), we should expect to see a correlation between rising GDP and rising prices (for goods in protected industries - especially food). This explanation is similar to the BS-effect, since an industry needing protection must be measurably less productive in the world market of the commodity it produces | https://en.wikipedia.org/wiki?curid=1025655 |
Balassa–Samuelson effect However, this reasoning is slightly different from the pure BS-hypothesis, because the goods being produced are 'traded-goods', even though protectionist measures mean that they are more expensive on the domestic market than the international market, so they will not be "traded" internationally. The supply-side economists (and others) have argued that raising international competitiveness through policies that promote traded goods sectors' productivity (at the expense of other sectors) will increase a nation's GDP, and increase its standard of living, when compared with treating the sectors equally. The Balassa–Samuelson effect might be one reason to oppose this trade theory, because it predicts that "a GDP gain in traded goods does not lead to as much of an improvement in the living standard as an equal GDP increase in the non-traded sector". (This is due to the effect's prediction that the CPI will increase by more in the former case.) The model was developed independently in 1964 by Béla Balassa and Paul Samuelson. The effect had previously been hypothesized in the first edition of Roy Forbes Harrod's "International Economics" (1939, pp. 71–77), but this portion was not included in subsequent editions. Partly because empirical findings have been mixed, and partly to differentiate the model from its conclusion, modern papers tend to refer to the Balassa–Samuelson "hypothesis", rather than the Balassa–Samuelson "effect". (See for instance: "A panel data analysis of the Balassa-Samuelson hypothesis", referred to above.) | https://en.wikipedia.org/wiki?curid=1025655 |
Tax advantage refers to the economic bonus which applies to certain accounts or investments that are, by statute, tax-reduced, tax-deferred, or tax-free. Governments establish the tax advantages to encourage private individuals to contribute money when it is considered to be in the public interest. An example is retirement plans, which often offer tax advantages to incentivize savings for retirement. In the United States, many government bonds (such as state bonds or municipal bonds) may also be exempt from certain taxes. In countries in which the average age of the population is increasing, tax advantages may put pressure on pension schemes. For example, where benefits are funded on a pay-as-you-go basis, the benefits paid to those receiving a pension come directly from the contributions of those of working age. If the proportion of pensioners to working-age people rises, the contributions needed from working people will also rise proportionately. In the United States, the rapid onset of Baby Boomer retirement is currently causing such a problem. However, there are international limitations regarding tax advantages realized through pension plans. If a person holds dual citizenship in the United States and the United Kingdom, they may have tax liabilities to both. If this person is living in the United Kingdom, their pension could have tax advantages in the UK, for example, but not in the US. Even though a UK pension may be exempt from UK tax, it doesn't necessarily mean that it is exempt from US taxes | https://en.wikipedia.org/wiki?curid=1028140 |
Tax advantage In short, a US taxpayer with dual citizenship may have to pay taxes on the gains from the UK pension to the United States government, but not to the United Kingdom. In order to reduce the burden on such schemes, many governments give privately funded retirement plans a tax-advantaged status in order to encourage more people to contribute to such arrangements. Governments often exclude such contributions from an employee's taxable income, while allowing employers to receive tax deductions for contributions to plan funds. Investment earnings in pension funds are almost universally excluded from income tax while accumulating, prior to payment. Payments to retirees and their beneficiaries also sometimes receive favorable tax treatment. In return for a pension scheme's tax-advantaged status, governments typically enact restrictions to discourage access to a pension fund's assets before retirement. Investing in annuities may allow investors to realize tax advantages that are not realized through other tax-deferred retirement accounts, such as 401(k)s and IRAs. One of the great advantages of annuities is that they allow an investor to store away large amounts of cash and defer paying taxes. There is no yearly limit to contributions for annuities. This is especially useful for those approaching retirement age who may not have saved large sums throughout previous years. The total investment compounds annually without any federal taxes | https://en.wikipedia.org/wiki?curid=1028140 |
Tax advantage This allows each dollar in the entire investment to accrue interest, which could potentially be an advantage compared to taxable investments. Additionally, upon cashing the annuity out, the investor can decide to receive a lump-sum payment or develop a more spread-out payout plan. In order to encourage home ownership, there are tax deductions on mortgage payments. Likewise, to encourage charitable donations from high-net-worth individuals, there are tax deductions on charitable donations greater than a specified amount. In the United States, life insurance policies also have tax advantages. Income can grow in a life insurance policy that is tax-deferred or tax-free. Additionally, certain proceeds within certain life insurance policies are excluded from estate and/or inheritance taxes. Additionally, investments in partnerships and Limited Liability Companies also have tax advantages. For individual owners of businesses, the LLC is taxed as a sole proprietorship. This means that the entity is not taxed, but the income earned by the entity is taxed to the owner. The LLC has important tax advantages, such as the owner's profits potentially being taxed at the owner's lower marginal tax bracket. Furthermore, losses can offset the sole proprietor's non-business income. If there are multiple owners of a Limited Liability Company, there are also tax advantages associated with it. They can choose to be taxed as a partnership, but they can also decide to be taxed as a corporate entity | https://en.wikipedia.org/wiki?curid=1028140 |
Tax advantage Partnerships are not taxed, but corporations are. For LLCs taxed as partnerships, the income is taxed to the partners. For a corporation or an LLC taxed like a corporation, the entity is subject to tax, and dividends on after-tax income are also taxed to the shareholders of the corporation or the members of the LLC. The capital gains tax can also be thought of as a tax advantage or benefit. When an investor receives a profit by selling a capital asset, such as stock, this is taxed at the capital gains rate, which is often lower than the income tax rate. Thus, business owners and investors are incentivized to make profits through capital gains rather than a steady wage. In the United States, real estate investments are one of the best ways to yield tax advantages. One benefit is the ability to recover the cost of income-producing properties (for example, commercial real estate) through depreciation. When a property is bought in the United States, the cost of the building and land are capitalized. If the building is a commercial property or a rental property used in a business, the cost of the building is depreciated over 39 years for non-residential buildings and 27.5 years for residential buildings using the straight-line depreciation method for tax purposes. The building's cost is written off over the lifespan of the building through annual depreciation deductions. Thus, the building owner receives these depreciation deductions as tax advantages at their income tax rate | https://en.wikipedia.org/wiki?curid=1028140 |
Tax advantage Upon the sale of a property, depreciation recapture is the part of the gain that the depreciation deductions are responsible for during the period of ownership. The following example shows the idea of depreciation in a clear manner. A building owner buys a building for $20 million. After 5 years the owner has taken $1 million of depreciation deductions. Now, the building owner's basis in the building is $19 million. If the owner decides to sell the building for $25 million, the building owner will realize a gain of $6 million ($25 million less $19 million). Oftentimes people wrongly assume that this $6 million is taxed at a capital gains rate, but this is a common misconception. In this example, $1 million of the gain would actually be taxed at the depreciation recapture rate, and the other $5 million at the capital gains rate. In essence, building tax advantages into the law provides a government subsidy for engaging in this behavior. Encouraging people to save for retirement is clearly a good idea, because it reduces the need for the government to support people later in life through welfare or other government spending, but does a favorable capital gains tax rate actually spur investment? And should the capital gains tax benefit be limited to direct investments in businesses rather than the secondary capital markets (since the latter do not provide financing for growing businesses)? | https://en.wikipedia.org/wiki?curid=1028140 |
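The example above can be reproduced with a few lines of arithmetic. This sketch simplifies the convention (no land allocation; the figures are taken directly from the example, not from the 39-year schedule).

```python
# Reproducing the depreciation-recapture example from the text above.

purchase_price = 20_000_000
accumulated_depreciation = 1_000_000   # deductions taken over 5 years (from the example)
sale_price = 25_000_000

adjusted_basis = purchase_price - accumulated_depreciation    # $19,000,000
total_gain = sale_price - adjusted_basis                      # $6,000,000

# The portion of the gain attributable to depreciation is "recaptured" and
# taxed at the recapture rate; only the remainder is a capital gain.
recaptured = min(total_gain, accumulated_depreciation)        # $1,000,000
capital_gain = total_gain - recaptured                        # $5,000,000

print(adjusted_basis, total_gain, recaptured, capital_gain)
```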
Net D net 10, net 15, net 30 and net 60 (often hyphenated "net-" and/or followed by "days", e.g., "net 10 days") are forms of trade credit which specify that the net amount (the total outstanding on the invoice) is expected to be paid in full by the buyer within 10, 15, 30 or 60 days of the date when the goods are dispatched or the service is completed. Net 30 or net 60 terms are often coupled with a credit for early payment. The word "net" in this sense means "total after all discounts". It originally derives from the Latin "nitere" (to shine) and "nitidus" (elegant, trim), and more recently from the French "net" (sharp, neat, clean). Net 30 is a term that most businesses and government bodies (federal, state, and local) use in the United States. Net 10 and net 15 are widely used as well, especially for contractors and service-oriented businesses (as opposed to those that deal with tangible goods). Net 60 is not used as frequently due to its longer payment term. Legally speaking, net 30 means that the buyer will pay the seller in full on or before the 30th calendar day (including weekends and holidays) after the goods were dispatched by the seller or the services were fully provided. Transit time is included when counting the days, i.e. a purchase in transit for 7 days before receipt has just 23 additional days until payment is due to the seller. Net 30 payment terms typically have an interest penalty for not meeting these terms, and interest begins accruing on the 31st day after dispatch | https://en.wikipedia.org/wiki?curid=1029359 |
Net D The same applies to net 60, but 60 days are given for payment, interest penalties begin on the 61st day, and thus a purchase in transit for 7 days now has 53 days until payment is due to the seller. In certain markets such as the United Kingdom, a construction such as "net 30, end of the month" or "Net Monthly Account" indicates that payment in full is expected by the end of the month following the month of the invoice. | https://en.wikipedia.org/wiki?curid=1029359 |
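A short sketch of the day-counting described above, using Python's standard datetime module; the function name and the treatment of transit days are illustrative assumptions, not a standard.

```python
# Computing a net-D due date and the payment window remaining after transit,
# counting calendar days from dispatch as described above.
from datetime import date, timedelta

def net_terms(dispatch_date, net_days, transit_days=0):
    """Return (due_date, days_remaining_after_receipt) for net-D terms."""
    due_date = dispatch_date + timedelta(days=net_days)
    days_remaining = net_days - transit_days   # transit counts against the buyer
    return due_date, days_remaining

due, remaining = net_terms(date(2024, 3, 1), net_days=30, transit_days=7)
print(due)        # 2024-03-31 -- interest penalties would begin the next day
print(remaining)  # 23 days left to pay after a 7-day transit
```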
Family income is generally considered a primary measure of a nation's financial prosperity. In the United States, political parties perennially disagree over which economic policies are more likely to increase family income. The party in power often takes the credit (or blame) for any significant changes in family income. | https://en.wikipedia.org/wiki?curid=1032184 |
Economic base analysis is a theory that posits that activities in an area divide into two categories: basic and nonbasic. Basic industries are those exporting from the region and bringing wealth in from outside, while nonbasic (or service) industries support the basic industries. Because export-import flows are usually not tracked at sub-national (regional) levels, it is not practical to study industry output and trade flows to and from a region. As an alternative, the concepts of basic and nonbasic are operationalized using employment data. The theory was developed by Robert Murray Haig in his work on the Regional Plan of New York in 1928. The basic industries of a region are identified by comparing employment in the region to national norms. If employment in, for example, Egyptian woodwind manufacturing is 5 percent of total employment nationally but 8 percent of employment in the region, then the excess 3 percentage points of the region's woodwind employment are considered basic. Once basic employment is identified, its outlook is investigated and projections are made sector by sector. In turn, this permits the projection of total employment in the region. Typically the basic/nonbasic employment ratio is about 1:1. By further manipulation of the data and comparisons, conjectures may be made about population and income. This is a rough but serviceable procedure, and it remains in use today. It has the advantage of being readily operationalized, adjusted, and understood | https://en.wikipedia.org/wiki?curid=1033912 |
Economic base analysis The formula for computing location quotients can be written as $LQ_i = \frac{e_i / e}{E_i / E}$, where $e_i$ is local employment in industry i, $e$ is total local employment, $E_i$ is reference area employment in industry i, and $E$ is total reference area employment. It is assumed that the base year is identical for all of the above variables. The figure showing location quotients uses data from "Compare Minnesota: Profiles of Minnesota's Economy and Population, 2002–2003". It uses the term location quotient, a number derived by comparing the percentage of employment in a place (Minnesota) with the percentage of employment nationwide. Minnesota has about the same percentage of high-technology employment as does the nation. It has more medical devices employment than the national average (due to companies such as Medtronic). Economic base ideas are easy to understand, as are measures made of employment. For instance, it is well known that the economy of Seattle, Washington is tied to aircraft manufacturing, that of Detroit, Michigan, to automobiles, and that of Silicon Valley to high-tech manufacturing. When newspapers discuss the closing of military bases, they may say something like: "5,000 jobs at the base will be lost. That's going to hit the economy hard because it means a loss of 10,000 jobs in the community." To forecast, the main procedure is to compare the region with the nation and national trends. If the economic base of a region is in industries that are declining nationwide, then the region faces a problem | https://en.wikipedia.org/wiki?curid=1033912 |
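A minimal sketch of the location-quotient computation in Python, using made-up employment figures; the variable names are illustrative.

```python
def location_quotient(local_industry_emp, local_total_emp,
                      ref_industry_emp, ref_total_emp):
    """LQ_i = (e_i / e) / (E_i / E): the industry's local employment share
    relative to its share in the reference (e.g. national) economy."""
    local_share = local_industry_emp / local_total_emp
    reference_share = ref_industry_emp / ref_total_emp
    return local_share / reference_share

# Hypothetical figures: the industry employs 8% of regional workers vs 5% nationally.
lq = location_quotient(8_000, 100_000, 5_000_000, 100_000_000)
print(lq)  # 1.6 -> LQ above 1 suggests the industry is "basic" (exporting) for the region
```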
Economic base analysis If its economic base is concentrated in sectors that are growing, then it is in good shape. Methodologically, economic base analysis views the region as if it were a small nation and uses notions of relative and comparative advantage from international trade theory (Charles Tiebout 1963). In a sense, the activity is macroeconomics "written small", and it has not been of much interest to urban economists in recent years because it does not get at within-city relationships. The analysis usually takes US growth patterns as a given. The fates of regions are determined by trends in the national economy. As H. Craig Davis points out, there are a number of assumptions on which economic base analysis is conducted. These include (1) that exports are the sole source of economic growth (investment, government spending, and household consumption are ignored); (2) that the export industry is homogeneous (i.e., that an increase or decrease of one export does not affect another); (3) the constancy of the export/service ratio; (4) that there is no inter-regional feedback; and (5) that there is a pool of underutilized resources. | https://en.wikipedia.org/wiki?curid=1033912 |
Economic forecasting is the process of making predictions about the economy. Forecasts can be carried out at a high level of aggregation—for example for GDP, inflation, unemployment or the fiscal deficit—or at a more disaggregated level, for specific sectors of the economy or even specific firms. Many institutions engage in economic forecasting: national governments, banks and central banks, consultants and private sector entities such as think-tanks, companies and international organizations such as the International Monetary Fund, World Bank and the OECD. Some forecasts are produced annually, but many are updated more frequently. The economist typically considers risks (i.e., events or conditions that can cause the result to vary from the initial estimates), and these risks help illustrate the reasoning used in arriving at the final forecast numbers. Economists typically use commentary along with data visualization tools such as tables and charts to communicate their forecasts. In preparing economic forecasts, a wide variety of information is used in an attempt to increase accuracy: macroeconomic and microeconomic data, market data (including data from futures markets), machine learning (artificial neural networks), and studies of human behavior have all been drawn on to achieve better forecasts. Forecasts are used for a variety of purposes. Governments and businesses use economic forecasts to help them determine their strategy, multi-year plans, and budgets for the upcoming year | https://en.wikipedia.org/wiki?curid=1033935 |
Economic forecasting Stock market analysts use forecasts to help them estimate the valuation of a company and its stock. Economists select which variables are important to the subject material under discussion. Economists may use statistical analysis of historical data to determine the apparent relationships between particular independent variables and the dependent variable under study. For example, to what extent did changes in housing prices affect the net worth of the population overall in the past? This relationship can then be used to forecast the future. That is, if housing prices are expected to change in a particular way, what effect would that have on the future net worth of the population? Forecasts are generally based on sample data rather than a complete population, which introduces uncertainty. The economist conducts statistical tests and develops statistical models (often using regression analysis) to determine which relationships best describe or predict the behavior of the variables under study. Historical data and assumptions about the future are applied to the model in arriving at a forecast for particular variables. The Economic Outlook is the OECD's twice-yearly analysis of the major economic trends and prospects for the next two years. The IMF publishes the World Economic Outlook report twice annually, which provides comprehensive global coverage. The U.S | https://en.wikipedia.org/wiki?curid=1033935 |
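As a sketch of the regression-based approach just described, the following Python snippet (with made-up historical figures and a hypothetical future housing-price level) fits a simple linear relationship between housing prices and household net worth and applies it to produce a forecast; real forecasting models are considerably richer.

```python
import numpy as np

# Hypothetical historical series: housing price index and household net worth index.
housing_prices = np.array([100, 105, 112, 118, 125, 131])
net_worth      = np.array([200, 208, 221, 230, 243, 252])

# Estimate the historical relationship with a simple linear regression.
slope, intercept = np.polyfit(housing_prices, net_worth, deg=1)

# Apply the estimated relationship to an assumed future housing price level.
future_housing_price = 140
forecast_net_worth = slope * future_housing_price + intercept
print(round(forecast_net_worth, 1))
```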
Economic forecasting Congressional Budget Office (CBO) publishes a report titled "The Budget and Economic Outlook" annually, which primarily covers the following ten-year period. Members of the U.S. Federal Reserve Board of Governors also give speeches, provide testimony, and issue reports throughout the year that cover the economic outlook. Large banks such as Wells Fargo and JP Morgan Chase provide economics reports and newsletters. Forecasts from multiple sources may be arithmetically combined, and the result is often referred to as a consensus forecast. A large volume of forecast information is published by private firms, central banks and government agencies to meet the strong demand for economic forecast data. Consensus Economics, among other forecasting companies, compiles the macroeconomic forecasts prepared by a variety of forecasters and publishes them every month. "The Economist" magazine regularly provides such a snapshot as well, for a narrower range of countries and variables. The process of economic forecasting is similar to data analysis and results in estimated future values for key economic variables. An economist applies the techniques of econometrics in the forecasting process, typically selecting variables, estimating relationships from historical data, and applying assumptions about the future. Forecasters may use computable general equilibrium models or dynamic stochastic general equilibrium models; the latter are often used by central banks | https://en.wikipedia.org/wiki?curid=1033935 |
Economic forecasting Methods of forecasting include econometric models, consensus forecasts, economic base analysis, shift-share analysis, input–output models and the Grinold and Kroner model. See also land use forecasting, reference class forecasting, transportation planning and calculating demand forecast accuracy. The World Bank provides a means for individuals and organizations to run their own simulations and forecasts using its "iSimulate platform". There are many studies on the subject of forecast accuracy. Accuracy is one of the main criteria, if not the main criterion, used to judge forecast quality. Some of the references below relate to academic studies of forecast accuracy. Forecasting performance appears to be time dependent, with some exogenous events affecting forecast quality. As expert forecasts are generally better than market-based forecasts, forecast performance is also model dependent. In early 2014 the OECD carried out a self-analysis of its projections. "The OECD also found that it was too optimistic for countries that were most open to trade and foreign finance, that had the most tightly regulated markets and weak banking systems," according to the Financial Times. The financial and economic crisis that erupted in 2007—arguably the worst since the Great Depression of the 1930s—was not foreseen by most forecasters, even if a few analysts had been predicting it for some time (for example, Nouriel Roubini and Robert Shiller) | https://en.wikipedia.org/wiki?curid=1033935 |
Economic forecasting The failure to forecast the "Great Recession" has caused a lot of soul-searching in the profession. The UK's Queen Elizabeth herself asked why nobody had noticed that the credit crunch was on its way, and a group of economists (experts from business, the City, its regulators, academia, and government) tried to explain in a letter. | https://en.wikipedia.org/wiki?curid=1033935 |
Lerner symmetry theorem The Lerner symmetry theorem is a result used in international trade theory which states that an ad valorem import tariff (a tariff levied as a percentage of the good's value) will have the same effects as an export tax. The theorem is based on the observation that the effect on relative prices is the same regardless of which policy (ad valorem tariffs or export taxes) is applied. The theorem was developed by economist Abba P. Lerner in 1936. | https://en.wikipedia.org/wiki?curid=1035908 |
Hicks–Marshall laws of derived demand In economics, the Hicks–Marshall laws of derived demand assert that, other things equal, the own-wage elasticity of demand for a category of labor is high under the following conditions: when the price elasticity of demand for the product being produced is high; when other factors of production can be easily substituted for the category of labor; when the supply of other factors of production is highly elastic; and when the cost of employing the category of labor is a large share of total production costs. The name "Hicks–Marshall" refers to economists John Hicks (from "The Theory of Wages", 1932) and Alfred Marshall (from "Principles of Economics", 1890). | https://en.wikipedia.org/wiki?curid=1036205 |
Input–output model In economics, an input–output model is a quantitative economic model that represents the interdependencies between different sectors of a national economy or different regional economies. Wassily Leontief (1906–1999) is credited with developing this type of analysis and earned the Nobel Memorial Prize in Economics for his development of this model. François Quesnay had developed a cruder version of this technique, called the Tableau économique, and Léon Walras's work "Elements of Pure Economics" on general equilibrium theory was also a forerunner and a generalization of Leontief's seminal concept. Alexander Bogdanov has been credited with originating the concept in a report delivered to the All Russia Conference on the Scientific Organisation of Labour and Production Processes in January 1921. This approach was also developed by L. N. Kritsman; T. F. Remington has argued that their work provided a link between Quesnay's tableau économique and the subsequent contributions of Vladimir Groman and Vladimir Bazarov to Gosplan's method of material balance planning. Wassily Leontief's work on the input–output model was influenced by the classical economists Karl Marx and Jean Charles Léonard de Sismondi. Karl Marx's economics provided an early outline involving a set of tables where the economy consisted of two interlinked departments. Leontief was the first to use a matrix representation of a national (or regional) economy | https://en.wikipedia.org/wiki?curid=1036651 |
Input–output model The model depicts inter-industry relationships within an economy, showing how output from one industrial sector may become an input to another industrial sector. In the inter-industry matrix, column entries typically represent inputs to an industrial sector, while row entries represent outputs from a given sector. This format therefore shows how dependent each sector is on every other sector, both as a customer of outputs from other sectors and as a supplier of inputs. Each column of the input–output matrix shows the monetary value of inputs to each sector and each row represents the value of each sector's outputs. Say that we have an economy with $n$ sectors. Each sector produces $x_i$ units of a single homogeneous good. Assume that the $j$th sector, in order to produce 1 unit, must use $a_{ij}$ units from sector $i$. Furthermore, assume that each sector sells some of its output to other sectors (intermediate output) and some of its output to consumers (final output, or final demand). Call final demand in the $i$th sector $d_i$. Then we might write $x_i = a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n + d_i$, or total output equals intermediate output plus final output. If we let $A$ be the matrix of coefficients $a_{ij}$, $x$ be the vector of total output, and $d$ be the vector of final demand, then our expression for the economy becomes $x = Ax + d$, which after re-writing becomes $(I - A)x = d$ | https://en.wikipedia.org/wiki?curid=1036651 |
Input–output model If the matrix $I - A$ is invertible then this is a linear system of equations with a unique solution, so given some final demand vector the required output can be found as $x = (I - A)^{-1} d$. Furthermore, if the principal minors of the matrix $I - A$ are all positive (known as the Hawkins–Simon condition), the required output vector $x$ is non-negative. Consider an economy with two goods, A and B, with a given matrix of technical coefficients and a final demand of 7 units of good A and 4 units of good B. Intuitively, this corresponds to finding the amount of output each sector should produce so that, after intermediate uses are accounted for, 7 units of good A and 4 units of good B are left over for final demand; the required outputs are obtained by solving the system of linear equations derived above. There is an extensive literature on these models. There is the Hawkins–Simon condition on producibility. There has been research on disaggregation to clustered inter-industry flows, and on the study of constellations of industries. A great deal of empirical work has been done to identify coefficients, and data have been published for the national economy as well as for regions. The Leontief system can be extended to a model of general equilibrium; it offers a method of decomposing work done at a macro level. While national input–output tables are commonly created by countries' statistics agencies, officially published regional input–output tables are rare. Therefore, economists often use location quotients to create regional multipliers starting from national data | https://en.wikipedia.org/wiki?curid=1036651 |
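The linear algebra above translates directly into a few lines of Python. The technical coefficients below are made up for illustration (the article's own example matrix is not reproduced here); the final demand of 7 units of good A and 4 units of good B follows the text.

```python
import numpy as np

# Hypothetical technical coefficients a_ij: units of good i needed per unit of good j.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
d = np.array([7.0, 4.0])   # final demand: 7 units of good A, 4 units of good B

# Solve (I - A) x = d for the total output vector x.
x = np.linalg.solve(np.eye(2) - A, d)
print(x)  # total output each sector must produce to satisfy final demand
```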
Input–output model This technique has been criticized because there are several location quotient regionalization techniques, and none are universally superior across all use-cases. Transportation is implicit in the notion of inter-industry flows. It is explicitly recognized when transportation is identified as an industry – how much is purchased from transportation in order to produce. But this is not very satisfactory because transportation requirements differ, depending on industry locations and capacity constraints on regional production. Also, the receiver of goods generally pays freight cost, and often transportation data are lost because transportation costs are treated as part of the cost of the goods. Walter Isard and his student, Leon Moses, were quick to see the spatial economy and transportation implications of input–output, and began work in this area in the 1950s developing a concept of interregional input–output. Take the case of one region versus the rest of the world: if we wish to know something about interregional commodity flows, we introduce a column into the table headed "exports" and an "imports" row. A more satisfactory way to proceed would be to tie regions together at the industry level. That is, we could identify both intra-region inter-industry transactions and inter-region inter-industry transactions. The problem here is that the table grows quickly. Input–output is conceptually simple. Its extension to a model of equilibrium in the national economy has been done successfully using high-quality data | https://en.wikipedia.org/wiki?curid=1036651 |
Input–output model Anyone who wishes to work with input–output systems must deal skillfully with industry classification, data estimation, and inverting very large, ill-conditioned matrices. Moreover, changes in relative prices are not readily handled by this modeling approach alone. Input–output accounts are part and parcel of a more flexible form of modeling, computable general equilibrium models. Two additional difficulties are of interest in transportation work. There is the question of substituting one input for another, and there is the question about the stability of coefficients as production increases or decreases. These are intertwined questions. They have to do with the nature of regional production functions. Because the input–output model is fundamentally linear in nature, it lends itself to rapid computation as well as flexibility in computing the effects of changes in demand. Input–output models for different regions can also be linked together to investigate the effects of inter-regional trade, and additional columns can be added to the table to perform environmentally extended input–output analysis (EEIOA). For example, information on fossil fuel inputs to each sector can be used to investigate flows of embodied carbon within and between different economies. The structure of the input–output model has been incorporated into national accounting in many developed countries, and as such can be used to calculate important measures such as national GDP | https://en.wikipedia.org/wiki?curid=1036651 |