Business valuation The WACC must be applied to the subject company's net cash flow to total invested capital. One of the problems with this method is that the valuator may elect to calculate WACC according to the subject company's existing capital structure, the average industry capital structure, or the optimal capital structure. Such discretion detracts from the objectivity of this approach, in the minds of some critics. Indeed, since the WACC captures the risk of the subject business itself, the existing or contemplated capital structures, rather than industry averages, are the appropriate choices for business valuation. Once the capitalization rate or discount rate is determined, it must be applied to an appropriate economic income stream: pretax cash flow, after-tax cash flow, pretax net income, after-tax net income, excess earnings, projected cash flow, etc. The result of this formula is the indicated value before discounts. Before moving on to calculate discounts, however, the valuation professional must consider the indicated value under the asset and market approaches. Careful matching of the discount rate to the appropriate measure of economic income is critical to the accuracy of the business valuation results. Net cash flow is a frequent choice in professionally conducted business appraisals.
https://en.wikipedia.org/wiki?curid=1885799
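The following is a minimal illustrative sketch, not part of the source article: it computes a WACC under an assumed existing capital structure and then capitalizes a net cash flow to total invested capital with a standard single-period capitalization (cash flow divided by discount rate minus long-term growth). All figures and parameter names are hypothetical.

```python
# Hypothetical sketch: WACC under an assumed capital structure, then a
# single-period capitalization of net cash flow to invested capital.

def wacc(cost_of_equity, cost_of_debt, tax_rate, equity_value, debt_value):
    """Weighted average cost of capital with an after-tax cost of debt."""
    total = equity_value + debt_value
    return (equity_value / total) * cost_of_equity + \
           (debt_value / total) * cost_of_debt * (1.0 - tax_rate)

def capitalized_value(net_cash_flow, discount_rate, long_term_growth):
    """Value = cash flow / (discount rate - long-term growth)."""
    return net_cash_flow / (discount_rate - long_term_growth)

if __name__ == "__main__":
    r = wacc(cost_of_equity=0.18, cost_of_debt=0.07, tax_rate=0.25,
             equity_value=6_000_000, debt_value=4_000_000)
    value = capitalized_value(net_cash_flow=1_200_000, discount_rate=r,
                              long_term_growth=0.03)
    print(f"WACC: {r:.2%}, indicated value of invested capital: {value:,.0f}")
```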
Business valuation The rationale behind this choice is that this earnings basis corresponds to the equity discount rate derived from the Build-Up or CAPM models: the returns obtained from investments in publicly traded companies can easily be represented in terms of net cash flows. At the same time, the discount rates are generally also derived from the public capital markets data. The Build-Up Method is a widely recognized method of determining the after-tax net cash flow discount rate, which in turn yields the capitalization rate. The figures used in the Build-Up Method are derived from various sources. This method is called a "build-up" method because it is the sum of risks associated with various classes of assets. It is based on the principle that investors would require a greater return on classes of assets that are more risky. By adding the first three elements of a Build-Up discount rate, we can determine the rate of return that investors would require on their investments in small public company stocks. These three elements of the Build-Up discount rate are known collectively as the "systematic risks." This type of investment risk cannot be avoided through portfolio diversification. It arises from external factors and affects every type of investment in the economy. As a result, investors taking systematic risk are rewarded by an additional premium.
https://en.wikipedia.org/wiki?curid=1885799
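A brief hypothetical sketch of the build-up idea described above: the discount rate is the sum of a risk-free rate and premiums for progressively narrower classes of risk, and subtracting a long-term growth rate yields a capitalization rate. The figures below are placeholders, not published data.

```python
# Hypothetical build-up discount rate: sum of a risk-free rate plus premiums.

def build_up_discount_rate(risk_free, equity_risk_premium, size_premium,
                           industry_premium, company_specific_premium):
    # The first three terms approximate the return required on small public
    # company stocks (the systematic portion); the last two are unsystematic.
    return (risk_free + equity_risk_premium + size_premium
            + industry_premium + company_specific_premium)

rate = build_up_discount_rate(0.04, 0.06, 0.035, 0.01, 0.03)   # = 0.175
cap_rate = rate - 0.03   # capitalization rate = discount rate - long-term growth
print(f"discount rate {rate:.1%}, capitalization rate {cap_rate:.1%}")
```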
Business valuation In addition to systematic risks, the discount rate must include "unsystematic risk" representing that portion of total investment risk that can be avoided through diversification. Public capital markets do not provide evidence of unsystematic risk since investors that fail to diversify cannot expect additional returns. Unsystematic risk falls into two categories. Historically, no published data has been available to quantify specific company risks. However, as of late 2006, new research has been able to quantify, or isolate, this risk for publicly traded stocks through the use of Total Beta calculations. P. Butler and K. Pinkerton have outlined a procedure which sets the following two equations together: total cost of equity = risk-free rate + total beta × equity risk premium, and total cost of equity = risk-free rate + beta × equity risk premium + size premium + company-specific risk premium. The only unknown in the two equations is the company-specific risk premium. While it is possible to isolate the company-specific risk premium as shown above, many appraisers just key in on the total cost of equity (TCOE) provided by the first equation. It is similar to using the market approach in the income approach instead of adding separate (and potentially redundant) measures of risk in the build-up approach. The use of total beta (developed by Aswath Damodaran) is a relatively new concept. It is, however, gaining acceptance in the business valuation consultancy community since it is based on modern portfolio theory.
https://en.wikipedia.org/wiki?curid=1885799
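A sketch of the idea behind the two equations above, with entirely hypothetical inputs: compute the total cost of equity from total beta, then solve the second equation for the company-specific risk premium.

```python
# Hypothetical sketch of isolating a company-specific risk premium (CSRP)
# from total beta, following the two equations quoted above.

def total_cost_of_equity(risk_free, total_beta, equity_risk_premium):
    return risk_free + total_beta * equity_risk_premium

def company_specific_risk_premium(risk_free, beta, total_beta,
                                  equity_risk_premium, size_premium):
    tcoe = total_cost_of_equity(risk_free, total_beta, equity_risk_premium)
    # TCOE = risk-free + beta*ERP + size premium + CSRP  =>  solve for CSRP
    return tcoe - (risk_free + beta * equity_risk_premium + size_premium)

csrp = company_specific_risk_premium(risk_free=0.04, beta=1.1, total_beta=2.8,
                                     equity_risk_premium=0.06, size_premium=0.035)
print(f"implied company-specific risk premium: {csrp:.1%}")
```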
Business valuation Total beta can help appraisers develop a cost of capital, particularly those appraisers who were previously content to rely on intuition alone when adding a purely subjective company-specific risk premium in the build-up approach. It is important to understand why this capitalization rate for small, privately held companies is significantly higher than the return that an investor might expect to receive from other common types of investments, such as money market accounts, mutual funds, or even real estate. Those investments involve substantially lower levels of risk than an investment in a closely held company. Depository accounts are insured by the federal government (up to certain limits); mutual funds are composed of publicly traded stocks, for which risk can be substantially minimized through portfolio diversification. Closely held companies, on the other hand, frequently fail for a variety of reasons too numerous to name. Examples of the risk can be witnessed in the storefronts on every Main Street in America. There are no federal guarantees. The risk of investing in a private company cannot be reduced through diversification, and most businesses do not own the type of hard assets that can ensure capital appreciation over time. This is why investors demand a much higher return on their investment in closely held businesses; such investments are inherently much more risky. (This paragraph is biased, presuming that by the mere fact that a company is closely held, it is prone towards failure
https://en.wikipedia.org/wiki?curid=1885799
Business valuation ) In asset-based analysis the value of a business is equal to the sum of its parts. That is the theory underlying the asset-based approaches to business valuation. The asset approach to business valuation considers the assets reported on the books of the subject company at their acquisition value, net of depreciation where applicable. These values must be adjusted to fair market value wherever possible. The value of a company's intangible assets, such as goodwill, is generally impossible to determine apart from the company's overall enterprise value. For this reason, the asset-based approach is not the most probative method of determining the value of going business concerns. In these cases, the asset-based approach yields a result that is probably less than the fair market value of the business. In considering an asset-based approach, the valuation professional must consider whether the shareholder whose interest is being valued would have any authority to access the value of the assets directly. Shareholders own shares in a corporation, but not its assets, which are owned by the corporation. A controlling shareholder may have the authority to direct the corporation to sell all or part of the assets it owns and to distribute the proceeds to the shareholder(s). The non-controlling shareholder, however, lacks this authority and cannot access the value of the assets. As a result, the value of a corporation's assets is not the true indicator of value to a shareholder who cannot avail himself of that value.
https://en.wikipedia.org/wiki?curid=1885799
Business valuation The asset-based approach is the entry-barrier value; it should preferably be used for businesses with a mature or declining growth cycle and is more suitable for capital-intensive industries. Adjusted net book value may be the most relevant standard of value where liquidation is imminent or ongoing; where a company's earnings or cash flow are nominal, negative or worth less than its assets; or where net book value is standard in the industry in which the company operates. The adjusted net book value may also be used as a "sanity check" when compared to other methods of valuation, such as the income and market approaches. The market approach to business valuation is rooted in the economic principle of competition: that in a free market the supply and demand forces will drive the price of business assets to a certain equilibrium. Buyers will not pay more for the business, and sellers will not accept less, than the price of a comparable business enterprise. The buyers and sellers are assumed to be equally well informed and acting in their own interests to conclude a transaction. It is similar in many respects to the "comparable sales" method that is commonly used in real estate appraisal. The market price of the stocks of publicly traded companies engaged in the same or a similar line of business, whose shares are actively traded in a free and open market, can be a valid indicator of value when the transactions in which stocks are traded are sufficiently similar to permit meaningful comparison.
https://en.wikipedia.org/wiki?curid=1885799
Business valuation The difficulty lies in identifying public companies that are sufficiently comparable to the subject company for this purpose. Also, because the equity of a private company is less liquid (in other words, its stock is less easy to buy or sell) than that of a public company, its value is considered to be slightly lower than such a market-based valuation would give. When there is a lack of comparison with direct competition, a meaningful alternative could be a vertical value-chain approach where the subject company is compared with, for example, a known downstream industry to get a good feel for its value by building useful correlations with its downstream companies. Such comparison often reveals useful insights which help business analysts better understand the performance relationship between the subject company and its downstream industry. For example, if a growing subject company is in an industry more concentrated than its downstream industry with a high degree of interdependence, one should logically expect the subject company to perform better than the downstream industry in terms of growth, margins and risk. The Guideline Public Company method entails a comparison of the subject company to publicly traded companies. The comparison is generally based on published data regarding the public companies' stock price and earnings, sales, or revenues, which is expressed as a fraction known as a "multiple
https://en.wikipedia.org/wiki?curid=1885799
Business valuation " If the guideline public companies are sufficiently similar to each other and the subject company to permit a meaningful comparison, then their multiples should be similar. The public companies identified for comparison purposes should be similar to the subject company in terms of industry, product lines, market, growth, margins and risk. However, if the subject company is privately owned, its value must be adjusted for lack of marketability. This is usually represented by a discount, or a percentage reduction in the value of the company when compared to its publicly traded counterparts. This reflects the higher risk associated with holding stock in a private company. The difference in value can be quantified by applying a discount for lack of marketability. This discount is determined by studying prices paid for shares of ownership in private companies that eventually offer their stock in a public offering. Alternatively, the lack of marketability can be assessed by comparing the prices paid for restricted shares to fully marketable shares of stock of public companies. As above, in certain cases equity may be valued by applying the techniques and frameworks developed for financial options, via a real options framework. For general discussion as to context see § "Valuing flexibility" under corporate finance, , and Contingent claim valuation; for detail as to applicability and other considerations see § "Limitations" under real options valuation
https://en.wikipedia.org/wiki?curid=1885799
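A hypothetical sketch of the guideline public company method as described above: apply a multiple observed for comparable public companies to the subject company's earnings, then apply a discount for lack of marketability (DLOM) for a private subject. The comparables, earnings and discount are placeholders.

```python
# Hypothetical sketch: guideline public company multiple plus a DLOM.

import statistics

guideline_pe_multiples = [9.5, 11.0, 12.3, 10.1]   # placeholder comparables
subject_earnings = 2_000_000                        # placeholder earnings
dlom = 0.30                                         # placeholder discount

multiple = statistics.median(guideline_pe_multiples)
marketable_value = multiple * subject_earnings
value_after_dlom = marketable_value * (1.0 - dlom)
print(f"median multiple {multiple:.1f}, value after DLOM {value_after_dlom:,.0f}")
```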
Business valuation In general, equity may be viewed as a call option on the firm, and this allows for the valuation of troubled firms which may otherwise be difficult to analyse. The classic application of this approach is to the valuation of distressed securities, already discussed in the original Black-Scholes paper. Here, since the principle of limited liability protects equity investors, shareholders would choose not to repay the firm's debt where the value of the firm (as perceived) is less than the value of the outstanding debt; see bond valuation. Of course, where firm value is greater than debt value, the shareholders would choose to repay (i.e. exercise their option) and not to liquidate. Thus, analogous to out-of-the-money options which nevertheless have value, equity will (may) have value even if the value of the firm falls (well) below the face value of the outstanding debt—and this value can (should) be determined using the appropriate option valuation technique. See also Merton model. (A further application of this principle is the analysis of principal–agent problems; see contract design under principal–agent problem.) Certain business situations, and the parent firms in those cases, are also logically analysed under an options framework; see "Applications" under the Real options valuation references.
https://en.wikipedia.org/wiki?curid=1885799
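A minimal sketch of the option view of equity described above: equity is valued as a Black-Scholes call on the firm's assets, with the face value of debt as the strike. The inputs are hypothetical; in practice, asset value and asset volatility must be carefully estimated.

```python
# Hypothetical sketch: equity as a Black-Scholes call on firm value (Merton view).

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def equity_as_call(firm_value, debt_face_value, r, sigma, t):
    """Black-Scholes call: equity = V*N(d1) - D*exp(-r*t)*N(d2)."""
    d1 = (log(firm_value / debt_face_value) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return firm_value * norm_cdf(d1) - debt_face_value * exp(-r * t) * norm_cdf(d2)

# Even a firm worth less than its outstanding debt has positive equity value,
# analogous to an out-of-the-money option that has not yet expired.
print(equity_as_call(firm_value=80.0, debt_face_value=100.0, r=0.03, sigma=0.4, t=5.0))
```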
Business valuation Just as a financial option gives its owner the right, but not the obligation, to buy or sell a security at a given price, companies that make strategic investments have the right, but not the obligation, to exploit opportunities in the future; management will of course only exercise where this makes economic sense. Thus, for companies facing uncertainty of this type, the stock price may (should) be seen as the sum of the value of existing businesses (i.e., the discounted cash flow value) plus any real option value. Equity valuations here may (should) thus proceed likewise. Compare PVGO. A common application is to natural resource investments. Here, the underlying asset is the resource itself; the value of the asset is a function of both the quantity of resource available and the price of the commodity in question. The value of the resource is then the difference between the value of the asset and the cost associated with developing the resource. Where this is positive ("in the money"), management will undertake the development, and will not do so otherwise; a resource project is thus effectively a call option. A natural resource firm may (should) therefore also be analysed using the options approach. Specifically, the value of the firm comprises the value of already active projects determined via DCF valuation (or other standard techniques) and undeveloped reserves as analysed using the real options framework. See Mineral economics.
https://en.wikipedia.org/wiki?curid=1885799
Business valuation Product patents may also be valued as options, and the value of firms holding these patents—typically firms in the , , and sectors—can (should) similarly be viewed as the sum of the value of products in place and the portfolio of patents yet to be deployed. As regards the option analysis, since the patent provides the firm with the right to develop the product, it will do so only if the present value of the expected cash flows from the product exceeds the cost of development, and the patent rights thus correspond to a call option. See . Similar analysis may be applied to options on films (or other works of intellectual property) and the valuation of film studios. Besides mathematical approaches for the valuation of companies, a rather little-known method also includes the cultural aspect. The so-called Cultural valuation method (Cultural Due Diligence) seeks to combine existing knowledge, motivation and internal culture with the results of a net-asset-value method. Especially during a company takeover, uncovering hidden problems is of high importance for the later success of the business venture. The valuation approaches yield the fair market value of the Company as a whole. In valuing a minority, non-controlling interest in a business, however, the valuation professional must consider the applicability of discounts that affect such interests. Discussions of discounts and premiums frequently begin with a review of the "levels of value".
https://en.wikipedia.org/wiki?curid=1885799
Business valuation There are three common levels of value: controlling interest, marketable minority, and non-marketable minority. The intermediate level, marketable minority interest, is less than the controlling interest level and higher than the non-marketable minority interest level. The marketable minority interest level represents the perceived value of equity interests that are freely traded without any restrictions. These interests are generally traded on the New York Stock Exchange, AMEX, NASDAQ, and other exchanges where there is a ready market for equity securities. These values represent a minority interest in the subject companies – small blocks of stock that represent less than 50% of the company's equity, and usually much less than 50%. Controlling interest level is the value that an investor would be willing to pay to acquire more than 50% of a company's stock, thereby gaining the attendant prerogatives of control. Some of the prerogatives of control include: electing directors, hiring and firing the company's management and determining their compensation; declaring dividends and distributions, determining the company's strategy and line of business, and acquiring, selling or liquidating the business. This level of value generally contains a control premium over the intermediate level of value, which typically ranges from 25% to 50%. An additional premium may be paid by strategic investors who are motivated by synergistic motives
https://en.wikipedia.org/wiki?curid=1885799
Business valuation Non-marketable, minority level is the lowest level on the chart, representing the level at which non-controlling equity interests in private companies are generally valued or traded. This level of value is discounted because no ready market exists in which to purchase or sell interests. Private companies are less "liquid" than publicly traded companies, and transactions in private companies take longer and are more uncertain. Between the intermediate and lowest levels of the chart, there are restricted shares of publicly traded companies. Despite a growing inclination of the IRS and Tax Courts to challenge valuation discounts, Shannon Pratt recently suggested in a scholarly presentation that valuation discounts are actually increasing as the differences between public and private companies are widening. Publicly traded stocks have grown more liquid in the past decade due to rapid electronic trading, reduced commissions, and governmental deregulation. These developments have not improved the liquidity of interests in private companies, however. Valuation discounts are multiplicative, so they must be considered in order. Control premiums and their inverse, minority interest discounts, are considered before marketability discounts are applied. The first discount that must be considered is the discount for lack of control, which in this instance is also a minority interest discount.
https://en.wikipedia.org/wiki?curid=1885799
Business valuation Minority interest discounts are the inverse of control premiums, for which the following mathematical relationship holds: MID = 1 – [1 / (1 + CP)]. The most common source of data regarding control premiums is the Control Premium Study, published annually by Mergerstat since 1972. Mergerstat compiles data regarding publicly announced mergers, acquisitions and divestitures involving 10% or more of the equity interests in public companies, where the purchase price is $1 million or more and at least one of the parties to the transaction is a U.S. entity. Mergerstat defines the "control premium" as the percentage difference between the acquisition price and the share price of the freely traded public shares five days prior to the announcement of the M&A transaction. While it is not without valid criticism, Mergerstat control premium data (and the minority interest discount derived therefrom) is widely accepted within the valuation profession. A "discount for lack of marketability" (DLOM) may be applied to a minority block of stock to alter the valuation of that block. Another factor to be considered in valuing closely held companies is the marketability of an interest in such businesses. Marketability is defined as the ability to convert the business interest into cash quickly, with minimum transaction and administrative costs, and with a high degree of certainty as to the amount of net proceeds.
https://en.wikipedia.org/wiki?curid=1885799
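A short hypothetical sketch tying together the relationship above and the ordering described earlier: derive a minority interest discount from a control premium, then apply the discounts multiplicatively, lack of control first and lack of marketability second. The inputs are placeholders.

```python
# Hypothetical sketch: minority interest discount from a control premium,
# then multiplicative application of DLOC and DLOM.

def minority_interest_discount(control_premium):
    # MID = 1 - 1 / (1 + CP)
    return 1.0 - 1.0 / (1.0 + control_premium)

control_value = 10_000_000               # placeholder controlling-interest value
mid = minority_interest_discount(0.40)   # e.g. a 40% control premium
dlom = 0.35                              # placeholder marketability discount

marketable_minority_value = control_value * (1.0 - mid)
nonmarketable_minority_value = marketable_minority_value * (1.0 - dlom)
print(f"MID {mid:.1%}; non-marketable minority value {nonmarketable_minority_value:,.0f}")
```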
Business valuation There is usually a cost and a time lag associated with locating interested and capable buyers of interests in privately held companies, because there is no established market of readily available buyers and sellers. All other factors being equal, an interest in a publicly traded company is worth more because it is readily marketable. Conversely, an interest in a privately held company is worth less because no established market exists. "The IRS Valuation Guide for Income, Estate and Gift Taxes, Valuation Training for Appeals Officers" acknowledges the relationship between value and marketability, stating: "Investors prefer an asset which is easy to sell, that is, liquid." The discount for lack of control is separate and distinguishable from the discount for lack of marketability. It is the valuation professional's task to quantify the lack of marketability of an interest in a privately held company. Because, in this case, the subject interest is not a controlling interest in the Company, and the owner of that interest cannot compel liquidation to convert the subject interest to cash quickly, and no established market exists on which that interest could be sold, the discount for lack of marketability is appropriate. Several empirical studies have been published that attempt to quantify the discount for lack of marketability. These studies include the restricted stock studies and the pre-IPO studies. The aggregate of these studies indicates average discounts of 35% and 50%, respectively.
https://en.wikipedia.org/wiki?curid=1885799
Business valuation Some experts believe the Lack of Control and Marketability discounts can produce aggregate discounts of as much as ninety percent of a company's fair market value, particularly with family-owned companies. Restricted stocks are equity securities of public companies that are similar in all respects to the freely traded stocks of those companies except that they carry a restriction that prevents them from being traded on the open market for a certain period of time, which is usually one year (two years prior to 1990). This restriction from active trading, which amounts to a lack of marketability, is the only distinction between the restricted stock and its freely traded counterpart. Restricted stock can be traded in private transactions and usually does so at a discount. The restricted stock studies attempt to determine the difference in price at which the restricted shares trade versus the price at which the same unrestricted securities trade in the open market as of the same date. The underlying data by which these studies arrived at their conclusions has not been made public. Consequently, it is not possible when valuing a particular company to compare the characteristics of that company to the study data. Still, the existence of a marketability discount has been recognized by valuation professionals and the Courts, and the restricted stock studies are frequently cited as empirical evidence. Notably, the lowest average discount reported by these studies was 26% and the highest average discount was 40%.
https://en.wikipedia.org/wiki?curid=1885799
Business valuation In addition to the restricted stock studies, U.S. publicly traded companies are able to sell stock to offshore investors (SEC Regulation S, enacted in 1990) without registering the shares with the Securities and Exchange Commission. The offshore buyers may resell these shares in the United States, still without having to register the shares, after holding them for just 40 days. Typically, these shares are sold for 20% to 30% below the publicly traded share price. Some of these transactions have been reported with discounts of more than 30%, resulting from the lack of marketability. These discounts are similar to the marketability discounts inferred from the restricted and pre-IPO studies, despite the holding period being just 40 days. Studies based on the prices paid for options have also confirmed similar discounts. If one holds restricted stock and purchases an option to sell that stock at the market price (a put), the holder has, in effect, purchased marketability for the shares. The price of the put is equal to the marketability discount. The range of marketability discounts derived by this study was 32% to 49%. However, ascribing the entire value of a put option to marketability is misleading, because the primary source of put value comes from the downside price protection. A correct economic analysis would use deeply in-the-money puts or Single-stock futures, demonstrating that marketability of restricted stock is of low value because it is easy to hedge using unrestricted stock or futures trades
https://en.wikipedia.org/wiki?curid=1885799
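The following is a minimal sketch of the put-option view of marketability discussed above: the cost of an at-the-money put over the restriction period is read as the price of marketability, expressed as a fraction of the share price. The inputs are hypothetical, and, as the passage notes, attributing the full put value to marketability is contested.

```python
# Hypothetical sketch: an at-the-money Black-Scholes put as a rough proxy
# for the cost of marketability over a one-year restriction period.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def put_price(spot, strike, r, sigma, t):
    d1 = (log(spot / strike) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return strike * exp(-r * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

spot = 100.0
implied_discount = put_price(spot, strike=spot, r=0.03, sigma=0.45, t=1.0) / spot
print(f"put cost as a share of price: {implied_discount:.1%}")
```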
Business valuation Another approach to measuring the marketability discount is to compare the prices of stock offered in initial public offerings (IPOs) to transactions in the same company's stocks prior to the IPO. Companies that are going public are required to disclose all transactions in their stocks for a period of three years prior to the IPO. The pre-IPO studies are the leading alternative to the restricted stock studies in quantifying the marketability discount. The pre-IPO studies are sometimes criticized because the sample size is relatively small, the pre-IPO transactions may not be arm's length, and the financial structure and product lines of the studied companies may have changed during the three-year pre-IPO window. The studies confirm what the marketplace knows intuitively: Investors covet liquidity and loathe obstacles that impair liquidity. Prudent investors buy illiquid investments only when there is a sufficient discount in the price to increase the rate of return to a level which brings risk-reward back into balance. The referenced studies establish a reasonable range of valuation discounts from the mid-30%s to the low 50%s. The more recent studies appeared to yield a more conservative range of discounts than older studies, which may have suffered from smaller sample sizes. Another method of quantifying the lack of marketability discount is the Quantifying Marketability Discounts Model (QMDM).
https://en.wikipedia.org/wiki?curid=1885799
Business valuation The evidence on the market value of specific businesses varies widely, largely depending on reported market transactions in the equity of the firm. A fraction of businesses are "publicly traded," meaning that their equity can be purchased and sold by investors in stock markets available to the general public. Publicly traded companies on major stock markets have an easily calculated "market capitalization" that is a direct estimate of the market value of the firm's equity. Some publicly traded firms have relatively few recorded trades (including many firms traded "over the counter" or in "pink sheets"). A far larger number of firms are privately held. Normally, equity interests in these firms (which include corporations, partnerships, limited-liability companies, and some other organizational forms) are traded privately, and often irregularly. As a result, previous transactions provide limited evidence as to the current value of a private company primarily because business value changes over time, and the share price is associated with considerable uncertainty due to limited market exposure and high transaction costs. A number of stock market indicators in the United States and other countries provide an indication of the market value of publicly traded firms. The Survey of Consumer Finance in the US also includes an estimate of household ownership of stocks, including indirect ownership through mutual funds
https://en.wikipedia.org/wiki?curid=1885799
Business valuation The 2004 and 2007 SCF indicate a growing trend in stock ownership, with 51% of households indicating a direct or indirect ownership of stocks, with the majority of those respondents indicating indirect ownership through mutual funds. Few indications are available on the value of privately held firms. Anderson (2009) recently estimated the market value of U.S. privately held and publicly traded firms, using Internal Revenue Service and SCF data. He estimates that privately held firms produced more income for investors, and had more value than publicly held firms, in 2004.
https://en.wikipedia.org/wiki?curid=1885799
Exponential smoothing is a rule of thumb technique for smoothing time series data using the exponential window function. Whereas in the simple moving average the past observations are weighted equally, exponential functions are used to assign exponentially decreasing weights over time. It is an easily learned and easily applied procedure for making some determination based on prior assumptions by the user, such as seasonality. Exponential smoothing is often used for analysis of time-series data. It is one of many window functions commonly applied to smooth data in signal processing, acting as low-pass filters to remove high-frequency noise. This method is preceded by Poisson's use of recursive exponential window functions in convolutions from the 19th century, as well as Kolmogorov and Zurbenko's use of recursive moving averages from their studies of turbulence in the 1940s. The raw data sequence is often represented by {x_t} beginning at time t = 0, and the output of the exponential smoothing algorithm is commonly written as {s_t}, which may be regarded as a best estimate of what the next value of x will be. When the sequence of observations begins at time t = 0, the simplest form of exponential smoothing is given by the formulas: s_0 = x_0 and s_t = α·x_t + (1 − α)·s_{t−1} for t > 0, where α is the "smoothing factor", and 0 < α < 1. The use of the exponential window function is first attributed to Poisson as an extension of a numerical analysis technique from the 17th century, and later adopted by the signal processing community in the 1940s.
https://en.wikipedia.org/wiki?curid=1890727
Exponential smoothing Here, exponential smoothing is the application of the exponential, or Poisson, window function. Exponential smoothing was first suggested in the statistical literature without citation to previous work by Robert Goodell Brown in 1956, and then expanded by Charles C. Holt in 1957. The formulation below, which is the one commonly used, is attributed to Brown and is known as “Brown’s simple exponential smoothing”. All the methods of Holt, Winters and Brown may be seen as a simple application of recursive filtering, first found in the 1940s to convert FIR filters to IIR filters. The simplest form of exponential smoothing is given by the formula s_t = α·x_t + (1 − α)·s_{t−1}, where "α" is the "smoothing factor", and 0 < "α" < 1. In other words, the smoothed statistic s_t is a simple weighted average of the current observation x_t and the previous smoothed statistic s_{t−1}. The term "smoothing factor" applied to "α" here is something of a misnomer, as larger values of "α" actually reduce the level of smoothing, and in the limiting case with "α" = 1 the output series is just the current observation. Simple exponential smoothing is easily applied, and it produces a smoothed statistic as soon as two observations are available. Values of "α" close to one have less of a smoothing effect and give greater weight to recent changes in the data, while values of "α" closer to zero have a greater smoothing effect and are less responsive to recent changes. There is no formally correct procedure for choosing "α".
https://en.wikipedia.org/wiki?curid=1890727
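A minimal implementation of the simple exponential smoothing recursion given above, s_0 = x_0 and s_t = α·x_t + (1 − α)·s_{t−1}. The data are made up.

```python
# Simple exponential smoothing of a short made-up series.

def simple_exponential_smoothing(x, alpha):
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must be in (0, 1)")
    s = [x[0]]                       # initialize the smoothed series with x_0
    for value in x[1:]:
        s.append(alpha * value + (1.0 - alpha) * s[-1])
    return s

data = [3.0, 10.0, 12.0, 13.0, 12.0, 10.0, 12.0]
print(simple_exponential_smoothing(data, alpha=0.5))
```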
Exponential smoothing Sometimes the statistician's judgment is used to choose an appropriate factor. Alternatively, a statistical technique may be used to "optimize" the value of "α". For example, the method of least squares might be used to determine the value of "α" for which the sum of the squared one-step-ahead forecast errors (x_t − s_{t−1})² is minimized. Unlike some other smoothing methods, such as the simple moving average, this technique does not require any minimum number of observations to be made before it begins to produce results. In practice, however, a “good average” will not be achieved until several samples have been averaged together; for example, a constant signal will take approximately "3"/"α" stages to reach 95% of the actual value. To accurately reconstruct the original signal without information loss all stages of the exponential moving average must also be available, because older samples decay in weight exponentially. This is in contrast to a simple moving average, in which some samples can be skipped without as much loss of information due to the constant weighting of samples within the average. If a known number of samples will be missed, one can adjust a weighted average for this as well, by giving equal weight to the new sample and all those to be skipped. This simple form of exponential smoothing is also known as an exponentially weighted moving average (EWMA). Technically it can also be classified as an autoregressive integrated moving average (ARIMA) (0,1,1) model with no constant term.
https://en.wikipedia.org/wiki?curid=1890727
Exponential smoothing The time constant of an exponential moving average is the amount of time for the smoothed response of a unit step function to reach 1 − 1/e ≈ 63.2% of the original signal. The relationship between this time constant, τ, and the smoothing factor, α, is given by the formula α = 1 − e^(−ΔT/τ), where ΔT is the sampling time interval of the discrete time implementation. If the sampling time is fast compared to the time constant (ΔT ≪ τ), then α ≈ ΔT/τ. Note that in the definition above, s_0 is being initialized to x_0. Because exponential smoothing requires that at each stage we have the previous forecast, it is not obvious how to get the method started. We could assume that the initial forecast is equal to the initial value of demand; however, this approach has a serious drawback. Exponential smoothing puts substantial weight on past observations, so the initial value of demand will have an unreasonably large effect on early forecasts. This problem can be overcome by allowing the process to evolve for a reasonable number of periods (10 or more) and using the average of the demand during those periods as the initial forecast. There are many other ways of setting this initial value, but it is important to note that the smaller the value of "α", the more sensitive your forecast will be to the selection of this initial smoother value s_0. For every exponential smoothing method we also need to choose the value for the smoothing parameters.
https://en.wikipedia.org/wiki?curid=1890727
Exponential smoothing For simple exponential smoothing, there is only one smoothing parameter ("α"), but for the methods that follow there is usually more than one smoothing parameter. There are cases where the smoothing parameters may be chosen in a subjective manner — the forecaster specifies the value of the smoothing parameters based on previous experience. However, a more robust and objective way to obtain values for the unknown parameters included in any exponential smoothing method is to estimate them from the observed data. The unknown parameters and the initial values for any exponential smoothing method can be estimated by minimizing the sum of squared errors (SSE). The errors are specified as e_t = x_t − s_{t−1} for t = 1, ..., T (the one-step-ahead within-sample forecast errors). Hence we find the values of the unknown parameters and the initial values that minimize SSE = Σ_{t=1}^{T} e_t². Unlike the regression case (where we have formulae to directly compute the regression coefficients which minimize the SSE) this involves a non-linear minimization problem and we need to use an optimization tool to perform this. The name 'exponential smoothing' is attributed to the use of the exponential window function during convolution. It is no longer attributed to Holt, Winters & Brown.
https://en.wikipedia.org/wiki?curid=1890727
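A sketch of choosing α by minimizing the sum of squared one-step-ahead forecast errors, as described above. A coarse grid search is used here purely for clarity; in practice a numerical optimizer would be used, and the initial level could be estimated jointly with α. The data are made up.

```python
# Hypothetical sketch: pick alpha by minimizing the SSE of one-step-ahead errors.

def sse(x, alpha):
    s = x[0]                           # initial forecast taken as the first value
    total = 0.0
    for value in x[1:]:
        total += (value - s) ** 2      # e_t = x_t - s_{t-1}
        s = alpha * value + (1.0 - alpha) * s
    return total

data = [3.0, 10.0, 12.0, 13.0, 12.0, 10.0, 12.0, 14.0, 11.0]
best_alpha = min((a / 100.0 for a in range(1, 100)), key=lambda a: sse(data, a))
print(f"alpha minimizing SSE on this sample: {best_alpha:.2f}")
```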
Exponential smoothing By direct substitution of the defining equation for simple exponential smoothing back into itself we find that s_t = α·x_t + α(1 − α)·x_{t−1} + α(1 − α)²·x_{t−2} + ... + α(1 − α)^{t−1}·x_1 + (1 − α)^t·x_0. In other words, as time passes the smoothed statistic s_t becomes the weighted average of a greater and greater number of the past observations x, and the weights assigned to previous observations are in general proportional to the terms of the geometric progression {1, (1 − α), (1 − α)², (1 − α)³, ...}. A geometric progression is the discrete version of an exponential function, so this is where the name for this smoothing method originated according to statistics lore. Exponential smoothing and moving average have similar defects of introducing a lag relative to the input data. While this can be corrected by shifting the result by half the window length for a symmetrical kernel, such as a moving average or Gaussian, it is unclear how appropriate this would be for exponential smoothing. They also both have roughly the same distribution of forecast error when "α" = 2/("k"+1). They differ in that exponential smoothing takes into account all past data, whereas moving average only takes into account "k" past data points. Computationally speaking, they also differ in that moving average requires that the past "k" data points, or the data point at lag "k"+1 plus the most recent forecast value, be kept, whereas exponential smoothing only needs the most recent forecast value to be kept.
https://en.wikipedia.org/wiki?curid=1890727
Exponential smoothing In the signal processing literature, the use of non-causal (symmetric) filters is commonplace, and the exponential window function is broadly used in this fashion, but a different terminology is used: exponential smoothing is equivalent to a first-order Infinite Impulse Response or IIR filter and moving average is equivalent to a Finite Impulse Response filter with equal weighting factors. Simple exponential smoothing does not do well when there is a trend in the data, which is inconvenient. In such situations, several methods were devised under the name "double exponential smoothing" or "second-order exponential smoothing," which is the recursive application of an exponential filter twice, thus being termed "double exponential smoothing". This nomenclature is similar to quadruple exponential smoothing, which also references its recursion depth. The basic idea behind double exponential smoothing is to introduce a term to take into account the possibility of a series exhibiting some form of trend. This slope component is itself updated via exponential smoothing. One method, sometimes referred to as "Holt-Winters double exponential smoothing", works as follows: Again, the raw data sequence of observations is represented by {x_t}, beginning at time t = 0. We use s_t to represent the smoothed value for time t, and b_t is our best estimate of the trend at time t. The output of the algorithm is now written as F_{t+m}, an estimate of the value of x at time t + m, for m > 0, based on the raw data up to time t.
https://en.wikipedia.org/wiki?curid=1890727
Exponential smoothing Double exponential smoothing is given by the formulas s_1 = x_1 and b_1 = x_1 − x_0, and for t > 1 by s_t = α·x_t + (1 − α)(s_{t−1} + b_{t−1}) and b_t = β(s_t − s_{t−1}) + (1 − β)·b_{t−1}, where α is the "data smoothing factor", 0 < α < 1, and β is the "trend smoothing factor", 0 < β < 1. To forecast beyond x_t, use F_{t+m} = s_t + m·b_t. Setting the initial value b_1 is a matter of preference. An option other than the one listed above is (x_n − x_0)/n for some n > 1. Note that F_1 is undefined (there is no estimation for time 0), and according to the definition F_2 = s_1 + b_1, which is well defined, thus further values can be evaluated. A second method, referred to as either Brown's linear exponential smoothing (LES) or Brown's double exponential smoothing, works as follows: two smoothing passes, s'_t = α·x_t + (1 − α)·s'_{t−1} and s''_t = α·s'_t + (1 − α)·s''_{t−1}, are combined into the forecast F_{t+m} = a_t + m·b_t, where a_t, the estimated level at time t, and b_t, the estimated trend at time t, are a_t = 2·s'_t − s''_t and b_t = (α/(1 − α))·(s'_t − s''_t). Triple exponential smoothing applies exponential smoothing three times, which is commonly used when there are three high frequency signals to be removed from a time series under study. There are different types of seasonality: 'multiplicative' and 'additive' in nature, much like addition and multiplication are basic operations in mathematics. If every month of December we sell 10,000 more apartments than we do in November the seasonality is "additive" in nature. However, if we sell 10% more apartments in the summer months than we do in the winter months the seasonality is "multiplicative" in nature. Multiplicative seasonality can be represented as a constant factor, not an absolute amount.
https://en.wikipedia.org/wiki?curid=1890727
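A minimal sketch of Holt's double exponential smoothing as reconstructed above: a level s_t and a trend b_t, each updated with its own smoothing factor, with forecasts F_{t+m} = s_t + m·b_t. The data are made up.

```python
# Double exponential smoothing (Holt's method) on a made-up trending series.

def double_exponential_smoothing(x, alpha, beta, m=1):
    s, b = x[1], x[1] - x[0]            # initialize level and trend
    for value in x[2:]:
        prev_s = s
        s = alpha * value + (1.0 - alpha) * (s + b)       # level update
        b = beta * (s - prev_s) + (1.0 - beta) * b        # trend update
    return s + m * b                    # m-step-ahead forecast from the last point

data = [30.0, 32.0, 35.0, 37.0, 41.0, 44.0, 48.0]
print(double_exponential_smoothing(data, alpha=0.6, beta=0.3, m=2))
```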
Exponential smoothing Triple exponential smoothing was first suggested by Holt's student, Peter Winters, in 1960 after reading a signal processing book from the 1940s on exponential smoothing. Holt's novel idea was to repeat filtering an odd number of times greater than 1 and less than 5, which was popular with scholars of previous eras. While recursive filtering had been used previously, it was applied twice and four times to coincide with the Hadamard conjecture, while triple application required more than double the operations of singular convolution. The use of a triple application is considered a rule of thumb technique, rather than one based on theoretical foundations and has often been over-emphasized by practitioners. Suppose we have a sequence of observations {x_t}, beginning at time t = 0 with a cycle of seasonal change of length L. The method calculates a trend line for the data as well as seasonal indices that weight the values in the trend line based on where that time point falls in the cycle of length L. The output of the algorithm is again written as F_{t+m}, an estimate of the value of x at time t + m, m > 0, based on the raw data up to time t. Triple exponential smoothing with multiplicative seasonality is given by the formulas s_0 = x_0, s_t = α·(x_t/c_{t−L}) + (1 − α)(s_{t−1} + b_{t−1}), b_t = β(s_t − s_{t−1}) + (1 − β)·b_{t−1}, c_t = γ·(x_t/s_t) + (1 − γ)·c_{t−L}, and F_{t+m} = (s_t + m·b_t)·c_{t−L+1+((m−1) mod L)}, where α is the "data smoothing factor", 0 < α < 1, β is the "trend smoothing factor", 0 < β < 1, and γ is the "seasonal change smoothing factor", 0 < γ < 1. The general formula for the initial trend estimate b_0 is b_0 = (1/L)·[(x_L − x_0)/L + (x_{L+1} − x_1)/L + ... + (x_{2L−1} − x_{L−1})/L]. Setting the initial estimates for the seasonal indices c_i for i = 1,2
https://en.wikipedia.org/wiki?curid=1890727
Exponential smoothing ..,L is a bit more involved. If N is the number of complete cycles present in your data, then c_i = (1/N)·Σ_{j=1}^{N} (x_{L(j−1)+i} / A_j) for i = 1, ..., L, where A_j = (1/L)·Σ_{i=1}^{L} x_{L(j−1)+i}. Note that A_j is the average value of x in the j-th cycle of your data. Triple exponential smoothing with additive seasonality is given by: s_0 = x_0, s_t = α(x_t − c_{t−L}) + (1 − α)(s_{t−1} + b_{t−1}), b_t = β(s_t − s_{t−1}) + (1 − β)·b_{t−1}, c_t = γ(x_t − s_{t−1} − b_{t−1}) + (1 − γ)·c_{t−L}, and F_{t+m} = s_t + m·b_t + c_{t−L+1+((m−1) mod L)}.
https://en.wikipedia.org/wiki?curid=1890727
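A compact sketch of triple exponential smoothing (Holt-Winters) with additive seasonality, following the recursions reconstructed above. The initialization is deliberately simplified (first-cycle averages rather than the full formulas), and the data and season length are made up.

```python
# Triple exponential smoothing (additive seasonality) with simplified initialization.

def holt_winters_additive(x, L, alpha, beta, gamma, m=1):
    s = sum(x[:L]) / L                          # initial level: first-cycle mean
    b = (sum(x[L:2 * L]) - sum(x[:L])) / L**2   # simple initial trend estimate
    c = [x[i] - s for i in range(L)]            # initial additive seasonal indices
    for t in range(L, len(x)):
        prev_s, prev_b = s, b
        s = alpha * (x[t] - c[t % L]) + (1.0 - alpha) * (prev_s + prev_b)
        b = beta * (s - prev_s) + (1.0 - beta) * prev_b
        c[t % L] = gamma * (x[t] - prev_s - prev_b) + (1.0 - gamma) * c[t % L]
    return s + m * b + c[(len(x) + m - 1) % L]  # m-step-ahead forecast

data = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]   # season length 4
print(holt_winters_additive(data, L=4, alpha=0.5, beta=0.3, gamma=0.4, m=1))
```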
NOPLAT Net operating profit less adjusted taxes (NOPLAT) refers to after-tax EBIT adjusted for deferred taxes, or NOPAT + net increase in deferred taxes. It represents the profits generated from a company's core operations after subtracting the income taxes related to the core operations and adding back in taxes that the company had overpaid during the accounting period. It excludes income from non-operating assets or financing such as interest and includes only profits generated by invested capital. NOPLAT is the profit available to all stakeholders, including providers of debt, equity and other financing, and to shareholders. It is distinguished from net income, which is the profit available to equity holders only. NOPLAT is often used as an input in creating discounted cash flow valuation models. It is used in preference to net income as it removes the effects of capital structure (debt vs. equity). NOPLAT minus the monetary cost of all capital (both equity and debt) equals economic profit, which is quite similar to the trademarked EVA model. Though an analyst should make thorough adjustments to account for amortization, intertemporal tax differences, taxes on nonoperating income, and other adjustments, sometimes the following simple back-of-the-envelope formula is employed to show de-levered profits by removing the effects of a debt tax shield: Operating earnings = After-tax operating profit + (Interest paid * (1 - tax rate))
https://en.wikipedia.org/wiki?curid=1894325
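A small numeric sketch of the back-of-the-envelope de-levering formula quoted above: after-tax interest is added back to after-tax profit to remove the effect of the debt tax shield. The figures are hypothetical.

```python
# Hypothetical sketch of the de-levered operating earnings formula.

def delevered_operating_earnings(after_tax_profit, interest_paid, tax_rate):
    return after_tax_profit + interest_paid * (1.0 - tax_rate)

print(delevered_operating_earnings(after_tax_profit=750_000,
                                   interest_paid=120_000,
                                   tax_rate=0.25))   # -> 840000.0
```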
Béla Balassa Béla Alexander Balassa (6 April 1928 – 10 May 1991) was a Hungarian economist and professor at Johns Hopkins University and a consultant for the World Bank. Balassa is best known for his work on the relationship between purchasing power parity and cross-country productivity differences (the Balassa–Samuelson effect). He is also known for his work on revealed comparative advantage. Balassa received a law degree from the University of Budapest. He left Hungary after the Hungarian Revolution of 1956 and went to Austria. While there, he received a grant from the Rockefeller Foundation to study at Yale University, where he received M.A. and Ph.D. degrees in economics in 1958 and 1959, respectively. He won the John Addison Porter Prize for 1959. Balassa also did extensive consulting work for the World Bank, serving as an advisor about development and trade policy. According to an authoritative history of the Bank, Balassa was "a protagonist of the Bank's conceptual transformation in the trade-policy area during the 1970s." Beyond economics, Balassa was a noted gourmet who compiled and periodically updated an unofficial guide to eating well in Paris while remaining within an international agency expense allowance, which circulated among his friends and colleagues.
https://en.wikipedia.org/wiki?curid=1894340
External sector The external sector is the portion of a country's economy that interacts with the economies of other countries. In the goods market, the external sector involves exports and imports. In the financial market it involves capital flows. Economic features related to the external sector include:
https://en.wikipedia.org/wiki?curid=1894631
FDI stock is the value of the share of capital and reserves (including retained profits) attributable to the parent enterprise, plus the net indebtedness of affiliates to the parent enterprise. Inward stock is the value of the capital and reserves in the economy attributable to a parent enterprise resident in a different economy. Outward stock is the value of capital and reserves in another economy attributable to a parent enterprise resident in the economy.
https://en.wikipedia.org/wiki?curid=1895031
SELIC The Sistema Especial de Liquidação e Custodia (SELIC) "(Special Clearance and Escrow System)" is the Brazilian Central Bank's system for performing open market operations in execution of monetary policy. The rate is the Bank's overnight rate.
https://en.wikipedia.org/wiki?curid=1896449
Bid price A bid price is the highest price that a buyer (i.e., bidder) is willing to pay for a good. It is usually referred to simply as the "bid". In bid and ask, the bid price stands in contrast to the ask price or "offer", and the difference between the two is called the bid–ask spread. An unsolicited bid or purchase offer is when a person or company receives a bid even though they are not looking to sell. A bidding war is said to occur when a large number of bids are placed in rapid succession by two or more entities, especially when the price paid is much greater than the ask price, or greater than the first bid in the case of unsolicited bidding. In other words, a bidding war is a situation where two or more buyers are so interested in an item (such as a house or a business) that they make increasingly higher offers of the price they are willing to pay to try to become the new owner of the item. In the context of stock trading on a stock exchange, the bid price is the highest price a buyer of a stock is willing to pay for a share of that given stock. The bid price displayed in most quote services is the highest bid price in the market. The ask or offer price, on the other hand, is the lowest price at which a seller of a particular stock is willing to sell a share of that given stock. The ask or offer price displayed is the lowest ask/offer price in the market (Stock market).
https://en.wikipedia.org/wiki?curid=1904366
New Public Management (NPM) is an approach to running public service organizations that is used in government and public service institutions and agencies, at both sub-national and national levels. The term was first introduced by academics in the UK and Australia to describe approaches that were developed during the 1980s as part of an effort to make the public service more "businesslike" and to improve its efficiency by using private sector management models. As with the private sector, which focuses on "customer service", NPM reforms often focused on the "centrality of citizens who were the recipient of the services or customers to the public sector". NPM reformers experimented with using decentralized service delivery models, to give local agencies more freedom in how they delivered programs or services. In some cases, NPM reforms that used e-government consolidated a program or service to a central location to reduce costs. Some governments tried using quasi-market structures, so that the public sector would have to compete against the private sector (notably in the UK, in health care). Key themes in NPM were "financial control, value for money, increasing efficiency ..., identifying and setting targets and continuance monitoring of performance, handing over ... power to the senior management" executives. Performance was assessed with audits, benchmarks and performance evaluations. Some NPM reforms used private sector companies to deliver what were formerly public services
https://en.wikipedia.org/wiki?curid=1906130
New Public Management NPM advocates in some countries worked to remove "collective agreements [in favour of] ... individual rewards packages at senior levels combined with short term contracts" and introduce private sector-style corporate governance, including using a Board of Directors approach to strategic guidance for public organizations. While NPM approaches have been used in many countries around the world, NPM is particularly associated with the most industrialized OECD nations such as the United Kingdom, Australia and the United States of America. NPM advocates focus on using approaches from the private sector – the corporate or business world–which can be successfully applied in the public sector and in a public administration context. NPM approaches have been used to reform the public sector, its policies and its programs. NPM advocates claim that it is a more efficient and effective means of attaining the same outcome. In NPM, citizens are viewed as "customers" and public servants are viewed as public managers. NPM tries to realign the relationship between public service managers and their political superiors by making a parallel relationship between the two. Under NPM, public managers have incentive-based motivation such as pay-for-performance, and clear performance targets are often set, which are assessed by using performance evaluations. As well, managers in an NPM paradigm may have greater discretion and freedom as to how they go about achieving the goals set for them
https://en.wikipedia.org/wiki?curid=1906130
New Public Management This NPM approach is contrasted with the traditional public administration model, in which institutional decision-making, policy-making and public service delivery is guided by regulations, legislation and administrative procedures. NPM reforms use approaches such as disaggregation, customer satisfaction initiatives, customer service efforts, applying an entrepreneurial spirit to public service, and introducing innovations. The NPM system allows "the expert manager to have a greater discretion". "Public Managers under the reforms can provide a range of choices from which customers can choose, including the right to opt out of the service delivery system completely". The first practices of NPM emerged in the United Kingdom under the leadership of Prime Minister Margaret Thatcher. Thatcher played the functional role of "policy entrepreneur" and the official role of prime minister. Thatcher drove changes in public management policy in such areas as organizational methods, civil service, labor relations, expenditure planning, financial management, audit, evaluation, and procurement. Thatcher's successor, John Major, kept public management policy on the agenda of the Conservative government, leading to the implementation of the Next Steps Initiative. Major also launched the programs of the Citizens Charter Initiative, Competing for Quality, Resource Accounting and Budgeting, and the Private Finance Initiative.
https://en.wikipedia.org/wiki?curid=1906130
New Public Management The term was coined in the late 1980s to denote a new (or renewed) focus on the importance of management and ‘production engineering’ in public service delivery, often linked to doctrines of economic rationalism (Hood 1989, Pollitt 1993). During this timeframe public management became an active area of policy-making in numerous other countries, notably in New Zealand, Australia, and Sweden. At the same time, the Organisation for Economic Co-operation and Development (OECD) established its Public Management Committee and Secretariat (PUMA), conferring on public management the status normally accorded more conventional domains of policy. In the 1990s, public management was a major item on President Clinton’s agenda. Early policy actions of the Clinton administration included launching the National Partnership and signing into law the Government Performance and Results Act. Currently there are few indications that public management issues will vanish from governmental policy agendas. A recent study showed that in Italy, municipal directors are aware of a public administration now being oriented toward new public management where they are assessed according to the results they produce. The term (NPM) expresses the idea that the cumulative flow of policy decisions over the past twenty years has amounted to a substantial shift in the governance and management of the “state sector” in the United Kingdom, New Zealand, Australia, Scandinavia, North America, and Latin America.
https://en.wikipedia.org/wiki?curid=1906130
New Public Management For instance, regional innovation agencies were created under NPM principles to support the innovation process. A benign interpretation is that these decisions have been a defensible, if imperfect, response to policy problems. Those problems as well as their solutions were formulated within the policy-making process. The agenda-setting process has been heavily influenced by electoral commitments to improve macro-economic performance and to contain growth in the public sector, as well as by a growing perception of public bureaucracies as being inefficient. The alternative-generation process has been heavily influenced by ideas coming from economics and from various quarters within the field of management. Although the origins of NPM came from Westernized countries, it expanded to a variety of other countries in the 1990s. Before the 1990s, NPM was largely associated with developed, particularly Anglo-Saxon, countries. However, the 1990s saw countries in Africa, Asia and elsewhere looking into using this method. In Africa, downsizing and reductions in user fees have been widely introduced, and autonomous agencies within the public sector have been established in these areas. Performance contracting became a common policy in crisis states worldwide. Contracting out of this magnitude can be used for things such as waste management, cleaning, laundry, catering and road maintenance. NPM was accepted as the "gold standard for administrative reform" in the 1990s.
https://en.wikipedia.org/wiki?curid=1906130
New Public Management The idea behind using this method for government reform was that if government were guided by private-sector principles rather than rigid hierarchical bureaucracy, it would work more efficiently. NPM promotes a shift from bureaucratic administration to business-like professional management. NPM was cited as the solution for management ills in various organizational contexts and in policy making in education and health care reform. The basic principles of NPM can best be described when split into seven different aspects elaborated by Christopher Hood in 1991. Hood also invented the term NPM itself. They are the following: Because of its belief in the importance and strength of privatizing government, NPM places an emphasis on hands-on, professional management. This theory allows leaders the freedom to manage and opens up discretion. It is critical to maintain explicit standards and measures of performance in a workforce. Utilizing this strategy advances clarification of goals, targets, and indicators of progress. The third point acknowledges the "shift from the use of input controls and bureaucratic procedures to rules relying on output controls measured by quantitative performance indicators". This aspect requires using performance-based assessments when looking to outsource work to private companies/groups.
https://en.wikipedia.org/wiki?curid=1906130
New Public Management NPM advocates frequently moved from a unified management system to a decentralized one in which managers gain flexibility and are not constrained by agency restrictions. Another characteristic centers on how NPM can promote competition within the public sector, which may in turn lower costs, reduce disputes and achieve a better quality of work through term contracts. Competition can also be found when the government offers contracts to the private sector: a contract is awarded on the basis of the ability to deliver the service effectively and the quality of the goods provided, and the private bidders that did not win the contract will strive to improve their quality and capacity, thereby encouraging further competition. Another aspect centers on the need to establish short-term labor contracts, develop corporate or business plans, performance agreements and mission statements. It also centers on establishing a working environment in which public employees or contractors are aware of the goals and intentions that agencies are trying to reach. The most effective aspect, which has led to NPM's rise to global popularity, focuses on keeping costs low and efficiency high
https://en.wikipedia.org/wiki?curid=1906130
New Public Management "Doing more with less" moreover cost reduction stimulates efficiency and is one way which makes it different from the traditional of management approach draws practices from the private sector and uses them in the public sector of management. NPM reforms use market forces to hold the public sector accountable and the satisfaction of preferences as the measures of accountability. In order for this system to proceed, certain conditions, such as the existence of competition, must exist and information about choices must be available. Reforms that promise to reinvent government by way of focusing on results and customer satisfaction as opposed to administrative and political processes fail to account for legislative self-interest. Institutions other than federal government, the changes being trumpeted as reinvention would not even be announced, except perhaps on hallway bulletin boards. There are blurred lines between policymaking and providing services in the system. Questions have been raised about the potential politicization of the public service, when executives are hired on contract under pay-for-performance systems. The ability for citizens to effectively choose the appropriate government services they need has also been challenged. "The notion of choice is essential to the economic concept of a customer. Generally, in government there are few if any choices." There are concerns that public managers move away from trying to meet citizens' needs and limitations on accountability to the public
https://en.wikipedia.org/wiki?curid=1906130
New Public Management NPM calls integrity and compliance into question when dealing with incentives for public managers: the interests of customers and owners do not always align. Questions arise as to whether managers become more or less faithful to the public interest. The public interest is put at risk, which could undermine trust in government. "Government must be accountable to the larger public interest, not only to individual immediate customers or consumers [of government services.]" Although NPM had a dramatic impact on managing and policymaking in the 1990s, many scholars believe that NPM has passed its prime. Scholars like Patrick Dunleavy believe NPM is phasing out because of a disconnect with “customers” and their institutions. Scholars cite the digital era and the new importance of technology as removing the need for NPM. In less industrialized countries the NPM concept is still growing and spreading. This trend has much to do with a country's ability or inability to bring its public sector in tune with the digital era. NPM was introduced in the public sector to create change based on disaggregation, competition, and incentives. The use of incentives to produce maximum service from an organization has largely stalled in many countries and is being reversed because of increased complexity. Post-NPM, many countries have explored digital era governance (DEG). Dunleavy believes this new way of governance should be heavily centered upon information and technology, which will help re-integrate services as part of the shift to digitalization
https://en.wikipedia.org/wiki?curid=1906130
New Public Management Digital era governance provides a unique opportunity for self-sustainability; however, various factors will determine whether or not DEG can be implemented successfully. When countries have the proper technology, NPM simply cannot compete very well with DEG. DEG does an excellent job of making services more accurate and prompt and of removing most barriers and conflicts. DEG can also improve service quality and provide local access to outsourcers. The New Public Service is a newly developed theory for 21st-century citizen-focused public administration. This work directly challenges the clientelism and rationalist paradigm of the New Public Management. New Public Service (NPS) focuses on democratic governance and re-imagining the accountability of public administrators toward citizens. NPS posits that administrators should be brokers between citizens and their government, focusing on citizen engagement in political and administrative issues. NPM is often mistakenly compared to New Public Administration. The ‘New Public Administration’ movement was established in the USA during the late 1960s and early 1970s. Though there may be some common features, the central themes of the two movements are different. The main thrust of the New Public Administration movement was to bring academic public administration into line with an anti-hierarchical egalitarian movement that was influential on US university campuses and among public sector workers
https://en.wikipedia.org/wiki?curid=1906130
New Public Management By contrast, the emphasis of the NPM movement a decade or so later was firmly managerial, in a normative sense, in that it stressed the difference that management could and should make to the quality and efficiency of public services. It focuses on public service production functions and operational issues, in contrast with the focus on public accountability, ‘model employer’ public service values, ‘due process,’ and what happens inside public organizations that characterizes conventional public administration. The table below gives a side-by-side comparison of the two systems' core aspects and characteristics
https://en.wikipedia.org/wiki?curid=1906130
The McDonaldization of Society is a 1993 book by sociologist George Ritzer. Ritzer suggests that in the later part of the 20th century the socially structured form of the fast-food restaurant has become the organizational force representing and extending the process of rationalization into the realm of everyday interaction and individual identity. McDonald's of the 1990s serves as the case model. The book introduced the term McDonaldization to learned discourse as a way to describe a social process which produces "mind-numbing sameness", according to a 2002 review of a related academic text. In "McDonaldization" Ritzer expands and updates central elements from the work of Max Weber and produces a critical analysis of the impact of social-structural change on human interaction and identity. The central theme in Weber's analysis of modern society was the process of rationalization: a far-reaching process whereby traditional modes of thinking were replaced by an ends/means analysis concerned with efficiency and formalized social control. Weber argued that the archetypal manifestation of this process was the bureaucracy: a large, formal organization characterized by a hierarchical authority structure, well-established division of labor, written rules and regulations, impersonality and a concern for technical competence. Bureaucratic organizations not only represent the process of rationalization; the structure they impose on human interaction and thinking also furthers the process, leading to an increasingly rationalized world
https://en.wikipedia.org/wiki?curid=1907580
The McDonaldization of Society The process affects all aspects of everyday life.
https://en.wikipedia.org/wiki?curid=1907580
Endogeneity (econometrics) In econometrics, endogeneity broadly refers to situations in which an explanatory variable is correlated with the error term. The distinction between endogenous and exogenous variables originated in simultaneous equations models, where one separates variables whose values are determined by the model from variables which are predetermined; ignoring simultaneity in the estimation leads to biased estimates, as it violates the exogeneity assumption of the Gauss–Markov theorem. The problem of endogeneity is, unfortunately, often ignored by researchers conducting non-experimental research, and doing so precludes making policy recommendations. Instrumental variable techniques are commonly used to address this problem. Besides simultaneity, correlation between explanatory variables and the error term can arise when an unobserved or omitted variable confounds both independent and dependent variables, or when independent variables are measured with error. In a stochastic model, the notions of "usual exogeneity", "sequential exogeneity", and "strong/strict exogeneity" can be defined. Exogeneity is articulated in such a way that a variable or set of variables is exogenous for parameter formula_1. Even if a variable is exogenous for parameter formula_1, it might be endogenous for parameter formula_3. When the explanatory variables are not stochastic, they are strongly exogenous for all the parameters
https://en.wikipedia.org/wiki?curid=1908618
Endogeneity (econometrics) If the independent variable is correlated with the error term in a regression model then the estimate of the regression coefficient in an ordinary least squares (OLS) regression is biased; however if the correlation is not contemporaneous, then the coefficient estimate may still be consistent. There are many methods of correcting the bias, including instrumental variable regression and Heckman selection correction. The following are some common sources of endogeneity. In this case, the endogeneity comes from an uncontrolled confounding variable, a variable that is correlated with both the independent variable in the model and with the error term. (Equivalently, the omitted variable affects the independent variable and separately affects the dependent variable.) Assume that the "true" model to be estimated is but formula_5 is omitted from the regression model (perhaps because there is no way to measure it directly). Then the model that is actually estimated is where formula_7 (thus, the formula_5 term has been absorbed into the error term). If the correlation of formula_9 and formula_10 is not 0 and formula_10 separately affects formula_12 (meaning formula_13), then formula_9 is correlated with the error term formula_15. Here, formula_9 is not exogenous for formula_1 and formula_3, since, given formula_9, the distribution of formula_12 depends not only on formula_1 and formula_3, but also on formula_10 and formula_24. Suppose that a perfect measure of an independent variable is impossible
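The displayed equations for the omitted-variable case above are not reproduced in this extract. The following is a minimal LaTeX sketch of the standard setup; the symbols y, x, z, α, β, γ are assumed for illustration and do not correspond to the article's own formula numbering.

```latex
% Standard omitted-variable setup (illustrative notation):
\begin{align}
  \text{True model:}\quad        & y_i = \alpha + \beta x_i + \gamma z_i + u_i,\\
  \text{Estimated model:}\quad   & y_i = \alpha + \beta x_i + \varepsilon_i,
                                   \qquad \varepsilon_i = \gamma z_i + u_i,\\
  \text{Probability limit:}\quad & \operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}}
                                   = \beta + \gamma\,
                                     \frac{\operatorname{Cov}(x,z)}{\operatorname{Var}(x)}.
\end{align}
```

If γ ≠ 0 and Cov(x, z) ≠ 0, the regressor is correlated with the composite error term, so the OLS estimate is biased and inconsistent, which is the mechanism described above.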
https://en.wikipedia.org/wiki?curid=1908618
Endogeneity (econometrics) That is, instead of observing formula_25, what is actually observed is formula_26 where formula_27 is the measurement error or "noise". In this case, a model given by can be written in terms of observables and error terms as Since both formula_30 and formula_31 depend on formula_27, they are correlated, so the OLS estimation of formula_3 will be biased downward. Measurement error in the dependent variable, formula_34, does not cause endogeneity, though it does increase the variance of the error term. Suppose that two variables are codetermined, with each affecting the other according to the following "structural" equations: Estimating either equation by itself results in endogeneity. In the case of the first structural equation, formula_37. Solving for formula_5 while assuming that formula_39 results in Assuming that formula_30 and formula_42 are uncorrelated with formula_31, Therefore, attempts at estimating either structural equation will be hampered by endogeneity. The endogeneity problem is particularly relevant in the context of time series analysis of causal processes. It is common for some factors within a causal system to be dependent for their value in period "t" on the values of other factors in the causal system in period "t" − 1. Suppose that the level of pest infestation is independent of all other factors within a given period, but is influenced by the level of rainfall and fertilizer in the preceding period
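For the errors-in-variables case discussed earlier in this passage, the displayed equations are likewise missing from this extract. A standard attenuation-bias sketch, with assumed notation, is:

```latex
% Classical measurement error (illustrative notation):
\begin{align}
  \text{True model:}\quad         & y_i = \alpha + \beta x^{*}_i + u_i,\\
  \text{Observed regressor:}\quad & x_i = x^{*}_i + \nu_i,\\
  \text{Estimated model:}\quad    & y_i = \alpha + \beta x_i + (u_i - \beta\nu_i),\\
  \text{Attenuation:}\quad        & \operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}}
      = \beta\,\frac{\sigma^{2}_{x^{*}}}{\sigma^{2}_{x^{*}} + \sigma^{2}_{\nu}}.
\end{align}
```

Both the observed regressor and the composite error contain ν, so they are correlated and the estimate is pulled toward zero ("biased downward" for a positive β), consistent with the description above.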
https://en.wikipedia.org/wiki?curid=1908618
Endogeneity (econometrics) In this instance it would be correct to say that infestation is exogenous within the period, but endogenous over time. Let the model be "y" = "f"("x", "z") + "u". If the variable "x" is sequentially exogenous for parameter formula_1, and "y" does not cause "x" in the Granger sense, then the variable "x" is strongly/strictly exogenous for the parameter formula_1. Generally speaking, simultaneity occurs in the dynamic model just as in the example of static simultaneity above.
https://en.wikipedia.org/wiki?curid=1908618
Ellsberg paradox The Ellsberg paradox is a paradox in decision theory in which people's choices violate the postulates of subjective expected utility. It is generally taken to be evidence for ambiguity aversion. The paradox was popularized by Daniel Ellsberg, although a version of it was noted considerably earlier by John Maynard Keynes. The basic idea is that people overwhelmingly prefer taking on risk in situations where they know specific odds rather than an alternative risk scenario in which the odds are completely ambiguous—they will always choose a known probability of winning over an unknown probability of winning, even if the known probability is low and the unknown probability could be a guarantee of winning. For example, given a choice of risks to take (such as bets), people "prefer the devil they know" rather than assuming a risk where odds are difficult or impossible to calculate. Ellsberg proposed two separate thought experiments, the proposed choices of which contradict subjective expected utility. The 2-color problem involves bets on two urns, both of which contain balls of two different colors. The 3-color problem, described below, involves bets on a single urn, which contains balls of three different colors. Consider an urn containing 30 red balls and 60 other balls that are either black or yellow. It is unknown how many black or yellow balls there are, but it is known that the total number of black balls plus the total number of yellow balls equals 60
https://en.wikipedia.org/wiki?curid=1912480
Ellsberg paradox The balls are well mixed so that each individual ball is as likely to be drawn as any other. You are given a choice between two gambles: Gamble A, in which you receive $100 if you draw a red ball, and Gamble B, in which you receive $100 if you draw a black ball. You are also given the choice between these two gambles (about a different draw from the same urn): Gamble C, in which you receive $100 if you draw a red or yellow ball, and Gamble D, in which you receive $100 if you draw a black or yellow ball. This situation poses both "Knightian uncertainty" – how many of the non-red balls are yellow and how many are black, which is not quantified – and "probability" – whether the ball is red or non-red, which is 1/3 vs. 2/3. Utility theory models the choice by assuming that in choosing between these gambles, people assume a "probability" that the non-red balls are yellow versus black, and then compute the "expected utility" of the two gambles. Since the prizes are the same, it follows that you will "prefer" Gamble A to Gamble B if and only if you believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory). Also, there would be no clear preference between the choices if you thought that a red ball was as likely as a black ball. Similarly, it follows that you will "prefer" Gamble C to Gamble D "if, and only if", you believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that, if drawing a red ball is more likely than drawing a black ball, then drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing you "prefer" Gamble A to Gamble B, it follows that you will also "prefer" Gamble C to Gamble D
https://en.wikipedia.org/wiki?curid=1912480
Ellsberg paradox And, supposing instead that you "prefer" Gamble B to Gamble A, it follows that you will also "prefer" Gamble D to Gamble C. When surveyed, however, most people "strictly prefer" Gamble A to Gamble B and Gamble D to Gamble C. Therefore, some assumptions of the expected utility theory are violated. Mathematically, the estimated probabilities of each color ball can be represented as: "R", "Y", and "B". If you "strictly prefer" Gamble A to Gamble B, by utility theory, it is presumed this preference is reflected by the expected utilities of the two gambles: specifically, it must be the case that where "U"( ) is your utility function. If "U"($100) > "U"($0) (you strictly prefer $100 to nothing), this simplifies to: If you also strictly prefer Gamble D to Gamble C, the following inequality is similarly obtained: This simplifies to: This contradiction indicates that your preferences are inconsistent with expected-utility theory. The result holds regardless of your utility function. Indeed, the amount of the payoff is likewise irrelevant. Whichever gamble is selected, the prize for winning it is the same, and the cost of losing it is the same (no cost), so ultimately, there are only two outcomes: receive a specific amount of money, or receive nothing
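The inequalities referred to in this passage are not displayed in the extract. The following is a standard LaTeX reconstruction of the argument, using R, B, Y for the estimated probabilities of red, black and yellow (as introduced above) and U for the utility function.

```latex
% Standard reconstruction of the omitted inequalities:
\begin{align}
  \text{A} \succ \text{B}:\;&
    R\,U(\$100) + (1-R)\,U(\$0) \;>\; B\,U(\$100) + (1-B)\,U(\$0)
    \;\Longrightarrow\; R > B,\\
  \text{D} \succ \text{C}:\;&
    (B+Y)\,U(\$100) + R\,U(\$0) \;>\; (R+Y)\,U(\$100) + B\,U(\$0)
    \;\Longrightarrow\; B > R.
\end{align}
```

The two implications cannot hold simultaneously, which is the contradiction described above; note that the conclusion does not depend on the particular form of U, only on U($100) > U($0).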
https://en.wikipedia.org/wiki?curid=1912480
Ellsberg paradox Therefore, it is sufficient to assume that the preference is to receive some money rather than nothing (and even this assumption is not necessary: in the mathematical treatment above, it was assumed "U"($100) > "U"($0), but a contradiction can still be obtained for "U"($100) < "U"($0) and for "U"($100) = "U"($0)). In addition, the result holds regardless of your risk aversion. All the gambles involve risk. By choosing Gamble D, you have a 1 in 3 chance of receiving nothing, and by choosing Gamble A, you have a 2 in 3 chance of receiving nothing. If Gamble A were less risky than Gamble B, it would follow that Gamble C was less risky than Gamble D (and vice versa), so risk is not averted in this way. However, because the exact chances of winning are known for Gambles A and D, and not known for Gambles B and C, this can be taken as evidence for some sort of ambiguity aversion which cannot be accounted for in expected utility theory. It has been demonstrated that this phenomenon occurs only when the choice set permits comparison of the ambiguous proposition with a less vague proposition (but not when ambiguous propositions are evaluated in isolation). There have been various attempts to provide decision-theoretic explanations of Ellsberg's observation. Since the probabilistic information available to the decision-maker is incomplete, these attempts sometimes focus on quantifying the non-probabilistic ambiguity which the decision-maker faces – see Knightian uncertainty
https://en.wikipedia.org/wiki?curid=1912480
Ellsberg paradox That is, these alternative approaches sometimes suppose that the agent formulates a subjective (though not necessarily Bayesian) probability for possible outcomes. One such attempt is based on info-gap decision theory. The agent is told precise probabilities of some outcomes, though the practical meaning of the probability numbers is not entirely clear. For instance, in the gambles discussed above, the probability of a red ball is 1/3, which is a precise number. Nonetheless, the agent may not distinguish, intuitively, between this and, say, a slightly different value. No probability information whatsoever is provided regarding other outcomes, so the agent has very unclear subjective impressions of these probabilities. In light of the ambiguity in the probabilities of the outcomes, the agent is unable to evaluate a precise expected utility. Consequently, a choice based on "maximizing" the expected utility is also impossible. The info-gap approach supposes that the agent implicitly formulates info-gap models for the subjectively uncertain probabilities. The agent then tries to satisfice the expected utility and to maximize the robustness against uncertainty in the imprecise probabilities. This robust-satisficing approach can be developed explicitly to show that the choices of decision-makers should display precisely the preference reversal which Ellsberg observed. Another possible explanation is that this type of game triggers a deceit aversion mechanism
https://en.wikipedia.org/wiki?curid=1912480
Ellsberg paradox Many humans naturally assume in real-world situations that if they are not told the probability of a certain event, the omission is intended to deceive them. People make the same decisions in the experiment that they would make about related but not identical real-life problems, where the experimenter would be likely to be a deceiver acting against the subject's interests. When faced with the choice between a red ball and a black ball, the probability of 1/3 is compared to the "lower part" of the 0–2/3 range (the probability of getting a black ball). The average person expects there to be fewer black balls than yellow balls because in most real-world situations, it would be to the advantage of the experimenter to put fewer black balls in the urn when offering such a gamble. On the other hand, when offered a choice between red and yellow balls and black and yellow balls, people assume that there must be fewer than 30 yellow balls, as would be necessary to deceive them. When making the decision, it is quite possible that people simply forget to consider that the experimenter does not have a chance to modify the contents of the urn in between the draws. In real-life situations, even if the urn is not to be modified, people would be afraid of being deceived on that front as well. A modification of utility theory to incorporate uncertainty as distinct from risk is Choquet expected utility, which also proposes a solution to the paradox. Other alternative explanations include the competence hypothesis and comparative ignorance hypothesis
https://en.wikipedia.org/wiki?curid=1912480
Ellsberg paradox These theories attribute the source of the ambiguity aversion to the participant's pre-existing knowledge.
https://en.wikipedia.org/wiki?curid=1912480
Solow–Swan model The Solow–Swan model is an economic model of long-run economic growth set within the framework of neoclassical economics. It attempts to explain long-run economic growth by looking at capital accumulation, labor or population growth, and increases in productivity, commonly referred to as technological progress. At its core is a neoclassical (aggregate) production function, often specified to be of Cobb–Douglas type, which enables the model "to make contact with microeconomics". The model was developed independently by Robert Solow and Trevor Swan in 1956, and superseded the Keynesian Harrod–Domar model. Mathematically, the Solow–Swan model is a nonlinear system consisting of a single ordinary differential equation that models the evolution of the "per capita" stock of capital. Due to its particularly attractive mathematical characteristics, Solow–Swan proved to be a convenient starting point for various extensions. For instance, in 1965, David Cass and Tjalling Koopmans integrated Frank Ramsey's analysis of consumer optimization, thereby endogenizing the saving rate, to create what is now known as the Ramsey–Cass–Koopmans model. The neo-classical model was an extension to the 1946 Harrod–Domar model that included a new term: productivity growth. Important contributions to the model came from the work done by Solow and by Swan in 1956, who independently developed relatively simple growth models. Solow's model fitted available data on US economic growth with some success. In 1987 Solow was awarded the Nobel Prize in Economics for his work
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model Today, economists use Solow's sources-of-growth accounting to estimate the separate effects on economic growth of technological change, capital, and labor. Solow extended the Harrod–Domar model by adding labor as a factor of production and capital-output ratios that are not fixed as they are in the Harrod–Domar model. These refinements allow increasing capital intensity to be distinguished from technological progress. Solow sees the fixed-proportions production function as a "crucial assumption" behind the instability results in the Harrod–Domar model. His own work expands upon this by exploring the implications of alternative specifications, namely the Cobb–Douglas and the more general constant elasticity of substitution (CES). Although this has become the canonical and celebrated story in the history of economics, featured in many economic textbooks, recent reappraisal of Harrod's work has contested it. One central criticism is that Harrod's original piece was neither mainly concerned with economic growth nor did he explicitly use a fixed-proportions production function. A standard Solow model predicts that in the long run, economies converge to their steady-state equilibrium and that permanent growth is achievable only through technological progress. Both shifts in saving and in population growth cause only level effects in the long run (i.e. in the absolute value of real income per capita)
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model An interesting implication of Solow's model is that poor countries should grow faster and eventually catch up to richer countries. This convergence could be explained by several factors. Baumol attempted to verify this empirically and found a very strong correlation between countries' output growth over a long period of time (1870 to 1979) and their initial wealth. His findings were later contested by DeLong, who claimed that both the non-randomness of the sampled countries and the potential for significant measurement errors in estimates of real income per capita in 1870 biased Baumol's findings. DeLong concludes that there is little evidence to support the convergence theory. The key assumption of the neoclassical growth model is that capital is subject to diminishing returns in a closed economy. In the Solow–Swan model, the unexplained change in the growth of output after accounting for the effect of capital accumulation is called the Solow residual. This residual measures the exogenous increase in total factor productivity (TFP) during a particular time period. The increase in TFP is often attributed entirely to technological progress, but it also includes any permanent improvement in the efficiency with which factors of production are combined over time. Implicitly, TFP growth includes any permanent productivity improvements that result from improved management practices in the private or public sectors of the economy
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model Paradoxically, even though TFP growth is exogenous in the model, it cannot be observed, so it can only be estimated in conjunction with the simultaneous estimate of the effect of capital accumulation on growth during a particular time period. The model can be reformulated in slightly different ways using different productivity assumptions or different measurement metrics: In a growing economy, capital is accumulated faster than people are born, so the denominator in the growth function under the MFP calculation is growing faster than in the ALP calculation. Hence, MFP growth is almost always lower than ALP growth. (Therefore, measuring in ALP terms increases the apparent capital-deepening effect.) MFP is measured by the "Solow residual", not ALP. The textbook model is set in a continuous-time world with no government or international trade. A single good (output) is produced using two factors of production, labor (formula_1) and capital (formula_2), in an aggregate production function that satisfies the Inada conditions, which imply that the elasticity of substitution must be asymptotically equal to one. In this function, formula_4 denotes time, formula_5 is the elasticity of output with respect to capital, and formula_6 represents total production. formula_7 refers to labor-augmenting technology or “knowledge”, thus formula_8 represents effective labor. All factors of production are fully employed, and initial values formula_9, formula_10, and formula_11 are given. The number of workers, i.e
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model labor, as well as the level of technology, grow exogenously at rates formula_12 and formula_13, respectively: The number of effective units of labor, formula_16, therefore grows at rate formula_17. Meanwhile, the stock of capital depreciates over time at a constant rate formula_18. However, only a fraction of the output (formula_19 with formula_20) is consumed, leaving a saved share formula_21 for investment: where formula_23 is shorthand for formula_24, the derivative with respect to time. The derivative with respect to time gives the change in the capital stock: output that is neither consumed nor used to replace worn-out capital goods is net investment. Since the production function formula_25 has constant returns to scale, it can be written as output per effective unit of labour: The main interest of the model is the dynamics of capital intensity formula_27, the capital stock per unit of effective labour. Its behaviour over time is given by the key equation of the Solow–Swan model: The first term, formula_29, is the actual investment per unit of effective labour: the fraction formula_30 of the output per unit of effective labour formula_31 that is saved and invested. The second term, formula_32, is the “break-even investment”: the amount that must be invested to prevent formula_27 from falling
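The displayed production function and key equation are not reproduced in this extract. The following is a minimal LaTeX sketch consistent with the description above; the symbols Y, K, L, A, α, s, n, g, δ are assumed stand-ins for the formula placeholders in the text.

```latex
% Illustrative reconstruction of the production function and key equation:
\begin{align}
  Y(t) &= K(t)^{\alpha}\,\bigl(A(t)L(t)\bigr)^{1-\alpha},
          \qquad 0 < \alpha < 1,\\
  y &= f(k) = k^{\alpha},
      \qquad k \equiv \frac{K}{AL},\quad y \equiv \frac{Y}{AL},\\
  \dot{k}(t) &= s\,f\bigl(k(t)\bigr) - (n + g + \delta)\,k(t).
\end{align}
```

Here s f(k) is the actual investment per unit of effective labour and (n + g + δ)k is the break-even investment, matching the two terms described above.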
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model The equation implies that formula_34 converges to a steady-state value of formula_35, defined by formula_36, at which there is neither an increase nor a decrease of capital intensity: at which the stock of capital formula_2 and effective labour formula_8 are growing at rate formula_40. By assumption of constant returns, output formula_41 is also growing at that rate. In essence, the model predicts that an economy will converge to a balanced-growth equilibrium, regardless of its starting point. In this situation, the growth of output per worker is determined solely by the rate of technological progress. Since, by definition, formula_42, at the equilibrium formula_35 we have: Therefore, at the equilibrium, the capital/output ratio depends only on the saving, growth, and depreciation rates. This is the Solow–Swan model's version of the golden rule saving rate. Since formula_45, at any time formula_46 the marginal product of capital formula_47 in the Solow–Swan model is inversely related to the capital/labor ratio. If productivity formula_49 is the same across countries, then countries with less capital per worker formula_50 have a higher marginal product, which would provide a higher return on capital investment. As a consequence, the model predicts that in a world of open market economies and global financial capital, investment will flow from rich countries to poor countries, until capital/worker formula_50 and income/worker formula_52 equalize across countries
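The steady-state condition referred to above can be written out explicitly under the assumed Cobb–Douglas form of the previous sketch; this is a hedged reconstruction rather than the article's own displayed equation.

```latex
% Steady state implied by the key equation, under f(k) = k^alpha:
s\,(k^{*})^{\alpha} = (n + g + \delta)\,k^{*}
\quad\Longrightarrow\quad
k^{*} = \left(\frac{s}{\,n + g + \delta\,}\right)^{\frac{1}{1-\alpha}},
\qquad
\left.\frac{K}{Y}\right|_{k=k^{*}} = \frac{s}{\,n + g + \delta\,}.
```

At the steady state the capital/output ratio therefore depends only on the saving, growth, and depreciation rates, as stated above.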
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model Since the marginal product of physical capital is not higher in poor countries than in rich countries, the implication is that productivity is lower in poor countries. The basic Solow model cannot explain why productivity is lower in these countries. Lucas suggested that lower levels of human capital in poor countries could explain the lower productivity. If one equates the marginal product of capital formula_53 with the rate of return formula_54 (an approximation often used in neoclassical economics), then, for our choice of the production function, formula_56 is the fraction of income appropriated by capital. Thus, the model assumes from the beginning that the labor-capital split of income remains constant. N. Gregory Mankiw, David Romer, and David Weil created a human-capital-augmented version of the Solow–Swan model that can explain the failure of international investment to flow to poor countries. In this model, output and the marginal product of capital (K) are lower in poor countries because they have less human capital than rich countries. Similar to the textbook Solow–Swan model, the production function is of Cobb–Douglas type: where formula_58 is the stock of human capital, which depreciates at the same rate formula_18 as physical capital. For simplicity, they assume the same function of accumulation for both types of capital. Like in Solow–Swan, a fraction of output, formula_60, is saved each period, but in this case split up and invested partly in physical and partly in human capital, such that formula_61
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model Therefore, there are two fundamental dynamic equations in this model: The balanced (or steady-state) equilibrium growth path is determined by formula_64, which means formula_65 and formula_66. Solving for the steady-state level of formula_27 and formula_68 yields: In the steady state, formula_71. Klenow and Rodriguez-Clare cast doubt on the validity of the augmented model because Mankiw, Romer, and Weil's estimates of formula_72 did not seem consistent with accepted estimates of the effect of increases in schooling on workers' salaries. Though the estimated model explained 78% of variation in income across countries, the estimates of formula_72 implied that human capital's external effects on national income are greater than its direct effect on workers' salaries. Theodore Breton provided an insight that reconciled the large effect of human capital from schooling in the Mankiw, Romer and Weil model with the smaller effect of schooling on workers' salaries. He demonstrated that the mathematical properties of the model include significant external effects between the factors of production, because human capital and physical capital are multiplicative factors of production
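The two dynamic equations and the steady-state solutions referred to at the start of this passage are not displayed in the extract. The following is a standard Mankiw–Romer–Weil reconstruction in LaTeX; the exponents α, β and the saving shares s_K, s_H are illustrative assumptions rather than the article's notation.

```latex
% Illustrative Mankiw--Romer--Weil reconstruction:
\begin{align}
  Y &= K^{\alpha} H^{\beta} (A L)^{1-\alpha-\beta}, \qquad \alpha + \beta < 1,\\
  \dot{k} &= s_K\,k^{\alpha}h^{\beta} - (n+g+\delta)\,k, \qquad
  \dot{h} = s_H\,k^{\alpha}h^{\beta} - (n+g+\delta)\,h,\\
  k^{*} &= \left(\frac{s_K^{\,1-\beta}\,s_H^{\,\beta}}{n+g+\delta}\right)^{\frac{1}{1-\alpha-\beta}},
  \qquad
  h^{*} = \left(\frac{s_K^{\,\alpha}\,s_H^{\,1-\alpha}}{n+g+\delta}\right)^{\frac{1}{1-\alpha-\beta}}.
\end{align}
```

Setting both accumulation equations to zero and solving simultaneously gives the steady-state levels of physical and human capital per unit of effective labour, which is the balanced-growth path described above.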
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model The external effect of human capital on the productivity of physical capital is evident in the marginal product of physical capital: He showed that the large estimates of the effect of human capital in cross-country estimates of the model are consistent with the smaller effect typically found on workers' salaries when the external effects of human capital on physical capital and labor are taken into account. This insight significantly strengthens the case for the Mankiw, Romer, and Weil version of the Solow–Swan model. Most analyses criticizing this model fail to account for the pecuniary external effects of both types of capital inherent in the model. The exogenous rate of TFP (total factor productivity) growth in the Solow–Swan model is the residual after accounting for capital accumulation. The Mankiw, Romer, and Weil model provides a lower estimate of TFP (the residual) than the basic model because the addition of human capital enables capital accumulation to explain more of the variation in income across countries. In the basic model, the TFP residual includes the effect of human capital because human capital is not included as a factor of production. The model augmented with human capital predicts that the income levels of poor countries will tend to catch up with or converge towards the income levels of rich countries if the poor countries have similar savings rates for both physical capital and human capital as a share of output, a process known as conditional convergence
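The marginal product referred to at the start of this passage is not displayed in the extract. Under the assumed Cobb–Douglas form of the previous sketch, a hedged reconstruction is:

```latex
% Illustrative marginal product of physical capital in the augmented model:
MPK = \frac{\partial Y}{\partial K}
    = \alpha\,K^{\alpha-1} H^{\beta} (A L)^{1-\alpha-\beta}
    = \alpha\,\frac{Y}{K}.
```

A larger stock of human capital H raises output and hence the marginal product of physical capital, which is the external effect between the two factors described above.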
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model However, savings rates vary widely across countries. In particular, since considerable financing constraints exist for investment in schooling, savings rates for human capital are likely to vary as a function of cultural and ideological characteristics in each country. Since the 1950s, output/worker in rich and poor countries generally has not converged, but those poor countries that have greatly raised their savings rates have experienced the income convergence predicted by the Solow–Swan model. As an example, output/worker in Japan, a country which was once relatively poor, has converged to the level of the rich countries. Japan experienced high growth rates after it raised its savings rates in the 1950s and 1960s, and it has experienced slowing growth of output/worker since its savings rates stabilized around 1970, as predicted by the model. The per-capita income levels of the southern states of the United States have tended to converge to the levels in the Northern states. The observed convergence in these states is also consistent with the conditional convergence concept. Whether absolute convergence between countries or regions occurs depends on whether they have similar characteristics, such as: Additional evidence for conditional convergence comes from multivariate, cross-country regressions
https://en.wikipedia.org/wiki?curid=1913978
Solow–Swan model Econometric analysis on Singapore and the other "East Asian Tigers" has produced the surprising result that although output per worker has been rising, almost none of their rapid growth had been due to rising per-capita productivity (they have a low "Solow residual").
https://en.wikipedia.org/wiki?curid=1913978
Diversified financial Diversified financials is a specific category of the Global Industry Classification Standard (GICS) used by the financial community. It includes a range of consumer- and commercially-oriented companies offering a wide variety of financial products and services, including various lending products (such as home equity loans and credit cards), insurance, and securities and investment products. Many of the firms in this category are non-banking financial companies, specialist organisations like stock exchanges, or financial holding companies that were created through the consolidation of banks, insurance companies and brokerage firms to become universal banks.
https://en.wikipedia.org/wiki?curid=1914859
Philip Mirowski (born 21 August 1951 in Jackson, Michigan) is a historian and philosopher of economic thought at the University of Notre Dame. He received a PhD in Economics from the University of Michigan in 1979. In his book "More Heat than Light", Mirowski reveals a history of how physics has drawn inspiration from economics and how economics has sought to emulate physics, especially with regard to the theory of value. He traces the development of the energy concept in Western physics and its subsequent effect on the invention and promulgation of neoclassical economics, the modern orthodox theory. In his book "Machine Dreams", Mirowski explores the historical influences of the military and the cyborg sciences on neoclassical economics. The neglected influence of John von Neumann and his theory of automata are key themes throughout the book. Mirowski claims that many of the developments in neoclassical economics in the 20th century, from game theory to computational economics, are the unacknowledged result of von Neumann's plans for economics. The work expands Mirowski's vision for a computational economics, one in which various market types are constructed in a similar fashion to Noam Chomsky's generative grammar. The role of economics is to explore how various market types perform in measures of complexity and efficiency, with more complicated markets being able to incorporate the effects of the less complex. By complexity Mirowski means something analogous to computational complexity theory in computer science
https://en.wikipedia.org/wiki?curid=1916846
Philip Mirowski In his book "Never Let a Serious Crisis Go to Waste", Mirowski concludes that neoliberal thought has become so pervasive that any countervailing evidence serves only to further convince disciples of its ultimate truth. Once neoliberalism became a Theory of Everything, providing a revolutionary account of self, knowledge, information, markets, and government, it could no longer be falsified by anything as trifling as data from the “real” economy.
https://en.wikipedia.org/wiki?curid=1916846
Michel Onfray (; born 1 January 1959) is a French writer and philosopher. Having a hedonistic, epicurean and atheist world view, he is a highly prolific author on philosophy, having written more than 100 books. His philosophy is mainly influenced by such thinkers as Nietzsche, Epicurus, the Cynic and Cyrenaic schools, as well as French materialism. He has gained notoriety for writing such works as "Traité d'athéologie: Physique de la métaphysique" (translated into English as ""), "Politique du rebelle: traité de résistance et d'insoumission", "Physiologie de Georges Palante, portrait d'un nietzchéen de gauche", "La puissance d'exister" and "La sculpture de soi" for which he won the annual Prix Médicis in 1993. Born in Argentan to a family of Norman farmers, Onfray was sent to a weekly Catholic boarding school from ages 10 to 14. This was a solution many parents in France adopted at the time when they lived far from the village school or had working hours that made it too hard or too expensive to transport their children to and from school daily. The young Onfray, however, did not appreciate his new environment, which he describes as a place of suffering. Onfray went on to graduate with a teaching degree in philosophy. He taught this subject to senior students at a high school that concentrates on technical degrees in Caen between 1983 and 2002
https://en.wikipedia.org/wiki?curid=1917730
Michel Onfray At that time, he and his supporters established the "Université populaire de Caen", proclaiming its foundation on a free-of-charge basis and on the manifesto written by Onfray in 2004 ("La communauté philosophique"). Onfray is an atheist and author of "Traité d'Athéologie" (""), which "became the number one best-selling nonfiction book in France for months when it was published in the Spring of 2005 (the word 'atheologie' Onfray borrowed from Georges Bataille). This book repeated its popular French success in Italy, where it was published in September 2005 and quickly soared to number one on Italy's bestseller lists." In the 2002 election, Onfray endorsed the French Revolutionary Communist League and its candidate for the French presidency, Olivier Besancenot. In 2007, he endorsed José Bové, but eventually voted for Olivier Besancenot, and conducted an interview with the future French president Nicolas Sarkozy, who, he declared for "Philosophie Magazine," was an "ideological enemy". His book "Le crépuscule d'une idole : L'affabulation freudienne" ("The Twilight of an Idol: The Freudian Confabulation"), published in 2010, has been the subject of considerable controversy in France because of its criticism of Freud. He recognizes Freud as a philosopher, but he brings attention to the considerable cost of Freud's treatments and casts doubts on the effectiveness of his methods. In 2015, he published "Cosmos", the first book of a trilogy. Onfray considers ironically that it constitutes his "very first book"
https://en.wikipedia.org/wiki?curid=1917730
Michel Onfray Onfray writes that there is no philosophy without self-psychoanalysis. He describes himself as an atheist and considers theist religion to be indefensible. Onfray has published 9 books under a project of history of philosophy called "Counter-history of Philosophy". In each of these books Onfray deals with a particular historical period in western philosophy. The series comprises the following titles: I. "Les Sagesses Antiques" (2006) (on western antiquity), II. "Le Christianisme hédoniste" (2006) (on Christian hedonism from the Renaissance period), III. "Les libertins baroques" (2007) (on libertine thought from the Baroque era), IV. "Les Ultras des Lumières" (2007) (on radical enlightenment thought), V. "L'Eudémonisme social" (2008) (on radical utilitarian and eudaemonistic thought), VI. "Les Radicalités existentielles" (2009) (on 19th and 20th century radical existentialist thinkers), VII. "La construction du surhomme: Jean-Marie Guyau, Friedrich Nietzsche" (on Guyau's and Nietzsche's philosophy in relation to the concept of the "Übermensch"), VIII. "Les Freudiens hérétiques" (2013), and IX. "Les Consciences réfractaires" (2013). In an interview he establishes his view on the history of philosophy. For him: There is in fact a multitude of ways to practice philosophy, but out of this multitude, the dominant historiography picks one tradition among others and makes it the truth of philosophy: that is to say the idealist, spiritualist lineage compatible with the Judeo-Christian world view
https://en.wikipedia.org/wiki?curid=1917730
Michel Onfray From that point on, anything that crosses this partial – in both senses of the word – view of things finds itself dismissed. This applies to nearly all non-Western philosophies, Oriental wisdom in particular, but also sensualist, empirical, materialist, nominalist, hedonistic currents and everything that can be put under the heading of "anti-Platonic philosophy". Philosophy that comes down from the heavens is the kind that – from Plato to Levinas by way of Kant and Christianity – needs a world behind the scenes to understand, explain and justify this world. The other line of force rises from the earth because it is satisfied with the given world, which is already so much. "His mission is to rehabilitate materialist and sensualist thinking and use it to re-examine our relationship to the world. Approaching philosophy as a reflection of each individual's personal experience, Onfray inquires into the capabilities of the body and its senses and calls on us to celebrate them through music, painting, and fine cuisine." He defines hedonism "as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else." "Onfray's philosophical project is to define an ethical hedonism, a joyous utilitarianism, and a generalized aesthetic of sensual materialism that explores how to use the brain's and the body's capacities to their fullest extent – while restoring philosophy to a useful role in art, politics, and everyday life and decisions
https://en.wikipedia.org/wiki?curid=1917730
Michel Onfray " Onfray's works "have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume "Counter-history of Philosophy"", of which three have been published. For Onfray: In opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance – my pleasure at the same time as the pleasure of others – presumes that we approach the subject from different angles – political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical... His philosophy aims for "micro-revolutions", or "revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values." In "La puissance d'exister: Manifeste hédoniste", Onfray claims that the political dimension of hedonism runs from Epicurus to John Stuart Mill through Jeremy Bentham and Claude Adrien Helvétius. What political hedonism aims for is to create "the greatest happiness for the greatest numbers". Blogger J. M. Cornwell praised Onfray's "Atheist Manifesto: The Case Against Christianity, Judaism, and Islam", claiming it "is a religious and historical time capsule" containing what he sees as "the true deceptions of theological philosophy"
https://en.wikipedia.org/wiki?curid=1917730
Michel Onfray Recently he has been involved in promoting the work of Jean Meslier, an 18th-century French Catholic priest who was discovered, upon his death, to have written a book-length philosophical essay promoting atheism. In the atheist manifesto, Onfray has said that among the "incalculable number of contradictions and improbabilities in the body of the text of the synoptic Gospels" two claims are made: crucifixion victims were not laid to rest in tombs, and in any case Jews were not crucified in this period. The ancient historian John Dickson, of Macquarie University, has said that Philo of Alexandria, writing about the time of Jesus, says that sometimes the Romans handed the bodies of crucifixion victims over to family members for proper burial. The Roman Jewish historian Flavius Josephus even remarks: "the Jews are so careful about funeral rites that even malefactors who have been sentenced to crucifixion are taken down and buried before sunset". Regarding the second claim, Dickson calls it a "clear historical blunder". In his latest book, "Décadence", he argued for the Christ myth theory, the hypothesis that Jesus was not a historical person. Onfray based this on the fact that, other than in the New Testament, Jesus is barely mentioned in accounts of the period. Onfray was a high school philosophy teacher for two decades until he resigned in 2002 to establish a tuition-free Université Populaire (People's University) at Caen, at which he and several colleagues teach philosophy and other subjects
https://en.wikipedia.org/wiki?curid=1917730
Michel Onfray "The Université Populaire, which is open to all who cannot access the state university system, and on principle does not accept any money from the State -- Onfray uses the profits from his books to help finance it -- has had enormous success. Based on Onfray's book "La Communauté Philosophique: Manifeste pour l'Université Populaire" (2004), the original UP now has imitators in Picardy, Arras, Lyon, Narbonne, and Le Mans, with five more in preparation." "The national public radio network France Culture annually broadcasts his course of lectures to the Université Populaire on philosophical themes." Asteroid 289992 Onfray, discovered by astronomers at the Saint-Sulpice Observatory in 2005, was named in his honor. The official was published by the Minor Planet Center on 16 March 2014 ().
https://en.wikipedia.org/wiki?curid=1917730
Paradox of thrift The paradox of thrift (or paradox of saving) is a paradox of economics. The paradox states that an increase in autonomous saving leads to a decrease in aggregate demand and thus a decrease in gross output which will in turn lower "total" saving. The paradox is, narrowly speaking, that total saving may fall because of individuals' attempts to increase their saving, and, broadly speaking, that increase in saving may be harmful to an economy. Both the narrow and broad claims are paradoxical within the assumption underlying the fallacy of composition, namely that which is true of the parts must be true of the whole. The narrow claim transparently contradicts this assumption, and the broad one does so by implication, because while individual thrift is generally averred to be good for the economy, the paradox of thrift holds that collective thrift may be bad for the economy. It had been stated as early as 1714 in "The Fable of the Bees", and similar sentiments date to antiquity. It was popularized by John Maynard Keynes and is a central component of Keynesian economics. It has formed part of mainstream economics since the late 1940s. The argument begins from the observation that in equilibrium, total income must equal total output
https://en.wikipedia.org/wiki?curid=1918286
Paradox of thrift Assuming that income has a direct effect on saving, an increase in the autonomous component of saving, other things being equal, will move the equilibrium point at which income equals output to a lower value, thereby inducing a decline in saving that may more than offset the original increase. In this form it represents a prisoner's dilemma as saving is beneficial to each individual but deleterious to the general population. This is a "paradox" because it runs contrary to intuition. Someone unaware of the paradox of thrift would fall into a fallacy of composition and assume that what seems to be good for an individual within the economy will be good for the entire population. However, exercising thrift may be good for an individual by enabling that individual to save for a "rainy day", and yet not be good for the economy as a whole. This paradox can be explained by analyzing the place, and impact, of increased savings in an economy. If a population decides to save more money at all income levels, then total revenues for companies will decline. This decreased demand causes a contraction of output, giving employers and employees lower income. Eventually the population's total saving will have remained the same or even declined because of lower incomes and a weaker economy. This paradox is based on the proposition, put forth in Keynesian economics, that many economic downturns are demand-based
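A minimal numerical sketch of this mechanism can be given with a textbook Keynesian cross under fixed investment; the parameter values and function names below are illustrative assumptions, not taken from the article.

```python
# A minimal Keynesian-cross sketch of the paradox of thrift (illustrative
# parameter values). Consumption is C = a + b*Y, investment I is fixed, and
# equilibrium income is where income equals planned expenditure.

def equilibrium(a: float, b: float, investment: float) -> tuple[float, float]:
    """Return (equilibrium income, realized total saving)."""
    income = (a + investment) / (1.0 - b)   # Y* = (a + I) / (1 - b)
    saving = income - (a + b * income)      # S = Y - C
    return income, saving

I = 100.0   # autonomous investment
b = 0.8     # marginal propensity to consume

# Households try to save more: autonomous consumption falls from 50 to 30.
for a in (50.0, 30.0):
    y, s = equilibrium(a, b, I)
    print(f"a={a:5.1f}  income={y:7.1f}  total saving={s:6.1f}")

# Income falls from 750 to 650, while realized total saving stays pinned
# at I = 100 -- the attempt to save more does not raise aggregate saving.
```

In this simple closed-economy setting total realized saving remains equal to fixed investment while income falls, so individual attempts to save more do not increase aggregate saving; with induced investment, total saving can even decline, as the passage above describes.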
https://en.wikipedia.org/wiki?curid=1918286
Paradox of thrift While the paradox of thrift was popularized by Keynes, and is often attributed to him, it was stated by a number of others prior to Keynes, and the proposition that spending may help and saving may hurt an economy dates to antiquity; similar sentiments occur in the Bible verse: which has found occasional use as an epigram in underconsumptionist writings. Keynes himself notes the appearance of the paradox in "The Fable of the Bees: or, Private Vices, Publick Benefits" (1714) by Bernard Mandeville, the title itself hinting at the paradox, and Keynes citing the passage: Keynes suggests Adam Smith was referring to this passage when he wrote "What is prudence in the conduct of every private family can scarce be folly in that of a great Kingdom." The problem of underconsumption and oversaving, as they saw it, was developed by underconsumptionist economists of the 19th century, and the paradox of thrift in the strict sense that "collective attempts to save yield lower overall savings" was explicitly stated by John M. Robertson in his 1892 book "The Fallacy of Saving," writing: Similar ideas were forwarded by William Trufant Foster and Waddill Catchings in the 1920s in "The Dilemma of Thrift". Keynes distinguished between business activity/investment ("Enterprise") and savings ("Thrift") in his "Treatise on Money" (1930): He stated the paradox of thrift in "The General Theory", 1936: The theory is referred to as the "paradox of thrift" in Samuelson's influential "Economics" of 1948, which popularized the term
https://en.wikipedia.org/wiki?curid=1918286
Paradox of thrift The paradox of thrift can be formally described as a circuit paradox using the terms of "Balances Mechanics" (German: "Saldenmechanik") developed by the German economist Wolfgang Stützel: saving by cutting expenses always yields a revenue surplus for the individual, and hence monetary saving. But once the totality (meaning each and every actor) saves by cutting expenses, the revenues of the economy simply decline. The paradox of thrift has been related to the debt deflation theory of economic crises, being called "the paradox of debt" – people save not to increase savings, but rather to pay down debt. As well, a paradox of toil and a paradox of flexibility have been proposed: a willingness to work more in a liquidity trap and wage flexibility after a debt-deflation shock may lead not only to lower wages, but to lower employment. During April 2009, U.S. Federal Reserve Vice Chair Janet Yellen discussed the "Paradox of deleveraging" described by economist Hyman Minsky: "Once this massive credit crunch hit, it didn’t take long before we were in a recession. The recession, in turn, deepened the credit crunch as demand and employment fell, and credit losses of financial institutions surged. Indeed, we have been in the grips of precisely this adverse feedback loop for more than a year. A process of balance sheet deleveraging has spread to nearly every corner of the economy. Consumers are pulling back on purchases, especially on durable goods, to build their savings
https://en.wikipedia.org/wiki?curid=1918286
Paradox of thrift Businesses are cancelling planned investments and laying off workers to preserve cash. And, financial institutions are shrinking assets to bolster capital and improve their chances of weathering the current storm. Once again, Minsky understood this dynamic. He spoke of the paradox of deleveraging, in which precautions that may be smart for individuals and firms—and indeed essential to return the economy to a normal state—nevertheless magnify the distress of the economy as a whole." Within mainstream economics, non-Keynesian economists, particularly neoclassical economists, criticize this theory on three principal grounds. The first criticism is that, following Say's law and the related circle of ideas, if demand slackens, prices will fall (barring government intervention), and the resulting lower price will stimulate demand (though at lower profit or cost – possibly even lower wages). This criticism in turn has been questioned by New Keynesian economists, who reject Say's law and instead point to evidence of sticky prices as a reason why prices do not fall in recession; this remains a debated point. The second criticism is that savings represent loanable funds, particularly at banks, assuming the savings are held at banks, rather than currency itself being held ("stashed under one's mattress"). Thus an accumulation of savings yields an increase in potential lending, which will lower interest rates and stimulate borrowing
https://en.wikipedia.org/wiki?curid=1918286
Paradox of thrift So a decline in consumer spending is offset by an increase in lending, and subsequent investment and spending. Two caveats are added to this criticism. Firstly, if savings are held as cash, rather than being loaned out (directly by savers, or indirectly, as via bank deposits), then loanable funds do not increase, and thus a recession may be caused – but this is due to holding cash, not to saving per se. Secondly, banks themselves may hold cash, rather than loaning it out, which results in the growth of excess reserves – funds on deposit but not loaned out. This is argued to occur in liquidity trap situations, when interest rates are at a zero lower bound (or near it) and savings still exceed investment demand. Within Keynesian economics, the desire to hold currency rather than loan it out is discussed under liquidity preference. The third criticism is that the paradox assumes a closed economy in which savings are not invested abroad (to fund exports of local production). Thus, while the paradox may hold at the global level, it need not hold at the local or national level: if one nation increases savings, this can be offset by trading partners consuming a greater amount relative to their own production, i.e., if the saving nation increases exports, and its partners increase imports. This criticism is not very controversial, and is generally accepted by Keynesian economists as well, who refer to it as "exporting one's way out of a recession"
https://en.wikipedia.org/wiki?curid=1918286
Paradox of thrift They further note that this frequently occurs in concert with currency devaluation (hence increasing exports and decreasing imports), and cannot work as a solution to a global problem, because the global economy is a closed system – not "every" nation can increase net exports. The Austrian School economist Friedrich Hayek criticized the paradox in a 1929 article, "The 'Paradox' of Savings", questioning the paradox as proposed by Foster and Catchings. Hayek and later Austrian School economists agree that if a population saves more money, total revenues for companies will decline, but they deny the assertion that lower revenues lead to lower economic growth, arguing that the additional savings are used to create more capital and increase production. Once the new, more productive structure of capital has reorganized within the existing structure, the real costs of production are reduced for most firms. Some critics argue that using accumulated capital to increase production is an act which requires spending, and therefore the Austrian argument does not disprove the paradox.
https://en.wikipedia.org/wiki?curid=1918286
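To make the circuit logic discussed above concrete, here is a minimal worked sketch of the textbook Keynesian-cross version of the paradox; it is a standard illustration rather than something drawn from the sources cited above, and every parameter value is an illustrative assumption. Households attempt to save more by cutting autonomous consumption, equilibrium income falls, and realized aggregate saving stays pinned to the unchanged level of planned investment.

    # Minimal Keynesian-cross sketch of the paradox of thrift (illustrative parameters only).
    # Consumption: C = c0 + c1 * Y; planned investment I is held fixed.
    # Equilibrium income: Y = (c0 + I) / (1 - c1); realized saving: S = Y - C.

    def equilibrium(c0, c1, investment):
        income = (c0 + investment) / (1.0 - c1)   # equilibrium income
        saving = income - (c0 + c1 * income)      # realized aggregate saving
        return income, saving

    INVESTMENT = 200.0                            # fixed planned investment
    before = equilibrium(c0=100.0, c1=0.8, investment=INVESTMENT)
    after = equilibrium(c0=60.0, c1=0.8, investment=INVESTMENT)   # households try to save 40 more

    print("income before/after:", round(before[0]), round(after[0]))   # 1500 -> 1300
    print("saving before/after:", round(before[1]), round(after[1]))   # 200 -> 200 (unchanged)

In this sketch the attempt to save more lowers income while leaving aggregate saving unchanged; in richer versions where investment also depends on income, aggregate saving can actually fall, which is the strict sense of the paradox stated by Robertson.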
United States housing bubble The United States housing bubble was a real estate bubble affecting over half of the U.S. states. Housing prices peaked in early 2006, started to decline in 2006 and 2007, and reached new lows in 2012. On December 30, 2008, the Case–Shiller home price index reported its largest price drop in its history. The credit crisis resulting from the bursting of the housing bubble is an important cause of the 2007–2009 recession in the United States. Increased foreclosure rates in 2006–2007 among U.S. homeowners led to a crisis in August 2008 for the subprime, Alt-A, collateralized debt obligation (CDO), mortgage, credit, hedge fund, and foreign bank markets. In October 2007, the U.S. Secretary of the Treasury called the bursting housing bubble "the most significant risk to our economy". Any collapse of the U.S. housing bubble has a direct impact not only on home valuations, but also on mortgage markets, home builders, real estate, home supply retail outlets, Wall Street hedge funds held by large institutional investors, and foreign banks, increasing the risk of a nationwide recession. Concerns about the impact of the collapsing housing and credit markets on the larger U.S. economy caused President George W. Bush and the Chairman of the Federal Reserve Ben Bernanke to announce a limited bailout of the U.S. housing market for homeowners who were unable to pay their mortgage debts. In 2008 alone, the United States government allocated over $900 billion to special loans and rescues related to the U.S. housing bubble
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble This was shared between the public sector and the private sector. Because of the large market share of Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) (both of which are government-sponsored enterprises) as well as the Federal Housing Administration, they received a substantial share of government support, even though their mortgages were more conservatively underwritten and actually performed better than those of the private sector. Land prices contributed much more to the price increases than did structures. This can be seen in the building cost index in Fig. 1. An estimate of land value for a house can be derived by subtracting the replacement value of the structure, adjusted for depreciation, from the home price. Using this methodology, Davis and Palumbo calculated land values for 46 U.S. metro areas, which can be found at the website of the Lincoln Institute of Land Policy. Housing bubbles may occur in local or global real estate markets. In their late stages, they are typically characterized by rapid increases in the valuations of real property until unsustainable levels are reached relative to incomes, price-to-rent ratios, and other economic indicators of affordability. This may be followed by decreases in home prices that result in many owners finding themselves in a position of negative equity—a mortgage debt higher than the value of the property. The underlying causes of the housing bubble are complex
https://en.wikipedia.org/wiki?curid=1920610
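As a rough illustration of the residual land-value estimate described above (the home price minus the replacement value of the structure, adjusted for depreciation), the sketch below uses a simple straight-line depreciation assumption. The function name, parameters, and example figures are invented for the illustration and are not the Davis and Palumbo methodology itself.

    # Residual method: land value ~= home price - depreciated replacement cost of the structure.
    def estimated_land_value(home_price, structure_replacement_cost,
                             structure_age_years, useful_life_years=80.0):
        # Straight-line depreciation of the structure (an illustrative assumption).
        remaining_fraction = max(0.0, 1.0 - structure_age_years / useful_life_years)
        depreciated_structure = structure_replacement_cost * remaining_fraction
        return home_price - depreciated_structure

    # Example: a $400,000 home whose 30-year-old structure would cost $250,000 to rebuild new.
    print(estimated_land_value(400_000, 250_000, 30))   # 243750.0

Because the structure's depreciated replacement cost tracks construction costs, which rose slowly, most of a rapid rise in the home price in such a calculation shows up as a rise in the implied land value, consistent with the observation above that land contributed far more to the price increases than structures did.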
United States housing bubble Factors include tax policy (exemption of housing from capital gains), historically low interest rates, lax lending standards, failure of regulators to intervene, and speculative fever. This bubble may be related to the stock market or dot-com bubble of the 1990s. This bubble roughly coincides with the real estate bubbles of the United Kingdom, Hong Kong, Spain, Poland, Hungary and South Korea. While a bubble may be identifiable in progress, it can be definitively measured only in hindsight, after a market correction, which began in 2005–2006 for the U.S. housing market. Former U.S. Federal Reserve Board Chairman Alan Greenspan said "We had a bubble in housing", and also said in the wake of the subprime mortgage and credit crisis in 2007, "I really didn't get it until very late in 2005 and 2006." Beginning in 2001, Alan Greenspan cut interest rates, eventually to a low of 1%, in order to jump-start the economy after the dot-com bubble. It was then that banks and other Wall Street firms started borrowing money because it was so inexpensive. The mortgage and credit crisis was caused by the inability of a large number of home owners to pay their mortgages as their low introductory-rate mortgages reverted to regular interest rates. Freddie Mac CEO Richard Syron concluded, "We had a bubble", and concurred with Yale economist Robert Shiller's warning that home prices appeared overvalued and that the correction could last years, with trillions of dollars of home value being lost
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble Greenspan warned of "large double digit declines" in home values "larger than most people expect". Problems for home owners with good credit surfaced in mid-2007, causing the United States' largest mortgage lender, Countrywide Financial, to warn that a recovery in the housing sector was not expected until at least 2009 because home prices were falling "almost like never before, with the exception of the Great Depression". The impact of booming home valuations on the U.S. economy since the 2001–2002 recession was an important factor in the recovery, because a large component of consumer spending was fueled by the related refinancing boom, which allowed people to both reduce their monthly mortgage payments with lower interest rates and withdraw equity from their homes as their values increased. Although an economic bubble is difficult to identify except in hindsight, numerous economic and cultural factors led several economists (especially in late 2004 and early 2005) to argue that a housing bubble existed in the U.S. Dean Baker identified the bubble in August 2002, thereafter repeatedly warning of its nature and depth, and of the political reasons it was being ignored. Prior to that, Robert Prechter wrote about it extensively, as did Professor Shiller in his original publication of "Irrational Exuberance" in 2000. The bursting of the housing bubble was predicted by a handful of political and economic analysts, such as Jeffery Robert Hunn in a March 3, 2003, editorial
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble Hunn wrote: Many contested any suggestion that there could be a housing bubble, particularly at its peak from 2004 to 2006, with some rejecting the "house bubble" label in 2008. Claims that there was no warning of the crisis were further repudiated in an August 2008 article in "The New York Times", which reported that in mid-2004 Richard F. Syron, the CEO of Freddie Mac, received a memo from David Andrukonis, the company's former chief risk officer, warning him that Freddie Mac was financing risk-laden loans that threatened Freddie Mac's financial stability. In his memo, Mr. Andrukonis wrote that these loans "would likely pose an enormous financial and reputational risk to the company and the country". The article revealed that more than two dozen high-ranking executives said that Mr. Syron had simply decided to ignore the warnings. Other cautions came as early as 2001, when the late Federal Reserve governor Edward Gramlich warned of the risks posed by subprime mortgages. In September 2003, at a hearing of the House Financial Services Committee, Congressman Ron Paul identified the housing bubble and foretold the difficulties it would cause: "Like all artificially-created bubbles, the boom in housing prices cannot last forever. When housing prices fall, homeowners will experience difficulty as their equity is wiped out. Furthermore, the holders of the mortgage debt will also have a loss
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble " Reuters reported in October 2007 that a Merrill Lynch analyst had also warned in 2006 that companies could suffer from their subprime investments. The "Economist" magazine stated, "The worldwide rise in house prices is the biggest bubble in history", so any explanation needs to consider its global causes as well as those specific to the United States. Then-Federal Reserve Board Chairman Alan Greenspan said in mid-2005 that "at a minimum, there's a little 'froth' (in the U.S. housing market) ... it's hard not to see that there are a lot of local bubbles"; Greenspan admitted in 2007 that "froth" "was a euphemism for a bubble". In early 2006, President Bush said of the U.S. housing boom: "If houses get too expensive, people will stop buying them ... Economies should cycle". Throughout the bubble period there was little if any mention of the fact that housing in many areas was (and still is) selling for well above replacement cost. On the basis of 2006 market data indicating a marked decline, including lower sales, rising inventories, falling median prices and increased foreclosure rates, some economists have concluded that the correction in the U.S. housing market began in 2006. A May 2006 "Fortune" magazine report on the US housing bubble states: "The great housing bubble has finally started to deflate ... In many once-sizzling markets around the country, accounts of dropping list prices have replaced tales of waiting lists for unbuilt condos and bidding wars over humdrum three-bedroom colonials
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble " The chief economist of Freddie Mac and the director of Joint Center for Housing Studies (JCHS) denied the existence of a national housing bubble and expressed doubt that any significant decline in home prices was possible, citing consistently rising prices since the Great Depression, an anticipated increased demand from the Baby Boom generation, and healthy levels of employment. However, some have suggested that the funding received by JCHS from the real estate industry may have affected their judgment. David Lereah, former chief economist of the National Association of Realtors (NAR), distributed "Anti-Bubble Reports" in August 2005 to "respond to the irresponsible bubble accusations made by your local media and local academics". Among other statements, the reports stated that people "should [not] be concerned that home prices are rising faster than family income", that "there is virtually no risk of a national housing price bubble based on the fundamental demand for housing and predictable economic factors", and that "a general slowing in the rate of price growth can be expected, but in many areas inventory shortages will persist and home prices are likely to continue to rise above historic norms". Following reports of rapid sales declines and price depreciation in August 2006, Lereah admitted that he expected "home prices to come down 5% nationally, more in some markets, less in others. And a few cities in Florida and California, where home prices soared to nose-bleed heights, could have 'hard landings'
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble " National home sales and prices both fell dramatically in March 2007 – the steepest plunge since the 1989 Savings and Loan crisis. According to NAR data, sales were down 13% to 482,000 from the peak of 554,000 in March 2006, and the national median price fell nearly 6% to $217,000 from a peak of $230,200 in July 2006. John A. Kilpatrick from Greenfield Advisors was cited by Bloomberg News on June 14, 2007, on the linkage between increased foreclosures and localized housing price declines: "Living in an area with multiple foreclosures can result in a 10 percent to 20 percent decrease in property values". He went on to say, "In some cases that can wipe out the equity of homeowners or leave them owing more on their mortgage than the house is worth. The innocent houses that just happen to be sitting next to those properties are going to take a hit." The US Senate Banking Committee held hearings on the housing bubble and related loan practices in 2006, titled "The Housing Bubble and its Implications for the Economy" and "Calculated Risk: Assessing Non-Traditional Mortgage Products". Following the collapse of the subprime mortgage industry in March 2007, Senator Chris Dodd, Chairman of the Banking Committee, held hearings and asked executives from the top five subprime mortgage companies to testify and explain their lending practices. Dodd said that "predatory lending" had endangered home ownership for millions of people
https://en.wikipedia.org/wiki?curid=1920610
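As a quick arithmetic check of the March 2007 NAR figures quoted above, the reported percentage declines follow directly from the level data; this is a simple illustrative calculation, not an additional data source.

    # Percentage declines implied by the NAR levels quoted above.
    def pct_decline(peak, current):
        return 100.0 * (peak - current) / peak

    print(round(pct_decline(554_000, 482_000), 1))   # 13.0  (drop in sales, "down 13%")
    print(round(pct_decline(230_200, 217_000), 1))   # 5.7   (drop in median price, "nearly 6%")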
United States housing bubble In addition, Democratic senators such as Senator Charles Schumer of New York were already proposing a federal government bailout of subprime borrowers in order to save homeowners from losing their residences. Home price appreciation was non-uniform to such an extent that some economists, including former Fed Chairman Alan Greenspan, argued that the United States was not experiencing a nationwide housing bubble "per se", but a number of local bubbles. However, in 2007 Greenspan admitted that there was in fact a bubble in the U.S. housing market, and that "all the froth bubbles add up to an aggregate bubble". Despite greatly relaxed lending standards and low interest rates, many regions of the country saw very little price appreciation during the "bubble period". Out of the 20 largest metropolitan areas tracked by the S&P/Case-Shiller house price index, six (Dallas, Cleveland, Detroit, Denver, Atlanta, and Charlotte) saw less than 10% price growth in inflation-adjusted terms in 2001–2006. During the same period, seven metropolitan areas (Tampa, Miami, San Diego, Los Angeles, Las Vegas, Phoenix, and Washington, D.C.) appreciated by more than 80%. However, housing bubbles did not manifest themselves in each of these areas at the same time. San Diego and Los Angeles had maintained consistently high appreciation rates since the late 1990s, whereas the Las Vegas and Phoenix bubbles did not develop until 2003 and 2004, respectively
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble It was on the East Coast, the more populated part of the country, that the real estate turmoil was the worst. Somewhat paradoxically, as the housing bubble deflated, some metropolitan areas (such as Denver and Atlanta) experienced high foreclosure rates, even though they did not see much house appreciation in the first place and therefore did not appear to be contributing to the national bubble. This was also true of some cities in the Rust Belt such as Detroit and Cleveland, where weak local economies had produced little house price appreciation early in the decade but still saw declining values and increased foreclosures in 2007. As of January 2009, California, Michigan, Ohio and Florida were the states with the highest foreclosure rates. By July 2008, year-to-date prices had declined in 24 of 25 U.S. metropolitan areas, with California and the Southwest experiencing the greatest price falls. According to the reports, only Milwaukee had seen an increase in house prices after July 2007. Prior to the real estate market correction of 2006–2007, the unprecedented increase in house prices starting in 1997 produced numerous wide-ranging effects in the economy of the United States. These trends were reversed during the real estate market correction of 2006–2007. As of August 2007, D.R. Horton's and Pulte Corp's shares had fallen to one-third of their respective peak levels as new residential home sales fell
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble Some of the cities and regions that had experienced the fastest growth during 2000–2005 began to experience high foreclosure rates. It was suggested that the weakness of the housing industry and the loss of the consumption that had been driven by the withdrawal of mortgage equity could lead to a recession, but as of mid-2007 the existence of this recession had not yet been ascertained. In March 2008, Thomson Financial reported that the "Chicago Federal Reserve Bank's National Activity Index for February sent a signal that a recession [had] probably begun". The share prices of Fannie Mae and Freddie Mac plummeted in 2008 as investors worried that they lacked sufficient capital to cover the losses on their $5 trillion portfolio of loans and loan guarantees. On June 16, 2010, it was announced that Fannie Mae and Freddie Mac would be delisted from the New York Stock Exchange; shares now trade on the over-the-counter market. Basing their statements on historic U.S. housing valuation trends, in 2005 and 2006 many economists and business writers predicted market corrections ranging from a few percentage points to 50% or more from peak values in some markets, and although this cooling had not yet affected all areas of the U.S., some warned that it still could, and that the correction would be "nasty" and "severe". Chief economist Mark Zandi of the economic research firm Moody's Economy.com predicted a "crash" of double-digit depreciation in some U.S. cities by 2007–2009
https://en.wikipedia.org/wiki?curid=1920610
United States housing bubble In a paper he presented to a Federal Reserve Board economic symposium in August 2007, Yale University economist Robert Shiller warned, "The examples we have of past cycles indicate that major declines in real home prices—even 50 percent declines in some places—are entirely possible going forward from today or from the not-too-distant future." To better understand how the mortgage crisis played out, a 2012 report from the University of Michigan analyzed data from the Panel Study of Income Dynamics (PSID), which surveyed roughly 9,000 representative households in 2009 and 2011. The data seem to indicate that, while conditions were still difficult, in some ways the crisis was easing: over the period studied, the percentage of families behind on mortgage payments fell from 2.2% to 1.9%; homeowners who thought it was "very likely or somewhat likely" that they would fall behind on payments fell from 6% to 4.6% of families. On the other hand, families' financial liquidity decreased: "As of 2009, 18.5% of families had no liquid assets, and by 2011 this had grown to 23.4% of families." By mid-2016, the national housing price index was "about 1 percent shy of that 2006 bubble peak" in nominal terms but 20% below in inflation-adjusted terms. In March 2007, the United States' subprime mortgage industry collapsed due to higher-than-expected home foreclosure rates, with more than 25 subprime lenders declaring bankruptcy, announcing significant losses, or putting themselves up for sale
https://en.wikipedia.org/wiki?curid=1920610